A real time aggressive human behaviour detection
system in cage environment across multiple cameras
Phooi Yee Lau*
Faculty of Information and Communication Technology,
Universiti Tunku Abdul Rahman,
1, Jalan Universiti, Bandar Barat,
31900 Kampar, Perak, Malaysia
Email: laupy@utar.edu.my
*Corresponding author
Hock Woon Hon, Zulaikha Kadim and
Kim Meng Liang
MIMOS Bhd,
Technology Park Malaysia,
Kuala Lumpur, Malaysia
Email: hockwoon.hon@mimos.my
Email: zulaikha.kadim@mimos.my
Email: liang.kimmeng@mimos.my
Abstract: Monitoring activities in enclosed cage environments to detect abnormalities such as aggressive behaviour, employing real-time video analysis technology, has become an emerging and challenging problem. Such a system should be able: 1) to track individuals; 2) to identify their actions; 3) to keep a record of how often aggressive behaviour happens at the scene. On top of that, the system should run in real-time, taking the following limitations into consideration: 1) viewing angle (fish-eye); 2) low resolution; 3) number of people; 4) low lighting (normal); 5) number of cameras. This paper proposes a vision-based system that is able to monitor aggressive activities of individuals in an enclosed cage environment using multiple cameras under the above-mentioned conditions. Experimental results show that the proposed system is easily realised and achieves impressive real-time performance, even on low-end computers.
Keywords: surveillance system; behaviour monitoring; perspective correction;
background subtraction; real-time video processing.
Reference to this paper should be made as follows: Lau, P.Y., Hon, H.W.,
Kadim, Z. and Liang, K.M. (xxxx) ‘A real time aggressive human behaviour
detection system in cage environment across multiple cameras’, Int. J.
Computational Vision and Robotics, Vol. X, No. Y, pp.xxx–xxx.
Biographical notes: Phooi Yee Lau received her BCompSc from Universiti Teknologi Malaysia in 1996, her MCompSc from Universiti Malaya, Malaysia in 2002, and her PhD in Engineering from Keio University, Japan in 2006. From 2009 to 2011, with a BK21 Fellowship Grant from the Korean Government, she was attached as a researcher to the Convergence Communications Laboratory, Hanyang University, Republic of Korea, and was previously a post-doc researcher from 2006 to 2008, under a Portuguese Government Grant, at the Multimedia Signal Processing Group (previously Image Group) of Instituto de Telecomunicações, Portugal. Her current research interests include multimedia signal processing and intelligent systems.
Hock Woon Hon is a Senior Principal Researcher and the Head of the Advanced Informatics Lab at MIMOS Berhad, a national ICT research centre in Malaysia. He received his Bachelor's degree in Electrical and Electronic Engineering and his Doctorate from Nottingham Trent University in 1997 and 2000, respectively. His main research area is imaging/image processing, including intelligent surveillance, 3D visualisation and X-ray imaging. He has published a number of journal papers (IEE, NDT&T) and has filed a number of patents in the area of image processing, locally and internationally.
Zulaikha Kadim received her degree in Engineering from Multimedia University, Malaysia in 2000. Subsequently, she received her Master's degree in 2004 from the same university and is currently pursuing her PhD in Computer Systems Engineering at Malaysia National University (UKM). She is currently a researcher at MIMOS Berhad, a national R&D institution. Her research interests include object detection and tracking, and video analytics.
Kim Meng Liang is a Principal Researcher in the Advanced Informatics Department at MIMOS Berhad. He graduated with an MS in Image Processing in 2003. He is certified in Green Belt Six Sigma, TRIZ (a problem solving methodology) and infrared thermography. With his vast knowledge in image processing and pattern recognition, he has more than 50 patents and 20 white papers filed under his name.
This paper is a revised and expanded version of a paper entitled ‘GuARD: a
real-time system for detecting aggressive human behaviour in cage
environment’ presented at the Multi-disciplinary Trends in Artificial
Intelligence: 11th International Workshop (MIWAI 2017), Gadong, Brunei,
20–22 November 2017.
1 Introduction
Recent work in vision-based surveillance systems aims to learn about the presence and the behaviour of a person in a pre-determined or closed environment (Haritaoglu et al., 2000; Chen et al., 2008; Ouanane et al., 2012; Theodoridis and Hu, 2013). These works often focus on monitoring activities such as violent behaviour, usually processing the scene in a fully automatic manner for surveillance purposes. These systems also often come with a well-designed alarm, triggered depending on the situations defined, and can connect to remote security control centres. Among these video surveillance systems, some are devoted to using low-cost off-the-shelf cameras (Haritaoglu et al., 2000).
In the past, CCTV was often used as a surveillance tool deployed together with security guards monitoring the captured scene. Nevertheless, humans are poor at remaining alert for long periods of time, and this has limited human participation in the detection chain, especially in 24/7 systems. As such, the vast majority of CCTV footage remains unmonitored, and it is unlikely that incidents are detected while they are happening. Only after a serious crime has happened are those videos used to ascertain what occurred, reducing CCTV to a trace-driven tool for verification or support.
In 2008, Chen reported that video surveillance has become a self-reporting tool with the ability to detect and to monitor potential aggressive behaviour (Chen et al., 2008). His work describes a framework to recognise aggressive behaviour using local binary motion descriptors. However, aggressive behaviour in his work entails the involvement of an object, e.g., a chair, as an aggressive action by itself is difficult to notice due to occlusion. In 2012, Ouanane et al. proposed to recognise boxing actions as aggressive behaviour. Their work is based on a geometrical approach associated with shape representation to recognise an aggressive human gesture. However, the work cannot resolve occlusion when more than one person is present at the scene (Ouanane et al., 2012). In 2013, Theodoridis and Hu investigated both the recognition and modelling of aggressive behaviour using kinematic and electromyographic performance data. Their primary objective was to develop a recognition system capable of modelling and classifying aggressive behaviour using genetic programming, decomposing an action set into action groups to evolve specialised taxonomers for each behaviour. Nonetheless, no real-time system implementation was discussed (Theodoridis and Hu, 2013). In 2015, Lyu and Yang proposed a violence detection algorithm based on local spatio-temporal points and the optical flow method (Lyu and Yang, 2015). Their method is able to detect aggressive actions regardless of the context and the number of persons involved. However, no real-time system implementation was discussed.
In recent years, vision-based action recognition has worked on classifying human behaviour based on human actions (Chen et al., 2008; Lau et al., 2017; Chang et al., 2010). As human behaviours vary greatly, developing a video codebook can be time-consuming and tedious, as too many code words, or too few, will hurt recognition performance, especially for real-time systems. Building a mature behaviour tracking system across multiple cameras has also been attempted (Chang et al., 2010), with such systems deployed especially to handle crowded areas. Having similar scenarios in our case study, we propose a cooperative detection scheme across multiple cameras in our system to:
1 increase detection accuracy
2 reduce false positives arising from crowdedness and occlusions.
In our case, deploying multiple cameras, especially in an enclosed cage environment, allows the system to understand how human interaction takes place, even in crowded conditions, to fully detect aggressive behaviour.
In this paper, we propose a new framework, named GuARD, to extract candidate event(s) in an image and to classify them as potential aggressive behaviour. GuARD is a surveillance system for detecting potential violent behaviour in a scene, named aggressive-behaviour-like region(s), in a cage environment. The proposed work is useful in multiple ways, as it is able to:
1 analyse input scenes from multiple cameras in real-time
2 raise an alarm when aggressive-behaviour-like region(s) is detected using a cooperative detection scheme
3 record the decision triggered in (2).
The remainder of this paper is organised as follows: Section 2 outlines the GuARD framework; Section 3 presents the implementation with analysis; and Section 4 concludes the paper with recommendations for future work.
2 GuARD framework
The guided aggressive behaviour detection system, abbreviated as GuARD, is developed using the OpenCV library, which is widely used in real-time computer vision applications. The GuARD system flow is illustrated in Figure 1. In Step 1, the video acquisition set-up is discussed. In Step 2, for each input scene, we obtain foreground region(s), being candidate region(s), using background subtraction techniques that extract the moving regions. In Step 3, the resultant image from Step 2 is thresholded using Tf, a value which represents the speed of the motion detected, obtained through rigorous testing. In Step 4, we compensate the non-uniform perspective in the images, obtained using corner-mount cameras, by rotating each image until the perspective can be represented horizontally, i.e., parts further away from the camera appear smaller and areas closer to the camera appear larger. Step 5 divides the image into two grids, whereby candidate region(s) from Step 4 that fall in the far grid are compensated with additional pixels to allow a fair quantitative study of all candidate regions. Step 6 first pre-classifies all the candidate region(s) into aggressive-behaviour-like region(s) or non-aggressive-behaviour-like region(s), for each camera. Then, from the two sets of aggressive-behaviour-like region(s), one from camera1 and another from camera2, we classify these regions based on the proposed cooperative detection scheme, a majority voting system. Regions classified as aggressive-behaviour-like region(s) here are stored in the system and an alarm is provided, as the output, to the system administrator.
Figure 1 GuARD framework: Step 1 (image acquisition) → Step 2 (fast motion detection) → Step 3 (change detection) → Step 4 (perspective correction) → Step 5 (scale correction) → Step 6 (decision)
2.1 Step 1: Image acquisition
An experimental set-up that enables subsequent acquisition of real-time data for analysis is described [see Figure 2(a)]. Two corner-mount cameras, with the average vertical and horizontal field-of-view (FoV) set to 80–91 degrees and 100–120 degrees respectively, are considered in an enclosed cage environment. This large FoV enables the whole cage scenario to be monitored, with minimum blind spots, from each camera. Camera1 is installed at the top-left corner, while camera2 is installed at the top-right corner, as indicated in Figure 2(a). To prevent high-resolution scenes from greatly slowing down the performance of the system, the input image is resized to 320 × 240 pixels, in RGB colour format.
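As a minimal sketch of this acquisition step, the snippet below assumes two locally attached cameras (the device indices 0 and 1 are placeholders for the actual corner-mount sources) and uses OpenCV to downscale each frame to 320 × 240:

```python
import cv2

# Hypothetical capture sources; real device indices or stream URLs depend on the installation.
cap1 = cv2.VideoCapture(0)  # camera1, top-left corner mount
cap2 = cv2.VideoCapture(1)  # camera2, top-right corner mount

def acquire_frame(cap):
    """Grab one frame and resize it to 320 x 240 so high resolution does not slow the system."""
    ok, frame = cap.read()
    if not ok:
        return None
    return cv2.resize(frame, (320, 240))
```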
Figure 2 System output: (a) multi-camera set-up for monitoring; (b) Step 3: change detection for camera1; (c) Step 4: perspective correction for camera1; (d) Step 5: correction concept for camera1; (e) Step 5: scale correction for camera1; (f) Step 6: individual region analysis for camera1 (see online version for colours)
2.2 Step 2: Fast motion detection
Here, at first, the acquired image is pre-processed to enhance the contrast using the contrast-limited adaptive histogram equalisation (CLAHE) method. Later, in this step, a forward motion estimation method is used to obtain candidate region(s) for each input scene $I_t(x, y)$. The current image, $I_{t1}(x, y)$, is subtracted from a past image three frames apart, namely $I_{t2}(x, y)$, to obtain the estimated forward motion. The frame gap depends on the choice of past frame selected, such as selecting every fifth frame from the input sequence (see Section 3 for further explanations).
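A minimal sketch of this step, assuming OpenCV's CLAHE on the greyscale frame and a rolling buffer that keeps the frame three steps back; the clip limit and tile size below are illustrative assumptions, not values from the paper, and in practice one buffer would be kept per camera:

```python
from collections import deque

import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed parameters
history = deque(maxlen=4)  # the current frame plus the three previous processed frames

def fast_motion(frame):
    """Enhance contrast, then difference the current frame against the one three frames back."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    history.append(clahe.apply(gray))
    if len(history) < history.maxlen:
        return None  # not enough history yet
    return cv2.absdiff(history[-1], history[0])  # estimated forward motion between I_t1 and I_t2
```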
2.3 Step 3: Change detection
A threshold parameter is applied to obtain the binary motions between consecutive frames, being the candidate region(s) from Step 2. Taking into account that $I_{t1}(x, y)$ and $I_{t2}(x, y)$ are from the same source, a forward motion analysis can be applied to estimate the forward motion information by applying equation (1). To filter the motion information candidate region(s), a forward threshold value named $T_f$ is applied, being the speed of change itself. The resultant image presents the aggressive behaviour candidate region(s). After rigorous testing, $T_f$ is set to 40 – see Figure 2(b).
$CD(I_{t1}, I_{t2}) = I_{t1}(x, y) - I_{t2}(x, y)$ (1)
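A sketch of the thresholding, using the $T_f = 40$ value reported above on the motion image from Step 2:

```python
import cv2

def change_detection(motion_img, tf=40):
    """Keep only pixels whose frame-to-frame change exceeds Tf (the speed of change)."""
    _, binary = cv2.threshold(motion_img, tf, 255, cv2.THRESH_BINARY)
    return binary
```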
2.4 Step 4: Perspective correction
In Step 4, we compensate the non-uniform perspective in the resultant image from Step 3 by rotating the image until the perspective can be represented horizontally, i.e., parts further away from the camera appear at the far side and areas closer to the camera at the near side, to allow quantitative evaluation of the cage environment, see Figure 2(c). The rotation angle should take into consideration that a pixel further from the camera represents a larger physical area; this step therefore prepares to compensate the pixel values, especially for those pixel(s) further away from each camera (Wakefield and Genin, 1987).
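A minimal sketch of the rotation, assuming an affine rotation about the image centre; the paper does not report the rotation angle, so the per-camera angle below is a hypothetical calibration parameter:

```python
import cv2

def perspective_correction(img, angle_deg):
    """Rotate the image so the camera's depth axis runs horizontally.
    angle_deg is a per-camera value found empirically; it is not given in the paper."""
    h, w = img.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(img, rot, (w, h))
```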
2.5 Step 5: Scale correction
In Step 5, the resultant image from Step 4 undergoes a perspective difference correction, calculated at the candidate region(s) bounding box centroid location. This method overlays two grids, i.e., grid A and grid B, to trade off between the actual size of the area covered in the image and the size acquired through the imaging device – see Figures 2(d) and 2(e). In Figure 2(d), notice that, for the same person, a region further away from the camera source (area B) covers a much smaller image area, and vice versa, for each camera. After rigorous testing, region B pixel sizes are compensated 1.5 times. As an example, if a region B candidate region has a pixel count of 60, its actual pixel region, after compensation, will be 90.
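A sketch of the compensation rule; the 1.5 factor and the 60 → 90 worked example come from the text, while representing the grid split as a single boundary coordinate on the rotated image is an assumption of ours:

```python
def scale_correction(pixel_count, centroid_x, grid_b_start_x, factor=1.5):
    """Compensate candidate regions whose bounding-box centroid falls in the far grid (grid B).
    grid_b_start_x: assumed x-coordinate where grid B begins after Step 4's rotation."""
    if centroid_x >= grid_b_start_x:      # region lies in grid B, far from the camera
        return pixel_count * factor       # e.g., a 60-pixel region is treated as 90 pixels
    return pixel_count
```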
2.6 Step 6: Decision
In this final step, we process all candidate regions and discard non-aggressive-behaviour-like region(s). The image(s) will be classified as containing aggressive behaviour if one or more aggressive-behaviour-like region(s) is detected from both corner-mount cameras. The image will later be stored and an alarm will be issued to warn the system administrator. In this step, candidate region(s) obtained in Step 5 are processed, taking into account features such as their area and their bounding box positions for each camera, i.e., the localised event.
• Area: herewith, aggressive behaviour is associated with large candidate region(s). Therefore, the threshold values Ty for grid A and grid B from Step 5, namely [A, B], are set to [60, 90] after rigorous experiments, proportional to the image size.
The condition above allows, for instance, discarding foreground candidate regions that correspond to noise, leaving only the aggressive-behaviour-like candidate regions. All candidate regions are further thresholded, using Tz, set to 50 after rigorous experiments, in a subsequent region-based background subtraction, being a more refined process – see Figure 2(f). Here, each aggressive-behaviour-like region decision serves as a candidate for the final decision.
• Cooperation: as described earlier, the purpose of this paper is to develop a vision-based system that is able to monitor aggressive activities of individuals using multiple cameras. The individual detection results for each image, i.e., from camera1 and camera2, are further analysed here. In this further analysis, we employ a cooperative detection scheme to:
1 increase detection accuracy
2 reduce false positives arising from crowdedness and occlusions – see Table 1.
Table 1 describes the decision as true positive only when the localised detection decisions from both cameras, at a given time, are true positive, indicating that an aggressive behaviour is detected; a sketch of this decision logic follows Table 1.
Table 1 Cooperative detection scheme for decision making

Event category          Events
Aggressive behaviour    Camera1: aggressive behaviour; Camera2: aggressive behaviour
Grouping formation      Type1 – Camera1: non-aggressive behaviour; Camera2: aggressive behaviour
                        Type2 – Camera1: aggressive behaviour; Camera2: non-aggressive behaviour
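The sketch below combines the per-camera area test with the two-camera vote of Table 1; the function and variable names are ours, and how exactly the [60, 90] thresholds interact with Step 5's compensation is not fully specified in the text, so the sketch simply applies the grid-dependent threshold to each region's pixel count:

```python
def classify_region(pixel_count, in_grid_b, ty=(60, 90)):
    """Pre-classify one candidate region for one camera using the grid-dependent area threshold Ty."""
    threshold = ty[1] if in_grid_b else ty[0]
    return pixel_count >= threshold  # True => aggressive-behaviour-like region

def cooperative_decision(cam1_detects, cam2_detects):
    """Final vote across the two cameras, following Table 1."""
    if cam1_detects and cam2_detects:
        return "aggressive behaviour"   # store the frame and raise the alarm
    if cam1_detects or cam2_detects:
        return "grouping formation"     # single-camera detection only
    return "non-aggressive"
```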
3 Experimental results
A system was developed to evaluate the performance of the GuARD framework discussed in Section 2. The processes were tested on an Intel Core i5 1.80 GHz with 4 GB of RAM. The evaluation includes analysing:
1 the success rate in detecting aggressive behaviour in a cage environment
2 the performance in terms of processing time and latency.
A total of seven different videos obtained with no additional lighting were provided (as
listed in Table 2) and they were evaluated based on the following conditions:
• different frame selection (processing) performance analysis for single camera
• different scenario performance analysis for single camera
• different resolution and performance analysis for single camera
• performance analysis across multiple cameras.
Table 2 Descriptions of videos used in experiments

Video1 – 4 minutes, 320 × 240: 6 persons in a cage environment; 2 scenes of 3 persons fighting.
Video2 – 13.51 minutes, 640 × 240: 7 persons in a cage environment; 2 scenes of 4 persons fighting; 3 scenes of 2 persons fighting; 1 scene of 3 persons fighting; 2 scenes of 6 persons fighting.
Video3 – 6 minutes, 320 × 240: 6 persons in a cage environment; 1 scene of 4 persons fighting; 2 scenes of 3 persons fighting; 1 scene of 2 persons fighting; 2 scenes of 6 persons fighting.
Video4 – 2 minutes, 640 × 480: 6 persons in a cage environment; 2 scenes of 4 persons fighting.
Video5 – 4 minutes, 640 × 480: 6 persons in a cage environment; 2 scenes of 4 persons fighting; 1 scene of 2 persons fighting.
Video6 – 10:26 minutes, 320 × 240: 6 persons in a cage environment; 2 scenes of 4 persons fighting; 2 scenes of 2 persons fighting; 2 scenes of 6 persons fighting.
Video7 – 10:29 minutes, 320 × 240: 6 persons in a cage environment; 2 scenes of 4 persons fighting; 2 scenes of 2 persons fighting; 2 scenes of 6 persons fighting.
3.1 Different frame selection (processing) performance analysis for single camera

In this experiment, we investigated the same scene with different frame selection options. A four-minute sequence, i.e., video1, with 320 × 240 resolution (15 fps), was selected for this experiment:
1 frame selection: processing every frame
2 frame selection: processing every fifth frame
3 frame selection: processing every tenth frame.
As shown in Figure 3, in order to achieve a real-time system, the acquired images should be processed, at least, every fifth frame; a frame-skipping sketch follows Figure 3.
Figure 3 Performance of different frame selection options and processing time (see online
version for colours)
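A minimal sketch of the frame-selection loop, assuming a cv2.VideoCapture source; step=5 corresponds to the 'every fifth frame' option found to be the minimum for real-time operation:

```python
import cv2

def run_guard(video_path, step=5):
    """Feed every step-th frame to the pipeline so processing keeps up with 15 fps input."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frame = cv2.resize(frame, (320, 240))
            # ... Steps 2-6 (motion, change detection, corrections, decision) run here ...
        index += 1
    cap.release()
```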
3.2 Different scenario performance analysis for single camera

In this experiment, we investigated different scenes with different numbers of aggressive behaviours involving different numbers of persons, using video2. As shown in Table 3, all scenes with different fighting characteristics were successfully detected.
Table 3 System performance: detection results for different aggressive behaviours and numbers of persons involved (camera1 detection result images omitted here; see online version for colours)

Scene 1 – 3 persons fighting
Scene 2 – 2 persons fighting, 3 groups
Scene 3 – 6 persons fighting, 1 group
Scene 4 – 6 persons fighting, 2 groups
3.3 Different scene resolution and performance analysis for single camera

Table 4 lists four selected videos employed to investigate the performance of our proposed work on scenes with different video resolutions and lengths. All selected videos contain different types of aggressive behaviour. The four selected videos are:
1 video1: four-minute video with 320 × 240 resolution (15 fps)
2 video3: six-minute video with 320 × 240 resolution (15 fps)
3 video4: two-minute video with 640 × 480 resolution (14 fps)
4 video5: four-minute video with 640 × 480 resolution (14 fps).
As shown in Table 4, the experimental results show that, for a real-time system, the resolution of the processed image should be, at most, 320 × 240.
Table 4 Performance of different input resolutions and durations

Input video 1 (4 min, 320 × 240): runs of 182, 180, 180, 182, 177, 181 and 179 s; average 180 seconds (3 minutes) to process the 4-minute video.
Input video 3 (6 min, 320 × 240): runs of 270, 293, 270, 276, 264, 270 and 271 s; average 273 seconds (4 minutes 33 seconds) to process the 6-minute video.
Input video 4 (2 min, 640 × 480): runs of 286, 262, 267, 266, 272, 263 and 266 s; average 268 seconds (4 minutes 28 seconds) to process the 2-minute video.
Input video 5 (4 min, 640 × 480): runs of 572, 531, 534, 529, 518, 528 and 538 s; average 535 seconds (8 minutes 55 seconds) to process the 4-minute video.
3.4 Performance analysis across multiple cameras

In this experiment, we investigated two different camera scenes:
1 the first scene is obtained from camera1
2 the second scene is obtained from camera2.
The selected videos contain different aggressive behaviours. Here, the camera1 and camera2 durations are 10:26 and 10:29, namely video6 and video7 respectively, with 320 × 240 resolution. These videos have been annotated, i.e., the aggressive behaviour appearing in the videos has been studied in detail by experts to obtain the ground truth, marked as the red line in the figures. The aggressive behaviour detection accuracy based on an individual camera is lower compared to the results obtained using the cooperative detection scheme – see Figures 4(a)–4(c).
In these figures, the y-axis represents the region's pixel size while the x-axis represents the frame number [note that the pixels related to grouping formation have been extensively removed in Figure 4(c)]. Referring to the detection results for the individual camera shown in Figure 4(a), i.e., camera1, the results contain many false positive errors because the aggressive behaviour happened at the far side of camera1. In the case of the detection results for camera2, the same aggressive behaviour happens near the camera, so detection is more accurate and the false positive errors are reduced, as shown in Figure 4(b).
Figure 4 System performance: (a) detection results for camera1; (b) detection results for camera2; (c) detection results with the cooperative detection scheme across multiple cameras (based on camera2 results) (see online version for colours). [Each panel plots region pixel value (0–8,000) against frame number t (0–1,900).]
To further explain this correspondence, the proposed cooperative detection scheme considers both aspects discussed earlier, and thus an improvement in overall detection can be seen – see Figure 5. Notice that Figure 4(c) shows the location of the aggressive behaviour with respect to camera2, i.e., mostly the aggressive behaviour detected happened nearer to camera2 (pixel values higher than 1,000 on the y-axis).
Figure 5 Detection of aggressive behaviour and corresponding position (a) in the graph, (b) in camera1 and (c) in camera2 (see online version for colours)
4 Discussion and conclusions
In this section, we further evaluate a 13.14 minute video with 320 × 240 resolutions,
namely camera1, and a 13.05 minute video with 320 × 240 resolutions, namely camera2,
respectively. These videos were annotated by experts, i.e., the ground-truth for the
aggressive behaviour has been obtained – see Table 5. Referring to the aggressive
behaviour detection from camera1: 2:56–3:15, the aggressive activity happens in the
‘front part of the camera’ or grid A, marked in green, see Figure 6(a). However, for the
aggressive behaviour detection from camera1:6:45–7:04, the aggressive behaviour
activity happens at the ‘far part of the camera’ or grid B, marked in blue, see Figure 6(a).
In this study, the detection results, based on a single camera, shows many falsely detected
aggressive behaviour – see Figure 6(b). In comparison, Figure 6(c) shows a much
improved detection results for aggressive behaviour.
Table 5 Ground truth for aggressive behaviour analysis (input video: 320 × 240 resolution)

Ground truth camera1    Ground truth camera2
2:56–3:15               2:57–3:15
3:33–3:52               3:35–3:53
4:37–5:00               4:38–5:02
5:25–5:43               5:25–5:44
6:44–7:00               6:45–7:04
7:09–7:30               7:15–7:34
8:21–8:40               8:26–8:45
8:54–9:17               8:55–9:23
Figure 6 System performance: (a) detection results for camera1; (b) detection results for camera2; (c) detection results with the cooperative detection scheme across multiple cameras (based on camera1 detection) (see online version for colours)
A practical framework is proposed in this work to develop a vision-based system that is able to monitor aggressive activities of individuals using multiple cameras. Figure 6(c) shows the improved detection accuracy using the proposed cooperative detection scheme. In general, it is now possible to study aggressive behaviour in a cage environment by employing intelligent video analysis technology. The experimental results indicate that aggressive behaviour can be effectively detected.
Table 6 Comparison with other works

Method                       Aggressive behaviour module   Multiple camera collaborative module    Real-time system
Chen et al. (2008)           Yes                           No                                      No
Ouanane et al. (2012)        Yes                           No                                      No
Theodoridis and Hu (2013)    Yes                           No                                      No
Chang et al. (2010)          Yes                           Yes, with calibrated cameras/tracking   No
Proposed work                Yes                           Yes, with uncalibrated cameras          Yes
Further, we compared our proposed system with other aggressive behaviour detection systems – see Table 6. In particular, the work of Chang et al., being closely related to the proposed work, uses four standard CCTV cameras:
1 three for tracking
2 one for PTZ targeting.
The multi-camera multi-target tracking system presented there is sophisticated, as it tracks individuals cooperatively in a synchronised manner, in real-time. The events of interest are then fed to the operator for group analysis and group activity recognition. However, that tracking system requires pre-calibrated scenes, and the cameras are mounted in an open space with high fencing and walls. In our case, such camera set-ups would be difficult to realise due to the enclosed cage environment with low fencing. For the specific case of real-time aggressive behaviour detection, where hardware (camera maker) and software (system maker) should work together, instead of individual systems mounted onto the scene, there is still a considerable amount of work ahead.
References
Chang, M-C., Krahnstoever, N., Lim, S. and Yu, T. (2010) ‘Group level activity recognition in
crowded environments across multiple cameras’, 2010 7th IEEE International Conference on
Advanced Video and Signal Based Surveillance, Boston, MA, pp.56–63.
Chen, D., Wactlar, H., Chen, M.Y., Gao, C., Bharucha, A. and Hauptmann, A. (2008) ‘Recognition
of aggressive human behavior using binary local motion descriptors’, 2008 30th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society,
Vancouver, BC, pp.5238–5241.
Haritaoglu, I., Harwood, D. and Davis, L.S. (2000) ‘W4: real-time surveillance of people and their
activities’, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 8,
pp.809–830.
Lau, P.Y., Hon, H.W., Kadim, Z. and Liang, K.M. (2017) ‘GuARD: a real-time system for
detecting aggressive human behavior in cage environment’, in Phon-Amnuaisuk, S., Ang, S.P.
and Lee, S.Y. (Eds.): Multi-Disciplinary Trends in Artificial Intelligence. Lecture Notes in
Computer Science, Vol. 10607, pp.315–322, Springer.
Lyu, Y. and Yang, Y. (2015) ‘Violence detection algorithm based on local spatio-temporal features
and optical flow’, 2015 International Conference on Industrial Informatics – Computing
Technology, Intelligent Technology, Industrial Information Integration, Wuhan, pp.307–311.
Ouanane, A., Serir, A. and Kerouh, F. (2012) ‘New geometric descriptor for the recognition of
aggressive human behavior’, 2012 5th International Congress on Image and Signal
Processing, Chongqing, pp.148–153.
Theodoridis, T. and Hu, H. (2013) ‘Modeling aggressive behaviors with evolutionary taxonomers’,
IEEE Transactions on Human-Machine Systems, Vol. 43, No. 3, pp.302–313.
Wakefield, W.W. and Genin, A. (1987) ‘The use of a Canadian (perspective) grid in deep-sea
photography’, Deep Sea Research Part A. Oceanographic Research Papers, Vol. 34, No. 3,
pp.469–478.
Modeling of dirac voltage for highly p doped graphene field effect transistor...Modeling of dirac voltage for highly p doped graphene field effect transistor...
Modeling of dirac voltage for highly p doped graphene field effect transistor...
 
Implementation of vehicle ventilation system using node mcu esp8266 for remot...
Implementation of vehicle ventilation system using node mcu esp8266 for remot...Implementation of vehicle ventilation system using node mcu esp8266 for remot...
Implementation of vehicle ventilation system using node mcu esp8266 for remot...
 

Recently uploaded

Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfjimielynbastida
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentationphoebematthew05
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsPrecisely
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 

Recently uploaded (20)

Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdf
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentation
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power Systems
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 

A real time aggressive human behaviour detection system in cage environment across multiple cameras

  • 1. Int. J. Computational Vision and Robotics, Vol. X, No. Y, xxxx 1 Copyright © 20XX Inderscience Enterprises Ltd. A real time aggressive human behaviour detection system in cage environment across multiple cameras Phooi Yee Lau* Faculty of Information and Communication Technology, Universiti Tunku Abdul Rahman, 1, Jalan Universiti, Bandar Barat, 31900 Kampar, Perak, Malaysia Email: laupy@utar.edu.my *Corresponding author Hock Woon Hon, Zulaikha Kadim and Kim Meng Liang MIMOS Bhd, Technology Park Malaysia, Kuala Lumpur, Malaysia Email: hockwoon.hon@mimos.my Email: zulaikha.kadim@mimos.my Email: liang.kimmeng@mimos.my Abstract: The monitoring of activities in the enclosed cage environments to detect abnormalities such as aggressive behaviour, employing a real-time video analysis technology, has become an emerging and challenging problem. Such system should be able: 1) to track individuals; 2) to identify their action; 3) to keep a record of how often the aggressive behaviour happened, at the scene. On top of that, the system should be implemented in real-time, whereby, the following limitations should be taken into consideration: 1) viewing angle (fish-eye); 2) low resolution; 3) number of people; 4) low lighting (normal); 5) number of cameras. This paper proposes to develop a vision-based system that is able to monitor aggressive activities of individuals in an enclosed cage environment using multiple cameras considering the above-mentioned conditions. Experimental results show that the proposed system is easily realised and achieved impressive real-time performance, even on low end computers. Keywords: surveillance system; behaviour monitoring; perspective correction; background subtraction; real-time video processing. Reference to this paper should be made as follows: Lau, P.Y., Hon, H.W., Kadim, Z. and Liang, K.M. (xxxx) ‘A real time aggressive human behaviour detection system in cage environment across multiple cameras’, Int. J. Computational Vision and Robotics, Vol. X, No. Y, pp.xxx–xxx. Biographical notes: Phooi Yee Lau received her BCompSc from the Universiti Teknologi Malaysia, Malaysia in 1996, MCompSc from the Universiti Malaya, Malaysia in 2002, and PhD in Engineering from the Keio University, Japan in 2006. From 2009 to 2011, with the BK21 Fellowship Grant from the Korean Government, was attached as a researcher at the Convergence Communications
2004 from the same university and is currently pursuing her PhD in Computer Systems Engineering at the Malaysia National University (UKM). She is currently a researcher at MIMOS Berhad, a national R&D institution. Her research interests include object detection and tracking, and video analytics.

Kim Meng Liang is a Principal Researcher in the Advanced Informatics Department at MIMOS Berhad. He graduated with an MS in Image Processing in 2003. He is certified in Green Belt Six Sigma, TRIZ (a problem-solving methodology) and infrared thermography. Drawing on his knowledge of image processing and pattern recognition, he has filed more than 50 patents and 20 white papers under his name.

This paper is a revised and expanded version of a paper entitled 'GuARD: a real-time system for detecting aggressive human behaviour in cage environment' presented at the Multi-disciplinary Trends in Artificial Intelligence: 11th International Workshop (MIWAI 2017), Gadong, Brunei, 20–22 November 2017.

1 Introduction

Recent work on vision-based surveillance systems aims to learn about the presence and the behaviour of a person in a pre-determined or closed environment (Haritaoglu et al., 2000; Chen et al., 2008; Ouanane et al., 2012; Theodoridis and Hu, 2013). These works often focus on monitoring activities such as violent behaviour, usually processing the scene fully automatically for surveillance purposes. Such systems also typically include a well-designed alarm, triggered in defined situations, that connects to remote security control centres. Among these video surveillance systems, some are devoted to using low-cost off-the-shelf cameras (Haritaoglu et al., 2000).

In the past, CCTV was often deployed as a surveillance tool together with security guards monitoring the captured scene. Nevertheless, humans are poor at remaining alert for long periods of time, which has limited human participation in the detection chain, especially in 24/7 systems. As such, the vast majority of CCTV footage remains unmonitored, and it is unlikely that incidents are detected while they are happening.
It is only after a serious crime has happened that those videos are used to ascertain what happened, reducing CCTV to a trace-driven tool for verification and support.

In 2008, Chen reported that video surveillance has become a self-reporting tool with the ability to detect and monitor potential aggressive behaviour (Chen et al., 2008). His work describes a framework to recognise aggressive behaviour using local binary motion descriptors. However, aggressive behaviour in his work entails the involvement of an object, e.g., a chair, as an aggressive action by itself is difficult to notice due to occlusion. In 2012, Ouanane et al. proposed recognising boxing actions as aggressive behaviour. Their work is based on a geometrical approach associated with shape representation to recognise aggressive human gestures. However, it cannot resolve occlusion when more than one person is present in the scene (Ouanane et al., 2012). In 2013, Theodoridis and Hu investigated both the recognition and modelling of aggressive behaviour using kinematic and electromyographic performance data. Their primary objective was to develop a recognition system capable of modelling and classifying aggressive behaviour using genetic programming, decomposing an action set into action groups to evolve specialised taxonomers for each behaviour. Nonetheless, no real-time system implementation was discussed (Theodoridis and Hu, 2013). In 2015, Lyu and Yang proposed a violence detection algorithm based on local spatio-temporal points and the optical flow method (Lyu and Yang, 2015). Their algorithm is able to detect aggressive actions regardless of the context and the number of persons involved. However, again, no real-time system implementation was discussed.

In recent years, vision-based action recognition has classified human behaviour based on human actions (Chen et al., 2008; Lau et al., 2017; Chang et al., 2010). As human behaviours vary greatly, developing a video cookbook can be time consuming and tedious, since too many code words, or too few, will hurt recognition performance, especially for real-time systems. Building a mature behaviour tracking system across multiple cameras has also been attempted (Chang et al., 2010), with such systems deployed especially to handle crowded areas. Facing similar scenarios in our case study, we propose a cooperative detection scheme across multiple cameras in our system to:

1 increase detection accuracy
2 reduce false positives arising from crowdedness and occlusions.

In our case, deploying multiple cameras in an enclosed cage environment allows the system to understand how human interaction takes place, even in crowded conditions, and thus fully detect aggressive behaviour.

In this paper, we propose a new framework, named GuARD, to extract candidate event(s) in an image and classify them as potential aggressive behaviour. GuARD is a surveillance system for detecting potential violent behaviour in a scene, named aggressive-behaviour-like region(s), in a cage environment. The usefulness of this proposed work is manifold, as it is able to:

1 analyse multiple camera input scenes in real-time
2 raise an alarm when aggressive-behaviour-like region(s) are detected, using the cooperative detection scheme
3 record the decisions triggered in (2).
The remainder of this paper is organised as follows: Section 2 outlines the GuARD framework; Section 3 presents the implementation with analysis; and Section 4 concludes the paper with recommendations for future work.

2 GuARD framework

The guided aggressive behaviour detection system, abbreviated as GuARD, is developed using the OpenCV library, which is widely used in real-time computer vision applications. The GuARD system flow is illustrated in Figure 1. In Step 1, the video acquisition set-up is established. In Step 2, for each input scene, we obtain foreground region(s), being candidate region(s), using background subtraction techniques that extract the moving regions. In Step 3, the resultant image from Step 2 is thresholded using Tf, a value representing the speed of the detected motion, obtained through rigorous testing. In Step 4, we compensate for the non-uniform perspective in the images, obtained using corner-mounted cameras, by rotating the image until the perspective can be represented horizontally, i.e., the part further away from the camera appears smaller and the area closer to the camera appears larger. Step 5 divides the image into two grids, whereby candidate region(s) from Step 4 that fall in the far grid are compensated with additional pixels to allow a fair comparison across all candidate regions. Finally, Step 6 first pre-classifies all the candidate region(s) into aggressive-behaviour-like region(s) or non-aggressive-behaviour-like region(s), for each camera. Then, from the two sets of aggressive-behaviour-like region(s), one from camera1 and another from camera2, we classify these regions based on the proposed cooperative detection scheme, a majority voting system. Regions classified as aggressive-behaviour-like region(s) are stored in the system and an alarm is issued, as the output, to the system administrator.

Figure 1 GuARD framework (pipeline: Step 1 image acquisition → Step 2 fast motion detection → Step 3 change detection → Step 4 perspective correction → Step 5 scale correction → Step 6 decision)

2.1 Step 1: Image acquisition

An experimental setup that enables subsequent acquisition of real-time data for analysis is described [see Figure 2(a)]. Two corner-mounted cameras, with average vertical and horizontal fields-of-view (FoV) of 80 to 91 degrees and 100 to 120 degrees respectively, are considered in an enclosed cage environment. This large FoV enables the whole cage scenario to be monitored, with minimal blind spots, from each camera. Camera1 is installed at the top-left corner, while camera2 is installed at the top-right corner, as indicated in Figure 2(a). To prevent high-resolution scenes from greatly slowing down the performance of the system, the input image is resized to 320 × 240 pixels, in RGB colour format.
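As a concrete illustration of this set-up, the following is a minimal Python/OpenCV sketch of a two-camera acquisition loop; the camera indices (0 and 1) and the use of cv2.VideoCapture as the source are placeholders, since the paper does not specify the capture interface.

import cv2

CAM_SOURCES = [0, 1]        # placeholders for the two corner-mounted cameras
FRAME_SIZE = (320, 240)     # working resolution, as (width, height)

caps = [cv2.VideoCapture(src) for src in CAM_SOURCES]

def grab_frames():
    """Step 1: grab one resized frame per camera; None if any grab fails."""
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:
            return None
        frames.append(cv2.resize(frame, FRAME_SIZE))
    return frames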
Figure 2 System output, (a) multi-camera set-up for monitoring (b) Step 3: change detection for camera1 (c) Step 4: perspective correction for camera1 (d) Step 5: correction concept for camera1 (e) Step 5: scale correction for camera1 (f) Step 6: individual region analysis for camera1 (see online version for colours)

2.2 Step 2: Fast motion detection

Here, the acquired image is first pre-processed to enhance contrast using the contrast-limited adaptive histogram equalisation (CLAHE) method. Then a forward motion estimation method is used to obtain candidate region(s) for each input scene It(x, y). The past image It2(x, y), three frames earlier, is subtracted from the current image It1(x, y) to obtain the estimated forward motion. The gap here depends on the choice of past frame selected, such as selecting every fifth frame from the input sequence (see Section 3 for further explanation).

2.3 Step 3: Change detection

A threshold parameter is applied to obtain the binary motion between consecutive frames, being the candidate region(s) from Step 2. Taking into account that It1(x, y) and It2(x, y) are from the same source, a forward motion analysis can be applied to estimate the forward motion information using equation (1):

CD(It1, It2) = It1(x, y) – It2(x, y) (1)

To filter these motion-information candidate region(s), a forward threshold value Tf is applied, representing the speed of change itself. The resultant image presents the aggressive behaviour candidate region(s). After rigorous testing, Tf is set to 40 – see Figure 2(b).
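Where the paper leaves implementation details open, the following is a minimal sketch of Steps 2 and 3 under stated assumptions: the CLAHE clip limit and tile size are illustrative defaults, grayscale conversion before differencing is assumed, and cv2.absdiff stands in for the signed difference of equation (1).

import collections
import cv2

TF = 40          # forward threshold Tf (Section 2.3)
FRAME_GAP = 3    # It2 is the frame three frames before It1 (Section 2.2)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
history = collections.deque(maxlen=FRAME_GAP + 1)

def detect_motion(frame_bgr):
    """Steps 2-3: contrast enhancement, forward differencing, thresholding.
    Returns a binary mask of fast-motion candidate regions, or None until
    enough frame history has accumulated."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = clahe.apply(gray)   # contrast-limited adaptive histogram equalisation
    history.append(gray)
    if len(history) <= FRAME_GAP:
        return None
    # CD(It1, It2) = It1(x, y) - It2(x, y); the absolute difference is used
    # here so that motion in either direction survives the threshold.
    diff = cv2.absdiff(history[-1], history[0])
    _, mask = cv2.threshold(diff, TF, 255, cv2.THRESH_BINARY)
    return mask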
2.4 Step 4: Perspective correction

In Step 4, we compensate for the non-uniform perspective in the resultant image from Step 3 by rotating the image until the perspective can be represented, i.e., the part further away from the camera is far and the area closer to the camera is near, to allow quantitative evaluation of the cage environment, see Figure 2(c). The rotation angle should take into consideration that each pixel represents more of the scene further from the camera, and this step prepares to compensate the pixel values, especially for region(s) further away from each camera (Wakefield and Genin, 1987).

2.5 Step 5: Scale correction

In Step 5, the resultant image from Step 4 undergoes a perspective difference correction, calculated at the centroid of each candidate region's bounding box. This method overlays two grids, i.e., grid A and grid B, to trade off the actual size of the area covered in the scene against the size acquired through the imaging device – see Figures 2(d) and 2(e). In Figure 2(d), notice that for the same person, a region further away from the camera source (area B) covers a much smaller image area, and vice versa, for each camera. After rigorous testing, region B pixel counts are compensated by 1.5 times. For example, if a region B candidate region's pixel count is 60, its actual pixel region, after compensation, will be 90.

2.6 Step 6: Decision

In this final step, we process all candidate regions and discard the non-aggressive-behaviour-like region(s). An image is classified as containing aggressive behaviour if one or more aggressive-behaviour-like region(s) is detected from both corner-mounted cameras. Such an image is then stored and an alarm is issued to warn the system administrator. In this step, the candidate region(s) obtained in Step 5 are processed taking into account features such as their area and their bounding box positions for each camera, i.e., localised events.

• Area: herewith, aggressive behaviour is associated with large candidate region(s). Therefore, the threshold values Ty for grid A and grid B from Step 5, namely [A, B], are set after rigorous experiments to [60, 90], proportional to the image size. This condition allows, for instance, discarding foreground candidate regions that correspond to noise, leaving only the aggressive-behaviour-like candidate regions. All candidate regions are further thresholded using Tz, set to 50 after rigorous experiments, in a subsequent region-based background subtraction, being a more refined process – see Figure 2(f). Here, each aggressive-behaviour-like region decision serves as a candidate for the final decision.

• Cooperation: as described earlier, the purpose of this paper is to develop a vision-based system able to monitor the aggressive activities of individuals using multiple cameras. The individual detection results for each image, i.e., from camera1 and camera2, are further analysed here. In this further analysis, we employ a cooperative detection scheme to: 1) increase detection accuracy; 2) reduce false positives arising from crowdedness and occlusions – see Table 1 (a sketch of this decision rule follows the table).

Table 1 describes the decision as true positive only when both localised detection decisions, at a given time, from both cameras are true positive, indicating that an aggressive behaviour has been detected.

Table 1 Cooperative detection scheme for decision making

Aggressive behaviour: camera1 – aggressive behaviour; camera2 – aggressive behaviour.
Grouping formation, type 1: camera1 – non-aggressive behaviour; camera2 – aggressive behaviour.
Grouping formation, type 2: camera1 – aggressive behaviour; camera2 – non-aggressive behaviour.
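Putting Steps 4–6 together, the sketch below applies a rotation for perspective correction, the far-grid area compensation, and the Table 1 cooperation rule. The rotation angle and the grid boundary row are placeholders (the paper does not give their values), cv2.findContours stands in for the paper's region extraction, and the subsequent Tz-based refinement is omitted.

import cv2

GRID_BOUNDARY_Y = 120     # hypothetical row dividing far grid B from near grid A
FAR_SCALE = 1.5           # far-grid areas compensated 1.5x (Step 5)
TY = {'A': 60, 'B': 90}   # area thresholds Ty = [60, 90] for grids [A, B] (Step 6)

def correct_perspective(mask, angle_deg):
    """Step 4: rotate so the perspective runs horizontally; the angle is
    camera-specific and assumed to be known from the set-up."""
    h, w = mask.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(mask, M, (w, h))

def aggressive_regions(mask):
    """Steps 5-6 (area test): keep compensated regions no smaller than Ty."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        grid = 'B' if (y + h // 2) < GRID_BOUNDARY_Y else 'A'
        area = cv2.contourArea(c)
        if grid == 'B':
            area *= FAR_SCALE          # e.g., 60 px in grid B counts as 90
        if area >= TY[grid]:
            kept.append((x, y, w, h))
    return kept

def cooperative_decision(mask_cam1, mask_cam2):
    """Table 1 rule: alarm only when both cameras report aggressive-
    behaviour-like regions; a single-camera hit is grouping formation."""
    hit1 = bool(aggressive_regions(mask_cam1))
    hit2 = bool(aggressive_regions(mask_cam2))
    if hit1 and hit2:
        return 'aggressive behaviour'   # frame is stored and the alarm raised
    if hit1 or hit2:
        return 'grouping formation'
    return 'no event'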
3 Experimental results

A system was developed to evaluate the performance of the GuARD framework discussed in Section 2. The processes were tested on an Intel Core i5 1.80 GHz with 4 GB of RAM. The evaluation includes analysing:

1 the success rate in detecting aggressive behaviour in the cage environment
2 the performance in terms of processing time and latency.

A total of seven different videos, obtained with no additional lighting, were provided (as listed in Table 2) and were evaluated under the following conditions:

• different frame selection (processing) performance analysis for a single camera
• different scenario performance analysis for a single camera
• different resolution and performance analysis for a single camera
• performance analysis across multiple cameras.
Table 2 Descriptions of the videos used in the experiments

Video1 – 4 minutes, 320 × 240: 6 persons in a cage environment; 2 scenes of 3 persons fighting.
Video2 – 13.51 minutes, 640 × 240: 7 persons in a cage environment; 2 scenes of 4 persons fighting; 3 scenes of 2 persons fighting; 1 scene of 3 persons fighting; 2 scenes of 6 persons fighting.
Video3 – 6 minutes, 320 × 240: 6 persons in a cage environment; 1 scene of 4 persons fighting; 2 scenes of 3 persons fighting; 1 scene of 2 persons fighting; 2 scenes of 6 persons fighting.
Video4 – 2 minutes, 640 × 480: 6 persons in a cage environment; 2 scenes of 4 persons fighting.
Video5 – 4 minutes, 640 × 480: 6 persons in a cage environment; 2 scenes of 4 persons fighting; 1 scene of 2 persons fighting.
Video6 – 10:26 minutes, 320 × 240: 6 persons in a cage environment; 2 scenes of 4 persons fighting; 2 scenes of 2 persons fighting; 2 scenes of 6 persons fighting.
Video7 – 10:29 minutes, 320 × 240: 6 persons in a cage environment; 2 scenes of 4 persons fighting; 2 scenes of 2 persons fighting; 2 scenes of 6 persons fighting.

3.1 Different frame selection (processing) performance analysis for a single camera

In this experiment, we investigated the same scene under different frame selections. A four-minute sequence, video1, at 320 × 240 resolution (15 fps), was used with three frame selection options:

1 processing every frame
2 processing every fifth frame
3 processing every tenth frame.

As shown in Figure 3, to achieve a real-time system, the acquired images should be processed, at most, every fifth frame.

Figure 3 Performance of different frame selection options and processing time (see online version for colours)
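As a rough way to reproduce this comparison, the sketch below replays a clip while processing only every Nth frame and times the processed frames; 'video1.avi' and the pipeline callable are placeholders for the recorded clip and the GuARD steps sketched above.

import time
import cv2

def timed_run(video_path, every_n, pipeline):
    """Replay a clip, run `pipeline` on every Nth frame only, and return the
    mean per-processed-frame cost in seconds."""
    cap = cv2.VideoCapture(video_path)
    costs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            t0 = time.perf_counter()
            pipeline(frame)            # Steps 2-6 applied to the selected frame
            costs.append(time.perf_counter() - t0)
        idx += 1
    cap.release()
    return sum(costs) / len(costs) if costs else 0.0

# e.g., compare the three frame selection options on the 15 fps clip:
# for n in (1, 5, 10):
#     print(n, timed_run('video1.avi', n, pipeline))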
3.2 Different scenario performance analysis for a single camera

In this experiment, we investigated different scenes involving different numbers of aggressive behaviour events and different numbers of persons, using video2. As shown in Table 3, all scenes with different fighting characteristics were successfully detected.

Table 3 System performance: detection results for different aggressive behaviours and numbers of persons involved (detection snapshots from camera1 are shown in the online version)

Scene 1 – 3 persons fighting: detected
Scene 2 – 2 persons fighting, 3 groups: detected
Scene 3 – 6 persons fighting, 1 group: detected
Scene 4 – 6 persons fighting, 2 groups: detected

3.3 Different scene resolution and performance analysis for a single camera

Table 4 lists four selected videos employed to investigate the performance of the proposed work on scenes with different video resolutions and lengths. All selected videos contain different types of aggressive behaviour. The four selected videos are:

1 video1: four-minute video at 320 × 240 resolution (15 fps)
2 video3: six-minute video at 320 × 240 resolution (15 fps)
3 video4: two-minute video at 640 × 480 resolution (14 fps)
4 video5: four-minute video at 640 × 480 resolution (14 fps).

As shown in Table 4, the experimental results indicate that, for a real-time system, the resolution of the processed image should be, at most, 320 × 240.

Table 4 Performance for different input resolutions and durations

Video1: runs of 182, 180, 180, 182, 177, 181 and 179 s – average 180 seconds (3 minutes) to process the 4-minute video.
Video3: runs of 270, 293, 270, 276, 264, 270 and 271 s – average 273 seconds (4 minutes 33 seconds) to process the 6-minute video.
Video4: runs of 286, 262, 267, 266, 272, 263 and 266 s – average 268 seconds (4 minutes 28 seconds) to process the 2-minute video.
Video5: runs of 572, 531, 534, 529, 518, 528 and 538 s – average 535 seconds (8 minutes 55 seconds) to process the 4-minute video.
3.4 Performance analysis across multiple cameras

In this experiment, we investigated two different camera scenes:

1 the first scene is obtained from camera1
2 the second scene is obtained from camera2.

The selected videos contain different aggressive behaviours. Here, camera1 and camera2 durations are 10:26 and 10:29, namely video6 and video7, respectively, both at 320 × 240 resolution. These videos have been annotated, i.e., the aggressive behaviour appearing in the videos has been studied in detail by experts to obtain the ground truth, marked as the red line in the figures. The aggressive behaviour detection accuracy based on an individual camera is lower compared to the results obtained using the cooperative detection scheme – see Figures 4(a)–4(c). In these figures, the y-axis represents the region's pixel size while the x-axis represents the frame number [note that the pixels related to grouping formation have been extensively removed in Figure 4(c)].

Referring to the detection results for an individual camera shown in Figure 4(a), i.e., camera1, the results contain many false positive errors because the aggressive behaviour happened at the far side of camera1. In the case of the detection results for camera2, the same aggressive behaviour happened near the camera, so detection is more accurate and the false positive errors are reduced, as shown in Figure 4(b).
Figure 4 System performance, (a) detection results for camera1 (b) detection results for camera2 (c) detection results with the cooperative detection scheme across multiple cameras (based on camera2 results) (see online version for colours); each panel plots the detected region's pixel value (y-axis, 0–8,000) against frame number t (x-axis, 0–1,900)
To further explain this correspondence, the proposed cooperative detection scheme considers both aspects discussed earlier, and is thus able to improve overall detection – see Figure 5. Notice that Figure 4(c) shows the location of the aggressive behaviour with respect to camera2, i.e., most of the detected aggressive behaviour happened nearer to camera2 (pixel values, on the y-axis, above 1,000).

Figure 5 Detection of aggressive behaviour and the corresponding position (a) in the graph, (b) in camera1 and (c) in camera2 (see online version for colours)

4 Discussion and conclusions

In this section, we further evaluate a 13.14-minute video at 320 × 240 resolution, namely camera1, and a 13.05-minute video at 320 × 240 resolution, namely camera2. These videos were annotated by experts, i.e., the ground truth for the aggressive behaviour was obtained – see Table 5. For the aggressive behaviour detected by camera1 at 2:56–3:15, the aggressive activity happens in the 'front part of the camera', or grid A, marked in green, see Figure 6(a). However, for the aggressive behaviour detected by camera1 at 6:45–7:04, the activity happens in the 'far part of the camera', or grid B, marked in blue, see Figure 6(a). In this study, the detection results based on a single camera show many falsely detected aggressive behaviours – see Figure 6(b). In comparison, Figure 6(c) shows much improved detection results for aggressive behaviour.

Table 5 Ground truth for aggressive behaviour analysis (320 × 240 resolution; intervals given as camera1 / camera2)

2:56–3:15 / 2:57–3:15
3:33–3:52 / 3:35–3:53
4:37–5:00 / 4:38–5:02
5:25–5:43 / 5:25–5:44
6:44–7:00 / 6:45–7:04
7:09–7:30 / 7:15–7:34
8:21–8:40 / 8:26–8:45
8:54–9:17 / 8:55–9:23
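The accuracy analysis against these annotations can be reproduced with a simple interval-overlap check; the sketch below is hypothetical (the paper does not describe its scoring procedure) and uses camera1's first three Table 5 intervals for illustration.

def to_seconds(mmss):
    """Convert an 'm:ss' ground-truth timestamp from Table 5 to seconds."""
    m, s = mmss.split(':')
    return int(m) * 60 + int(s)

def overlaps(detected, annotated):
    """True if a detected (start, end) interval intersects an annotated one."""
    return detected[0] <= annotated[1] and annotated[0] <= detected[1]

# Hypothetical check of one detected interval against camera1's ground truth:
ground_truth_cam1 = [('2:56', '3:15'), ('3:33', '3:52'), ('4:37', '5:00')]
gt = [(to_seconds(a), to_seconds(b)) for a, b in ground_truth_cam1]
detected = (to_seconds('3:40'), to_seconds('3:50'))
print(any(overlaps(detected, g) for g in gt))    # True: falls inside 3:33-3:52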
Figure 6 System performance, (a) detection results for camera1 (b) detection results for camera2 (c) detection results with the cooperative detection scheme across multiple cameras (based on camera1 detection) (see online version for colours)
A practical framework is proposed in this work to develop a vision-based system able to monitor the aggressive activities of individuals using multiple cameras. Figure 6(c) shows the improved detection accuracy using the proposed cooperative detection scheme. In general, it is now possible to study aggressive behaviour in a cage environment by employing intelligent video analysis technology. The experimental results indicate that aggressive behaviour can be effectively detected.

Further, we compared our proposed system with other aggressive behaviour detection systems – see Table 6.

Table 6 Comparison with other works

Chen et al. (2008): aggressive behaviour module – yes; multiple camera collaborative module – no; real-time system – no.
Ouanane et al. (2012): aggressive behaviour module – yes; multiple camera collaborative module – no; real-time system – no.
Theodoridis and Hu (2013): aggressive behaviour module – yes; multiple camera collaborative module – no; real-time system – no.
Chang et al. (2010): aggressive behaviour module – yes; multiple camera collaborative module – yes, with calibrated cameras/tracking; real-time system – no.
Proposed work: aggressive behaviour module – yes; multiple camera collaborative module – yes, with uncalibrated cameras; real-time system – yes.

In particular, the work of Chang et al., being most closely related to the proposed work, uses four standard CCTV cameras:

1 three for tracking
2 one for PTZ targeting.

The multi-camera multi-target tracking system presented there is sophisticated, as it tracks individuals cooperatively in a synchronised manner, in real time. Events of interest are then fed to the operator for group analysis and group activity recognition. However, that tracking system requires pre-calibrated scenes, and the cameras are mounted in an open space with high fencing and walls. In our case, such camera set-ups would be difficult to realise due to the enclosed cage environment with low fencing. For the specific case of real-time aggressive behaviour detection, where hardware (camera makers) and software (system makers) should work together, instead of individual systems mounted onto the scene, there is still a considerable amount of work ahead.

References

Chang, M-C., Krahnstoever, N., Lim, S. and Yu, T. (2010) 'Group level activity recognition in crowded environments across multiple cameras', 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, pp.56–63.

Chen, D., Wactlar, H., Chen, M.Y., Gao, C., Bharucha, A. and Hauptmann, A. (2008) 'Recognition of aggressive human behavior using binary local motion descriptors', 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, pp.5238–5241.

Haritaoglu, I., Harwood, D. and Davis, L.S. (2000) 'W4: real-time surveillance of people and their activities', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 8, pp.809–830.
Lau, P.Y., Hon, H.W., Kadim, Z. and Liang, K.M. (2017) 'GuARD: a real-time system for detecting aggressive human behavior in cage environment', in Phon-Amnuaisuk, S., Ang, S.P. and Lee, S.Y. (Eds.): Multi-Disciplinary Trends in Artificial Intelligence, Lecture Notes in Computer Science, Vol. 10607, pp.315–322, Springer.

Lyu, Y. and Yang, Y. (2015) 'Violence detection algorithm based on local spatio-temporal features and optical flow', 2015 International Conference on Industrial Informatics – Computing Technology, Intelligent Technology, Industrial Information Integration, Wuhan, pp.307–311.

Ouanane, A., Serir, A. and Kerouh, F. (2012) 'New geometric descriptor for the recognition of aggressive human behavior', 2012 5th International Congress on Image and Signal Processing, Chongqing, pp.148–153.

Theodoridis, T. and Hu, H. (2013) 'Modeling aggressive behaviors with evolutionary taxonomers', IEEE Transactions on Human-Machine Systems, Vol. 43, No. 3, pp.302–313.

Wakefield, W.W. and Genin, A. (1987) 'The use of a Canadian (perspective) grid in deep-sea photography', Deep Sea Research Part A. Oceanographic Research Papers, Vol. 34, No. 3, pp.469–478.