International Journal of Electrical and Computer Engineering (IJECE)
Vol. 10, No. 5, October 2020, pp. 4687∼4694
ISSN: 2088-8708, DOI: 10.11591/ijece.v10i5.pp4687-4694 Ì 4687
Object tracking using motion flow projection for pan-tilt
configuration
Luma Issa Abdul-Kreem¹, Hussam K. Abdul-Ameer²
¹Department of Control and Systems Engineering, University of Technology, Iraq
²Department of Biomedical Engineering, Al-Khawarizmi College of Engineering, University of Baghdad, Iraq
Article Info
Article history:
Received Jul 12, 2019
Revised Mar 1, 2020
Accepted Mar 8, 2020
Keywords:
Camera model
Object tracking
PT monitoring mechanism
PT system
ABSTRACT
We propose a new object tracking model for a two-degrees-of-freedom mechanism.
Our model uses a reverse projection from the camera plane to the world plane.
Here, the model takes advantage of the optic flow technique by re-projecting the flow
vectors from the image space into the world space. A pan-tilt (PT) mounting system
is used to verify the performance of our model and maintain the tracked object within
a region of interest (ROI). This system contains two servo motors that enable a webcam
to rotate along the PT axes. The PT rotation angles are estimated based on a rigid
transformation of the optic flow vectors, in which an idealized translation matrix
followed by two rotational matrices around the PT axes is used. Our model was tested
and evaluated using different objects undergoing different motions. The results reveal
that our model can keep the target object within a certain region of the camera view.
Copyright © 2020 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Luma Issa Abdul-Kreem,
Department of Control and Systems Engineering,
University of Technology,
Baghdad, Iraq.
Email: 60021@uotechnology.edu.iq
1. INTRODUCTION
Visual object tracking is one of the most interesting fields in computer vision and is involved in different
applications such as surveillance, traffic monitoring, human-computer interaction, and robotics [1–4]. Object
tracking, in essence, seeks to estimate the motion trajectory of an object in a visual scenario. This research seeks
to address the following questions: how can a path for a moving object be estimated in the world plane? And
how can this estimation be analyzed to generate feedback signals that enable a PT system to track an object?
A considerable amount of literature has been published on object tracking (for more discussion, see the review
in [5]). Here we review the studies most related to our work. In [2], the authors used a raw pixel representation
for object tracking, in which the colors and intensity values of an image represent the object regions.
Such a representation is simple and effective; however, raw pixel information alone is not sufficient for robust
tracking [6]. This issue led some researchers [7, 8] to include other visual features, such as shape, edge, and
texture, in the raw pixel representation.
Other studies, such as [9, 10], used a histogram representation to track an object in
the visual scenario. In general, a histogram extracts features based on the object's distribution in the visual field.
Thus, the use of a histogram representation might result in the loss of spatial information [6]. Several studies
(see [11, 12]) used an optical flow representation for object tracking. In principle, optic flow represents the
displacement vector of each pixel in the image. This representation is commonly used to capture the spatio-temporal
motion information of an object in the visual scene [11]. PT cameras are widely used as part of surveillance
systems. A PT camera is mounted and adjusted to a certain viewing configuration. For an effective application
Journal homepage: http://ijece.iaescore.com
of object tracking, accurate calibration is required [13]. A number of authors (see, e.g., [14–17]) estimated
the parameters of a calibration model for PT cameras by computing both the focal length and radial distortion at many
different zoom settings. L. Diogo [18], on the other hand, introduced a method for PTZ camera calibration
over the camera's zoom range by minimizing the re-projection errors of feature points detected in images
captured by the camera at different orientations and zoom levels. Recently, [19] presented a method to
calibrate the rotation angles of a PT camera using only one control point, assuming that the intrinsic
parameters and the position of the camera are known in advance.
In the field of object recognition based on the human visual system, a group of studies (see [20–24])
demonstrates the importance of motion information for object recognition. These studies showed that
visual perception can detect and focus on an object based on motion features. In essence, when an object
moves across the visual field, our eyes follow its movement. The use of PT cameras in autonomous
surveillance systems is still a challenge in spite of their great potential for real applications. There is still a need
for an easy and accurate technique to estimate the rotation angles of a PT system. In this paper, we suggest a
new model for estimating PT rotation angles. We use a reverse projection from the image space to the world
space to calculate the PT angles. In keeping with the importance of motion information, our model
takes advantage of the optic flow technique by re-projecting the flow vectors from the image space into the world
space. Here, the PT rotation angles are estimated based on the pinhole camera model and a rigid transformation
of the flow vectors, in which an idealized translation matrix followed by two rotational matrices around the PT axes
is used. Our model was evaluated using rigid motions of objects of different shapes, such as
circular, square, and silhouette objects. These objects were moved in various directions to show the robustness
of the model in tracking and focusing on the object. The results reveal that when the object moves out of the region
of interest, the reverse projection of the motion vectors from the camera plane to the real-world plane can be
transformed into two values that represent the required rotational angles along the pan and tilt axes to keep the object
within a certain view. The paper is organized as follows. Section 2 demonstrates the methodology
of the tracking model using the PT system; the methodology has three main stages: (i) motion detection,
(ii) object tracking, and (iii) calculation of the PT rotational angles. Section 3 shows the system setup
and results. Section 4 summarizes the discussion and the proposed perspectives of this work.
2. METHODOLOGY
2.1. Re-projection camera model
Our model assumes that the camera rotates around the Yc and Xc axes, as shown in Figure 1. These
axes define the coordinate system of the camera model. In this figure, (pw) represents
the position of a 3D point in the world plane {w} and (pc) represents its projection in the camera plane {c}.
Assume that (pw) has moved to (p̄w) in the {w} plane and that (p̄c) defines its projection in the {c} plane.
Figure 1. Point projection from the real-world plane {w} to the camera plane {c}. (pw) shows a 3D point in the
world plane {w} and (pc) represents the point's projection in the camera plane {c}. (p̄w) represents the next
location of the point in the {w} plane and (p̄c) defines its projection in the {c} plane
We use the pinhole camera model and a rigid transformation of a parallelogram in the {w} plane. The ideal
pinhole camera model transcribes the relationship between a point in the 3D space {w} and its projection in
the 2D space {c} (for the underlying principles of the pinhole camera model, see [25]). To simplify the further
derivation, we define this relationship in homogeneous coordinates:

P_c = M · P_w    (1)

where P_c = [x_c y_c f 1]^T represents the projection of the 3D point P_w = [x_w y_w z_w 1]^T onto the image
plane {c}, and f denotes the focal length. M represents a homography matrix that defines two
types of parameters: intrinsic and extrinsic. Since most cameras have overcome the distortion and
skew problems, our model focuses on the extrinsic parameters. Given that the coordinate system of the camera
reference frame is simply set at the object centroid in the world reference frame, the homography matrix is then

M = T_pw^pc + P_w    (2)
where T_pw^pc = [t_x t_y t_z 1] represents the translation vector from the center of the world space {w}
to the center of the image plane {c}. In order to keep the camera model tracking an object that moves away
from the center of the plane (i.e., the object moves from p_w to p̄_w), a rotation operation R is incorporated.
The rotation matrix R here addresses the rotations around the pan axis, R_pan(α), and the tilt axis, R_tilt(β).
Thus, the homography matrix is defined as a sequence of matrix operations that include the translation T
and the rotation R. The homography matrix is then defined by the following equation:

M = T_pw^pc + P_w R    (3)
R = R_pan(α) R_tilt(β)    (4)

R_pan(α) = [  cos(α)  0  sin(α) ]
           [    0     1    0    ]
           [ −sin(α)  0  cos(α) ]    (5)

R_tilt(β) = [ 1    0        0    ]
            [ 0  cos(β)  −sin(β) ]
            [ 0  sin(β)   cos(β) ]    (6)
As a consequence, the object tracking model can be written as an idealized translation matrix followed
by two rotational matrices around the pan and tilt axes. The camera tracking model is defined by

Ṕ_c = T_pw^pc + R_pan(α) R_tilt(β) P_w    (7)

where Ṕ_c = [x́_c ý_c f 1]^T represents the homogeneous coordinates of the moving object.
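The translate-then-rotate composition of equation (7) is straightforward to sketch numerically. Below is a minimal NumPy illustration of the pan and tilt rotation matrices and the tracking map; the world point and translation vector are made-up demonstration values (the 27 cm depth mirrors the setup in section 3):

```python
import numpy as np

def rot_pan(alpha):
    """Rotation around the pan (Y) axis, equation (5)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def rot_tilt(beta):
    """Rotation around the tilt (X) axis, equation (6)."""
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def track_point(p_w, t, alpha, beta):
    """Equation (7): P'_c = T + R_pan(alpha) @ R_tilt(beta) @ P_w."""
    return t + rot_pan(alpha) @ rot_tilt(beta) @ p_w

# Demo values: a world point on the z = 0 plane and a translation that
# places the camera 27 cm away, as in the experimental setup.
p_w = np.array([1.0, 2.0, 0.0])
t = np.array([0.0, 0.0, 27.0])
p_c = track_point(p_w, t, 0.0, 0.0)   # zero rotation: pure translation
```

With zero pan and tilt both rotations reduce to the identity, so the tracked point is simply the translated world point, matching equation (7).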
2.2. Motion detection
In order to detect the motion of an object in the visual scene, an optical flow algorithm is used.
The optical flow depicts the motion of an object in the visual scene by two-dimensional velocity vectors,
which describe the velocity of pixels across a time sequence of two consecutive images [26]. For the
sake of simplicity, the 3D object in the real-world plane is projected onto a 2D object in the image plane,
where the image can be defined by means of a 2D dynamic brightness function I(x, y, t). Given that the brightness
between two consecutive image frames is constant within a small space-time step (the brightness constancy
constraint) [27], the brightness constancy equation can be defined by
I(x, y, t) = I(x + ∆x, y + ∆y, t + ∆t) (8)
where (∆x, ∆y) represents the motion displacement of an object in the x and y directions, respectively, within
a short time (∆t). Given the assumption of small object motion, the brightness constancy equation (8) can be
expanded using a Taylor series to get

I(x + ∆x, y + ∆y, t + ∆t) = I(x, y, t) + (∂I/∂x)∆x + (∂I/∂y)∆y + (∂I/∂t)∆t + H.O.T.

By neglecting the higher-order terms (H.O.T.) and rearranging the series [26], we get
∇I · v = −I_t    (9)

where ∇I represents the spatial gradient of the brightness intensity, v denotes the optical flow
vector (v_x, v_y), and I_t is the derivative of the image intensity with respect to time (t). Since equation (9)
has two unknown parameters (the aperture problem), another set of equations is needed. Several studies have been
conducted to solve this equation (see, for example, [27–29]). In this article, we solved equation (9) and
estimated the optic flow following the work in [30].
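The paper estimates flow with the variational method of [30]. As a self-contained illustration of the brightness constancy equation (9) and the aperture problem, the sketch below estimates a single global flow vector from two synthetic frames by stacking equation (9) over all pixels and solving in the least-squares sense (a Lucas-Kanade-style solve used purely as an illustrative stand-in, not the method of [30]; the Gaussian test pattern and the one-pixel shift are invented for the demo):

```python
import numpy as np

# Synthetic frame: a smooth Gaussian blob, then shifted by one pixel in x.
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / (2 * 6.0 ** 2))
frame2 = np.roll(frame1, 1, axis=1)   # true flow: (vx, vy) = (1, 0)

# Spatial and temporal brightness derivatives for equation (9).
Iy, Ix = np.gradient(frame1)          # gradient along axis 0 (y), axis 1 (x)
It = frame2 - frame1

# Equation (9), grad(I) . v = -It, is one equation in two unknowns per
# pixel (the aperture problem); stacking it over all pixels and solving
# by least squares yields a single global flow estimate.
A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
b = -It.ravel()
(vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
# vx is close to 1 and vy close to 0, recovering the imposed shift.
```

Dense methods such as [27–30] solve the same under-determined equation by adding smoothness or warping constraints instead of the single global vector used here.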
2.3. Calculating pan and tilt rotational angles
The rotation angle increments, δα and δβ, are calculated relative to the current camera coordinate
frame. These increments keep the object within a region of interest (ROI) in the image frame {c}. In order
to account for the effect of different units (points in the image plane are expressed in pixels, while points in the
world space are expressed in physical units, i.e., centimeters), we introduce scale parameters k and l, with units
of cm/pixel, that correspond to the change of units along the x and y axes, respectively. Thus, we
adjust equation (7) by adding the scale parameters {k, l} ∈ s, as defined by

s Ṕ_c = T_pw^pc + R_pan(α) R_tilt(β) P_w    (10)

With knowledge of the object movement in the image plane, it is possible to track the object in
the world space by back-projecting the 2D point in the image plane to a 3D point in the world plane. Here, we
calculate the increments in two parts, the pan angle increment δα ∈ [0°, 180°] and then the tilt angle increment
δβ ∈ [−90°, 90°], where both are derived from equation (10). For simplicity, we utilize a camera projection with
the planar world z = 0 rather than the general 3D case. The rotational angles are defined by

α = cos⁻¹((k x́_c − t_x) / x_w)    (11)

β = cos⁻¹((l ý_c − t_y) / y_w)    (12)
3. SYSTEM SET UP AND RESULTS
In the following, we demonstrate the capability of the suggested model for object tracking by using a
set of test motions. The camera was mounted on a pan-tilt system in which the camera surface is aligned parallel
and centered to the object. The block diagram of the suggested model is shown in Figure 2. The system
was placed 27 cm away from the moving object. The object motions were recorded using a webcam
with a resolution of (480 × 640), for which the real-world view {w} is about (16.5 × 12) cm. Due to the low
resolution of the camera, it is challenging to estimate object motion in the visual scene. As mentioned in
section 2.3, the rotational range of the motor is [0°, 180°] along the pan axis and [−90°, 90°] along the tilt axis.
Figure 2. The block diagram of the suggested model: a personal computer processes the webcam frames, and a
microcontroller (Arduino MEGA) controls two servo motors (S3300) that drive the camera motion along the
pan-tilt axes
The tracking model keeps the object within the camera fovea, which represents the region of interest
(ROI). Here, we select a window of (252 × 233) pixels placed at the center of the camera view. In order to
keep the target object within the ROI, the camera is moved along the pan-tilt axes via the PT system. In our model,
the pan-tilt angles are calculated based on equations (11) and (12), respectively. Since x_w = t_z sin(α) and
x́_c = u, equation (11) can be written as α = tan⁻¹(ku/t_z). Equation (12), on the other hand, can be
written as β = tan⁻¹(lv/t_z), due to the fact that y_w = t_z sin(β) and ý_c = v. Based on the camera resolution and
the view dimensions along the x and y directions, the values of the scale parameters k and l are 16.5 (cm)/480 (pixel)
and 12 (cm)/480 (pixel), respectively. In order to calculate the optic flow vectors (u, v), we utilized the algorithm
introduced in [30].
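With the setup values quoted above (t_z = 27 cm, k = 16.5/480 cm/pixel, l = 12/480 cm/pixel), the pan-tilt increments follow directly from a flow displacement (u, v). A minimal sketch of the simplified arctangent forms; the 100-pixel displacement is an invented example:

```python
import math

TZ = 27.0          # camera-to-object distance, cm
K = 16.5 / 480.0   # scale along x, cm per pixel
L = 12.0 / 480.0   # scale along y, cm per pixel

def pan_tilt_angles(u, v):
    """Pan/tilt increments in degrees from a flow displacement (u, v)
    in pixels, using alpha = atan(k*u/tz) and beta = atan(l*v/tz)."""
    alpha = math.degrees(math.atan2(K * u, TZ))
    beta = math.degrees(math.atan2(L * v, TZ))
    return alpha, beta

# A 100-pixel horizontal displacement maps to a pan of about 7.3 degrees.
alpha, beta = pan_tilt_angles(100.0, 0.0)
```

These two angles would then be sent as position commands to the pan and tilt servos.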
Our model was tested and evaluated using rigid motions in different directions. Such motions were
generated from objects of different shapes moving in different directions. With no loss of generality,
we used simplified objects to keep the computational complexity and the preprocessing as simple as possible.
Figure 3 shows moving objects (circle, square, tiger, bird) with four directions: down, up, right, and diagonal
(up-left), respectively. As a consequence of these motions, the objects move outside the ROI, as shown in the
second column of Figure 3. The third column of the figure shows the camera view {c} after the camera moves
along the pan and tilt axes. As an extended test, our model was probed with the silhouette motion of a human
moving to the left; see Figure 4. The results reveal that when the object moves out of the ROI, the
reverse projection of the motion vectors from the camera plane to the real-world plane can be transformed into
two values that represent the required rotational angles along the pan and tilt axes to keep the object within the ROI.
Figure 3. The first column shows the target object, with the camera fovea surrounded by a dashed line.
The second column demonstrates the object movement in a particular direction, while the third column
shows the optic flow of the object motion and the camera view after the pan-tilt rotation
Figure 4. The first column shows the object moving out of the ROI as the man moves to the left.
The second column shows the camera view after the pan-tilt rotation
4. DISCUSSION
In this paper, we introduced a model for object tracking using motion vectors and a PT system.
The model utilizes the pinhole camera model and a rigid transformation from the camera plane to the world
plane. Our tracking model keeps the target object within the fovea of the camera view, i.e., the ROI. Here, we
used a pan-tilt system that controls the rotation of the camera view; the angles of the pan-tilt axes are derived
from the reverse projection of the optic flow.
In order to account for the different units (pixels in the camera plane and centimeters in the real-world plane),
we incorporate scale parameters k and l in the mathematical expressions of α and β along the pan and tilt axes,
respectively. The optical flow technique is found to be robust and efficient for object tracking due to its high
detection accuracy (see a recent review in [31]). This has prompted several researchers (see, e.g., [12, 32]) to
use optic flow for object tracking. Our model differs from other approaches in its use of a reverse projection
from the camera plane {c} to the world plane {w} for estimating the pan-tilt rotational angles. These estimated
angles are fed to two motors that control the camera movement along the PT axes. In our model, the camera keeps the
target object within the fovea of the camera view, which we call the region of interest (ROI). Here, we utilized
the rigid transformation of the flow vectors, in which an idealized translation matrix followed by two rotational
matrices around the PT axes is used.
To evaluate the performance of our model, we performed a set of rigid motions using different
objects. These motions were generated from objects of different shapes, in which the circular and square objects
were moved in the upward and downward directions, respectively. In addition, our model was
probed with silhouette objects (a bird, a tiger, and a human) that move in the diagonal, right, and left directions,
respectively. The results reveal that our model can keep tracking the object within a certain view.
Our model can be extended and improved by adding preprocessing techniques that enable it
to deal with more sophisticated biological motions. Biological motions are complex and articulated
motions that consist of different speeds and directions. The model could be boosted with an
algorithm that segregates the body motion from the limb motions (see [33] for a discussion of limb
and body motions). As a consequence, the tracking model could rely on the body motion without taking
into account the movement of the limbs.
5. CONCLUSION
We introduce a new model of a PT configuration to control the movement of a surveillance camera.
The PT angles of the camera movement are estimated using the pinhole camera model and the projection of the
motion vectors, in which these vectors are transformed from the image space into the world space. The motion
vectors are estimated using the optic flow technique and are transformed from the camera plane into the
real-world plane using an idealized translation matrix followed by two rotational matrices around the
PT axes. Our model was tested and evaluated using objects of different shapes (circular, square, and
silhouette objects) moved in different directions. This work has shown that when an object moves out of the
region of interest, the projection of the motion vectors can be transcribed into two values. These values are
transformed into the PT angles that control the movement of the surveillance camera. Building upon this,
the transformation of the motion vectors enables the surveillance camera to move relative to the movement
of the target object. As a consequence, the camera preserves the target object within the fovea of the
camera view.
REFERENCES
[1] V. Ramalakshmi, Kanthimathi, and M. G. Alex, ”A survey on visual object tracking: Datasets, methods
and metrics,” International Research Journal of Engineering and Technology (IRJET), vol. 11, no. 3,
pp. 2395–0056, 2016.
[2] I. Haritaoglu and M. Flickner, ”Detection and tracking of shopping groups in stores,” IEEE International
Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 431–438, 2001.
[3] S. A. Mohand, N. Bouguila, and D. Ziou, ”A robust video foreground segmentation by using
generalized gaussian mixture modeling,” Fourth Canadian Conference on Computer and Robot Vision
(CRV ’07), pp. 503-509, 2007.
[4] A. Yilmaz, O. Javed, and M. Shah, ”Object tracking: A survey,” ACM Computing Surveys, vol. 38,
no. 4, pp. 1–45, 2006.
[5] S. R. Balaji and S. Karthikeyan, ”A survey on moving object tracking using image processing,”
11th International Conference on Intelligent Systems and Control (ISCO), Coimbatore, Tamilnadu,
India, pp. 469-474, 2017.
[6] L. Xi, H. Weiming, S. Chunhua, Z. Zhongfei, D. Anthony, and V. Anton, ”A survey of appearance mod-
els in visual object tracking,” ACM Transactions on Intelligent Systems and Technology, vol. 4, no. 4,
pp. 1-48, 2013.
[7] H. Wang, D. Suter, K. Schindler, and C. Shen, ”Adaptive object tracking based on an effective appearance
filter,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, no. 9, pp. 1661–1667, 2007.
[8] M. S. Allili and D. Ziou, ”Object of interest segmentation and tracking by using feature selection
and active contours,” IEEE International Conference on Computer Vision and Pattern Recognition,
pp. 1–8, 2007.
[9] Q. Zhao, Z. Yang, and H. Tao, ”Differential earth mover’s distance with its applications to visual
tracking,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 274–287, 2010.
[10] D. Comaniciu, V. Ramesh, and P. Meer, ”Kernel-based object tracking,” IEEE Trans. on Pattern Analysis
and Machine Intelligence, vol. 25, no. 5, pp. 564–577, 2003.
[11] Y. Ma and C. Cluster, ”An object tracking algorithm based on optical flow and temporal–spatial context,”
Springer Science+Business Media, LLC, part of Springer Nature, pp. 1-9, 2017.
[12] K. Kale, S. Pawar, S. Pawar, and P. Dhulekar, ”Moving object tracking using optical flow and motion
vector estimation,” 2015 4th International Conference on Reliability, Infocom Technologies and
Optimization (ICRITO) (Trends and Future Directions), pp. 1–8, 2015.
[13] N. S. Sinha and M. Pollefeys, ”Pan–tilt–zoom camera calibration and high-resolution mosaic generation,”
Computer Vision and Image Understanding, vol. 103, no. 3, pp. 170–183, 2006.
[14] R. Collins and Y. Tsin, ”Calibration of an outdoor active camera system,” In Proceedings of IEEE
Computer Society, CVPR99, vol. 1, pp. 528-534, 1999.
[15] R. G. Willson, ”Modeling and calibration of automated zoom lenses,” Carnegie Mellon University,
PhD thesis, 1994.
[16] K. Pankaj and D. Anthony, ”Real time target tracking with pan tilt zoom camera,” IEEE International
Conference on Digital Image Computing: Techniques and Applications, Melbourne, VIC, Australia,
pp. 492-497, 2009.
[17] V. Diogo, C. N. Jacinto, and G. Jose, ”Assessing control modalities designed for pan-tilt surveillance
cameras,” 15th portuguese Conference on pattern recognition, 2009.
[18] L. Diogo, ”Target tracking with pan-tilt-zoom cameras,” M.Sc. Thesis, Instituto Superior Technico, 2011.
[19] L. Yunting, Z. Jun, W. Hu, and J. Tian, ”Method for pan-tilt camera calibration using single control point,”
Journal of the Optical Society of America, vol. 32, no. 1, pp. 156–163, 2015.
[20] A. Casile and M. A. Giese, ”Critical features for the recognition of biological motion,” Journal of Vision,
vol. 5, no. 4, pp. 6-6, 2005.
[21] L. I. Abdul-Kreem and H. Neumann, ”Estimating visual motion using an event-based artificial retina,”
Series of CCIS Communications in Computer and Information Science. Computer Vision, Imag-
ing and Computer Graphics Theory and Applications. Springer International Publishing Switzerland,
pp. 396-415, 2016.
[22] L. I. Abdul Kreem and H. Neumann, ”Neural mechanisms of cortical motion computation based on a
neuromorphics sensory system,” PLOS ONE, vol. 10, no. 11, pp. 1-33, 2015.
[23] L. I. Abdul-Kreem and H. Neumann, ”Bioinspired model for motion estimation using address event
Object tracking using motion flow projection for... (Luma Issa Abdul-Kreem)
4694 Ì ISSN: 2088-8708
representation,” 10th International Conference on computer vision theory and application, VISIGRAPP,
pp. 335-346, 2015.
[24] L. I. Abdul-Kreem, ”Computational architecture of a visual model for biological motions segregation,”
Network: Computation in Neural Systems, vol. 30, no. 1-4, pp. 58-78, 2019.
[25] P. Sturm, S. Ramalingam, J. Tardif, S. Gasparini, and J. Barreto, ”Camera models and fundamen-
tal concepts used in geometric computer vision,” Found Trends Comput Graph Vis, vol. 6, no. 1-2,
pp.1-183, 2011.
[26] S. Aslani and H. Mahdavi-Nasab, ”Optical flow based moving object detection and tracking for
traffic surveillance,” International Journal of Electrical and Computer Engineering, vol. 7, no. 9,
pp.1252-1256, 2013.
[27] B. Horn and B. Schunck, ”Determining optical flow,” Artif. Intell., vol. 17, no. 1-3, pp. 185-203, 1981.
[28] J. Barron, D. Fleet, and S. S. Beauchemin, ”Performance of optical flow techniques,” Int. J. Comput. Vision,
vol. 12, no. 1, pp. 43-77, 1994.
[29] A. Bruhn, J. Weickert, and C. Schnörr, ”Lucas/Kanade meets Horn/Schunck: combining local and global
optic flow methods,” Int. J. Comput. Vision, vol. 61, no. 3, pp. 211-231, 2005.
[30] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, ”High accuracy optical flow estimation based on a
theory for warping,” Proc. European Conf. Computer Vision, pp. 25-36, 2004.
[31] A. Agarwal, S. Gupta, D. Kumar, and D. K. Singh, ”Review of optical flow technique for moving object
detection,” Conference: 2016 2nd International Conference on Contemporary Computing and Informatics
(IC3I), pp. 409-413, 2016.
[32] D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan, ”Learning features by watching objects
move,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2701-2710, 2017.
[33] L. I. Abdul-Kreem, ”Human action recognition based on motion velocity,” Proc. SCEE 2018, 3rd IEEE
Scientific Conference of Electrical Engineering, Baghdad, Iraq, pp. 45-50, 2018.
Int J Elec & Comp Eng, Vol. 10, No. 5, October 2020 : 4687 – 4694

More Related Content

What's hot

MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...
MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...
MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...cscpconf
 
Multiple region of interest tracking of non rigid objects using demon's algor...
Multiple region of interest tracking of non rigid objects using demon's algor...Multiple region of interest tracking of non rigid objects using demon's algor...
Multiple region of interest tracking of non rigid objects using demon's algor...csandit
 
IRJET- Design the Surveillance Algorithm and Motion Detection of Objects for ...
IRJET- Design the Surveillance Algorithm and Motion Detection of Objects for ...IRJET- Design the Surveillance Algorithm and Motion Detection of Objects for ...
IRJET- Design the Surveillance Algorithm and Motion Detection of Objects for ...IRJET Journal
 
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...Kitsukawa Yuki
 
Human action recognition using local space time features and adaboost svm
Human action recognition using local space time features and adaboost svmHuman action recognition using local space time features and adaboost svm
Human action recognition using local space time features and adaboost svmeSAT Publishing House
 
Enhanced target tracking based on mean shift algorithm for satellite imagery
Enhanced target tracking based on mean shift algorithm for satellite imageryEnhanced target tracking based on mean shift algorithm for satellite imagery
Enhanced target tracking based on mean shift algorithm for satellite imageryeSAT Journals
 
Human action recognition with kinect using a joint motion descriptor
Human action recognition with kinect using a joint motion descriptorHuman action recognition with kinect using a joint motion descriptor
Human action recognition with kinect using a joint motion descriptorSoma Boubou
 
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration Technique
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration TechniqueEnhanced Tracking Aerial Image by Applying Fusion & Image Registration Technique
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration TechniqueIRJET Journal
 
The flow of baseline estimation using a single omnidirectional camera
The flow of baseline estimation using a single omnidirectional cameraThe flow of baseline estimation using a single omnidirectional camera
The flow of baseline estimation using a single omnidirectional cameraTELKOMNIKA JOURNAL
 
Enhanced target tracking based on mean shift
Enhanced target tracking based on mean shiftEnhanced target tracking based on mean shift
Enhanced target tracking based on mean shifteSAT Publishing House
 
Scanning 3 d full human bodies using kinects
Scanning 3 d full human bodies using kinectsScanning 3 d full human bodies using kinects
Scanning 3 d full human bodies using kinectsFensa Saj
 

What's hot (18)

Ijcatr02011007

Object tracking using motion flow projection for pan-tilt configuration

1. INTRODUCTION

Visual object tracking is one of the interesting fields of computer vision and is involved in applications such as surveillance, traffic monitoring, human-computer interaction, and robotics [1–4]. Object tracking, in essence, seeks to estimate the motion trajectory of an object in a visual scenario. This research addresses two questions: how can the path of a moving object be estimated in the world plane, and how can this estimate be turned into feedback signals that enable a PT system to track the object?

A considerable amount of literature has been published on object tracking (for more discussion, see the review in [5]); here we review the studies related to our work. In [2], the authors used raw pixel representation for object tracking, in which the colors and intensity values of an image represent the object regions. Such a representation is simple and effective; however, raw pixel information alone is not sufficient for robust tracking [6]. This issue led some researchers [7, 8] to combine other visual features, such as shape, edge and texture, with the raw pixel representation. Other studies, such as [9, 10], used histogram representation to track an object in the visual scenario. In general, a histogram extracts features based on the object's distribution in the visual field, so histogram representation might lose spatial information [6]. Several studies (see [11, 12]) used optical flow representation for object tracking. In principle, optic flow represents the displacement vector of each pixel in the image. This representation is commonly used to capture the spatio-temporal motion information of an object in the visual scene [11].

PT cameras are widely used as part of surveillance systems. A PT camera is mounted and adjusted to certain viewing configurations. For an effective object tracking application, accurate calibration is required [13]. A number of authors (see, e.g., [14–17]) estimated the parameters of a calibration model for PT cameras by computing both the focal length and the radial distortion at many different zoom settings. L. Diogo in [18], on the other hand, introduced a method for PTZ camera calibration over the camera's zoom range by minimizing the re-projection errors of feature points detected in images captured by the camera at different orientations and zoom levels. Recently, [19] presented a method to calibrate the rotation angles of a PT camera using only one control point, assuming that the intrinsic parameters and the position of the camera are known in advance.

In the field of object recognition based on the human visual system, a group of studies (see [20–24]) demonstrates the importance of motion information for object recognition. These studies showed that visual perception can detect and focus on an object based on motion features; in essence, when an object is moving across the visual field, our eyes follow its movement. The use of PT cameras in autonomous surveillance systems is still a challenge in spite of their great potential for real applications, and there is still a need for an easy and accurate technique to estimate the rotation angles of a PT system. In this paper we suggest a new model for estimating the PT rotation angles. We use a reverse projection from the image space to the world space in order to calculate the PT angles. In keeping with the importance of motion information, our model takes advantage of the optic flow technique by re-projecting the flow vectors from the image space into the world space. Here, the PT rotation angles are estimated based on the Pinhole camera model and the rigid transformation of the flow vectors, in which an idealized translation matrix followed by two rotation matrices around the PT axes are used.

Journal homepage: http://ijece.iaescore.com
Our model was evaluated using rigid motions of objects with different shapes, such as circular, square and silhouette objects. These objects were moved in various directions to show the robustness of the model in tracking and focusing on the object. The results reveal that when the object moves out of the region of interest, the reverse projection of the motion vectors from the camera plane to the real-world plane can be transformed into two values that represent the required rotation angles along the pan and tilt axes to keep the object within a certain view.

The paper is organized as follows. Section 2 demonstrates the methodology of the tracking model using the PT system; the methodology has three main stages: (i) motion detection, (ii) object tracking, and (iii) calculation of the PT rotation angles. Section 3 shows the system setup and results. Section 4 summarizes the discussion and the proposed perspectives of this work.

2. METHODOLOGY

2.1. Re-projection camera model

Our model proposes that the camera is rotated around the Yc and Xc axes, as shown in Figure 1, which depicts the coordinate system of the camera model. In the figure, pw represents the position of a 3D point in the world plane {w} and pc represents its projection in the camera plane {c}. Assume that pw has moved to p̄w in the {w} plane, and let p̄c be its projection in the {c} plane.

Figure 1. Point projection from the real-world plane {w} to the camera plane {c}. pw shows a 3D point in the world plane {w} and pc represents the point's projection in the camera plane {c}; p̄w represents the next location of the point in the {w} plane and p̄c its projection in the {c} plane.
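To make the projection of Figure 1 concrete, here is a minimal numeric sketch of an ideal pinhole projection (our own illustrative helper, not code from the paper; the focal length and point coordinates are assumed values):

```python
def project(pw, f):
    """Project a 3D world point pw = (xw, yw, zw) onto the image plane of an
    ideal pinhole camera with focal length f: (xc, yc) = (f*xw/zw, f*yw/zw)."""
    xw, yw, zw = pw
    return (f * xw / zw, f * yw / zw)

# A point 4 units in front of the camera, viewed with focal length 2.
xc, yc = project((1.0, 2.0, 4.0), f=2.0)   # -> (0.5, 1.0)
```

Moving the point in the world plane moves its projection on the image plane, which is the displacement the optic flow stage measures.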
We use the Pinhole camera model and a rigid transformation of a parallelogram in the {w} plane. The ideal Pinhole camera model transcribes the relationship between a point in the 3D space {w} and its projection in the 2D space {c} (for the underlying principle of the Pinhole camera model, see [25]). To simplify further derivation, we define this relationship using homogeneous coordinates:

    Pc = M Pw    (1)

where Pc = [xc yc f 1]^T represents the projection of the 3D point Pw = [xw yw zw 1]^T on the image plane {c}, in which f denotes the focal length. M is a homography matrix that encodes two types of parameters: intrinsic and extrinsic. Since most cameras have overcome the distortion and skew problems, our model focuses on the extrinsic parameters. Given that the coordinate system of the camera reference frame is simply set at the object centroid in the world reference frame, the homography matrix is then

    M = T + Pw    (2)

where T = [tx ty tz 1] represents the translation vector from the center of the world space {w} to the center of the image plane {c}. In order to keep the camera model tracking an object that moves away from the center of the plane (i.e., the object moves from pw to p̄w), a rotation operation R is incorporated. The rotation matrix R here addresses the rotation around the pan axis, R_pan(α), and the tilt axis, R_tilt(β). Thus, the homography matrix is defined as a sequence of matrix operations that includes translation T and rotation R:

    M = T + Pw R    (3)

    R = R_pan(α) R_tilt(β)    (4)

    R_pan(α) = |  cos(α)  0  sin(α) |
               |    0     1    0    |
               | −sin(α)  0  cos(α) |    (5)

    R_tilt(β) = | 1    0        0     |
                | 0  cos(β)  −sin(β)  |
                | 0  sin(β)   cos(β)  |    (6)

As a consequence, the object tracking model can be written as an idealized translation matrix followed by two rotation matrices around the pan and tilt axes. The camera tracking model is defined by

    P̄c = T + R_pan(α) R_tilt(β) Pw    (7)

where P̄c = (x̄c ȳc f 1) represents the homogeneous coordinates of the moving object.

2.2. Motion detection

In order to detect the motion of an object in the visual scene, an optical flow algorithm is used. The optical flow depicts the motion of an object in the visual scene by two-dimensional velocity vectors, which describe the velocity of pixels across two consecutive images in a time sequence [26]. For the sake of simplicity, the 3D object in the real-world plane is transferred into a 2D object in the image plane, in which the image can be defined by means of a 2D dynamic brightness function I(x, y, t). Given that the brightness between two consecutive image frames is constant within a small space-time step (the brightness constancy constraint) [27], the brightness constancy equation can be defined by

    I(x, y, t) = I(x + Δx, y + Δy, t + Δt)    (8)

where (Δx, Δy) represents the motion displacement of an object in the x and y directions, respectively, within a short time Δt. Given the assumption of small object motion, the brightness constancy equation (8) can be expanded using a Taylor series to get I(x+Δx, y+Δy, t+Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt + H.O.T. By neglecting the higher-order terms (H.O.T.) and rearranging the series [26], we get

    ∇I · v = −It    (9)

where ∇I represents the spatial gradient of the brightness intensity, v denotes the optical flow vector (vx, vy), and It is the derivative of the image intensity along time t. Since equation (9) has two unknown parameters (the aperture problem), another set of equations is needed. Several studies have been conducted to solve this equation (see for example [27–29]). In this article, we solved equation (9) and estimated the optic flow following the work in [30].

2.3. Calculating the pan and tilt rotation angles

The rotation angle increments, δα and δβ, are calculated relative to the current camera coordinate frame. These increments keep the object within a region of interest (ROI) in the image frame {c}. In order to account for the effect of different units (points in the image plane are expressed in pixels, while points in world space are represented in physical measurements, centimeters), we introduce scale parameters k and l with units of cm/pixel, where these parameters correspond to the change of units along the x and y axes, respectively. Thus, we adjust equation (7) by adding the scale parameters {k, l} ∈ s:

    s P̄c = T + R_pan(α) R_tilt(β) Pw    (10)

With the knowledge of the object movement in the image plane, it is possible to track the object in the world space by back-projecting the 2D point in the image plane to a 3D point in the world plane. Here, we calculate the increments in two parts: the pan angle increment δα ∈ [0°, 180°] and the tilt angle increment δβ ∈ [−90°, 90°], both derived from equation (10). For simplicity, we utilize camera projection with the world plane z = 0 rather than the general 3D case. The rotation angles are defined by

    α = cos⁻¹((k x̄c − tx) / xw)    (11)

    β = cos⁻¹((l ȳc − ty) / yw)    (12)

3. SYSTEM SETUP AND RESULTS

In the following, we demonstrate the capability of the suggested model for object tracking by using a set of test motions.
The camera was mounted on a pan-tilt system in which the camera surface is aligned parallel and centered to the object. The block diagram of the suggested model is shown in Figure 2. The system was placed 27 cm away from the moving object. The object motions were recorded using a webcam with a resolution of 480 × 640, in which the real-world view {w} is about 16.5 × 12 cm. Due to the low resolution of the camera, it is challenging to estimate object motion in the visual scene. As mentioned in Section 2.3, the rotation range of the motor is [0°, 180°] along the pan axis and [−90°, 90°] along the tilt axis.

Figure 2. The block diagram of the suggested model. The microcontroller (Arduino MEGA) controls two servo motors (S3300), which drive the camera motion along the pan-tilt axes.
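As an aside, the brightness-constancy relation of Section 2.2 can be illustrated with a one-dimensional toy example (our own sketch, not the solver of [30] used in the paper; in a single spatial dimension the aperture problem disappears and ∇I · v = −It reduces to v = −It / Ix):

```python
def estimate_shift(frame0, frame1, i):
    """Recover the displacement at pixel i from the spatial and temporal
    derivatives of a 1-D intensity profile: v = -I_t / I_x."""
    ix = (frame0[i + 1] - frame0[i - 1]) / 2.0   # central spatial derivative
    it = frame1[i] - frame0[i]                   # temporal derivative
    return -it / ix

ramp0 = [float(x) for x in range(10)]   # I(x) = x
ramp1 = [x - 1.0 for x in ramp0]        # the same profile shifted right by 1 pixel
v = estimate_shift(ramp0, ramp1, 5)     # recovers a shift of 1.0 pixel
```

For a linear ramp the estimate is exact; real images need the regularized, multi-scale treatment of [30].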
The tracking model keeps the object within the camera fovea, which represents the region of interest (ROI). Here, we select a window of size (252, 233) placed in the center of the camera view. In order to keep the target object within the ROI, the camera is moved along the pan-tilt axes via the PT system. In our model, the pan-tilt angles are calculated based on equations (11) and (12), respectively. Since xw = tz sin(α) and x̄c = u, equation (11) can be written as α = tan⁻¹(k u / tz). Equation (12), on the other hand, can be written as β = tan⁻¹(l v / tz), due to the fact that yw = tz sin(β) and ȳc = v. Based on the camera resolution and the view dimensions along the x and y directions, the values of the scale parameters k and l are 16.5 (cm)/480 (pixel) and 12 (cm)/480 (pixel), respectively. In order to calculate the optic flow vectors (u, v), we utilized the algorithm introduced by [30]. Our model was tested and evaluated using rigid motions in different directions, generated from objects of different shapes moving in different directions. With no loss of generality, we used simplified objects to keep the computational complexity and the preprocessing as simple as possible. Figure 3 shows moving objects (circle, square, tiger, bird) with four directions: down, up, right and diagonal (up-left), respectively. As a consequence of these motions, the objects move outside the ROI, as shown in the second column of Figure 3. The third column of the figure shows the camera view {c} after the camera moves along the pan and tilt axes.
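The simplified angle expressions above can be sketched numerically (a hedged illustration using the k, l values and the 27 cm working distance as stated in the paper; the flow displacement (u, v) below is an assumed example value, not measured data):

```python
import math

K = 16.5 / 480.0   # cm per pixel along x (value as stated in the paper)
L = 12.0 / 480.0   # cm per pixel along y
T_Z = 27.0         # camera-to-object distance in cm

def pan_tilt_increment(u, v):
    """Return (alpha, beta) in degrees for a flow displacement (u, v) in pixels,
    using alpha = atan(K*u / T_Z) and beta = atan(L*v / T_Z)."""
    alpha = math.degrees(math.atan2(K * u, T_Z))
    beta = math.degrees(math.atan2(L * v, T_Z))
    return alpha, beta

# Object drifting 100 px to the right and 60 px up in the image plane.
alpha, beta = pan_tilt_increment(100.0, -60.0)
```

A displacement of 100 pixels at this working distance corresponds to a pan increment of roughly 7°, small enough for the servos to follow between frames.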
As an extended result, our model was tested with the silhouette motion of a human moving to the left; see Figure 4. The results reveal that when the object moves out of the ROI, the reverse projection of the motion vectors from the camera plane to the real-world plane can be transformed into two values that represent the required rotation angles along the pan and tilt axes to keep the object within the ROI.

Figure 3. The first column shows the target object, in which the camera fovea is surrounded by a dashed line. The second column demonstrates the object movement in a particular direction, while the third column shows the optic flow of the object motion and the camera view after the pan-tilt rotation.
Figure 4. The first column shows the object moving out of the ROI as the man moves to the left. The second column shows the camera view after the pan-tilt rotation.

4. DISCUSSION

In this paper, we introduce a model for object tracking using motion vectors and a PT system. The model utilizes the Pinhole camera algorithm and a rigid transformation from the camera plane to the world plane. Our tracking model keeps the target object within the fovea of the camera view, i.e., the ROI. Here, we used a pan-tilt system that controls the rotation of the camera view. The angles of the pan-tilt axes are derived from the reverse projection of the optic flow. In order to account for the different units (pixels in the camera plane and centimeters in the real-world plane), we incorporate scale parameters k and l in the mathematical expressions of α and β along the pan and tilt axes, respectively. The optical flow technique is found to be robust and efficient for object tracking due to its high detection accuracy (see a recent review in [31]). This prompted several researchers (see, e.g., [12, 32]) to use optic flow for object tracking. Our model differs from other approaches in its use of reverse projection from the camera plane {c} to the world plane {w} for estimating the pan-tilt rotation angles. These estimated angles are fed to two motors that move the camera along the PT axes. In our model, the camera keeps the target object within the fovea of the camera view, which we call the region of interest (ROI). Here, we utilized the rigid transformation of the flow vectors, in which an idealized translation matrix followed by two rotation matrices around the PT axes are used. To evaluate the performance of our model, we performed a set of rigid motions using different objects. These motions are generated from objects of different shapes, in which circular and square objects are used; such objects are moved in the upward and downward directions, respectively.
In addition, our model was probed with silhouette objects (a bird, a tiger and a human) that move in the diagonal, right and left directions, respectively. The results reveal that our model can keep tracking the object within a certain view. Our model can be extended and improved by adding preprocessing techniques that enable it to deal with more sophisticated biological motions. Biological motions are complex and articulated, consisting of different speeds and directions. The model could be boosted with an algorithm that segregates the body motion from the limb motion (see [33] for a discussion of limb and body motions). As a consequence, the tracking model could depend on the body motion without taking into account the movement of the limbs.

5. CONCLUSION

We introduce a new model of PT configuration to control the movement of a surveillance camera. The PT angles of the camera movement are estimated using the Pinhole camera model and the projection of the motion vectors, in which these vectors are transformed from the image space into the world space. The motion vectors are estimated using the optic flow technique, and are transformed from the camera plane into the real-world plane using an idealized translation matrix followed by two rotation matrices around the PT axes. Our model was tested and evaluated using objects of different shapes (such as circular, square and silhouette objects) moving in different directions. This work has shown that when an object moves out of the region of interest, the projection of the motion vectors can be transcribed into two values, which are transformed into the PT angles that control the movement of the surveillance camera. Building upon this, the transformation of the motion vectors enables the surveillance camera to move relative to the movement of the target object.
As a consequence, the camera preserves the target object within the fovea of the camera view.
REFERENCES

[1] V. Ramalakshmi, Kanthimathi, and M. G. Alex, "A survey on visual object tracking: Datasets, methods and metrics," International Research Journal of Engineering and Technology (IRJET), vol. 11, no. 3, pp. 2395–0056, 2016.
[2] I. Haritaoglu and M. Flickner, "Detection and tracking of shopping groups in stores," IEEE International Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 431–438, 2001.
[3] S. A. Mohand, N. Bouguila, and D. Ziou, "A robust video foreground segmentation by using generalized gaussian mixture modeling," Fourth Canadian Conference on Computer and Robot Vision (CRV '07), pp. 503–509, 2007.
[4] A. Yilmaz, O. Javed, and M. Shah, "Object tracking: A survey," ACM Computing Surveys, vol. 38, no. 4, pp. 1–45, 2006.
[5] S. R. Balaji and S. Karthikeyan, "A survey on moving object tracking using image processing," 11th International Conference on Intelligent Systems and Control (ISCO), Coimbatore, Tamilnadu, India, pp. 469–474, 2017.
[6] L. Xi, H. Weiming, S. Chunhua, Z. Zhongfei, D. Anthony, and V. Anton, "A survey of appearance models in visual object tracking," ACM Transactions on Intelligent Systems and Technology, vol. 4, no. 4, pp. 1–48, 2013.
[7] H. Wang, D. Suter, K. Schindler, and C. Shen, "Adaptive object tracking based on an effective appearance filter," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, no. 9, pp. 1661–1667, 2007.
[8] M. S. Allili and D. Ziou, "Object of interest segmentation and tracking by using feature selection and active contours," IEEE International Conference on Computer Vision and Pattern Recognition, pp. 1–8, 2007.
[9] Q. Zhao, Z. Yang, and H. Tao, "Differential earth mover's distance with its applications to visual tracking," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 274–287, 2010.
[10] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564–577, 2003.
[11] Y. Ma and C. Cluster, "An object tracking algorithm based on optical flow and temporal–spatial context," Springer Science+Business Media, LLC, part of Springer Nature, pp. 1–9, 2017.
[12] K. Kale, S. Pawar, S. Pawar, and P. Dhulekar, "Moving object tracking using optical flow and motion vector estimation," 2015 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), pp. 1–8, 2015.
[13] N. S. Sinha and M. Pollefeys, "Pan–tilt–zoom camera calibration and high-resolution mosaic generation," Computer Vision and Image Understanding, vol. 103, no. 3, pp. 170–183, 2006.
[14] R. Collins and Y. Tsin, "Calibration of an outdoor active camera system," Proceedings of IEEE Computer Society, CVPR99, vol. 1, pp. 528–534, 1999.
[15] R. G. Willson, "Modeling and calibration of automated zoom lenses," PhD thesis, Carnegie Mellon University, 1994.
[16] K. Pankaj and D. Anthony, "Real time target tracking with pan tilt zoom camera," IEEE International Conference on Digital Image Computing: Techniques and Applications, Melbourne, VIC, Australia, pp. 492–497, 2009.
[17] V. Diogo, C. N. Jacinto, and G. Jose, "Assessing control modalities designed for pan-tilt surveillance cameras," 15th Portuguese Conference on Pattern Recognition, 2009.
[18] L. Diogo, "Target tracking with pan-tilt-zoom cameras," M.Sc. thesis, Instituto Superior Tecnico, 2011.
[19] L. Yunting, Z. Jun, W. Hu, and J. Tian, "Method for pan-tilt camera calibration using single control point," Journal of the Optical Society of America, vol. 32, no. 1, pp. 156–163, 2015.
[20] A. Casile and M. A. Giese, "Critical features for the recognition of biological motion," Journal of Vision, vol. 5, no. 4, pp. 6–6, 2005.
[21] L. I. Abdul-Kreem and H. Neumann, "Estimating visual motion using an event-based artificial retina," CCIS Communications in Computer and Information Science: Computer Vision, Imaging and Computer Graphics Theory and Applications, Springer International Publishing Switzerland, pp. 396–415, 2016.
[22] L. I. Abdul-Kreem and H. Neumann, "Neural mechanisms of cortical motion computation based on a neuromorphics sensory system," PLOS ONE, vol. 10, no. 11, pp. 1–33, 2015.
[23] L. I. Abdul-Kreem and H. Neumann, "Bioinspired model for motion estimation using address event representation," 10th International Conference on Computer Vision Theory and Applications, VISIGRAPP, pp. 335–346, 2015.
[24] L. I. Abdul-Kreem, "Computational architecture of a visual model for biological motions segregation," Network: Computation in Neural Systems, vol. 30, no. 1–4, pp. 58–78, 2019.
[25] P. Sturm, S. Ramalingam, J. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1–2, pp. 1–183, 2011.
[26] S. Aslani and H. Mahdavi-Nasab, "Optical flow based moving object detection and tracking for traffic surveillance," International Journal of Electrical and Computer Engineering, vol. 7, no. 9, pp. 1252–1256, 2013.
[27] B. Horn and B. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 1–3, pp. 185–203, 1981.
[28] J. Barron, D. Fleet, and S. S. Beauchemin, "Performance of optical flow techniques," International Journal of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.
[29] A. Bruhn, J. Weickert, and C. Schnörr, "Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods," International Journal of Computer Vision, vol. 61, no. 3, pp. 211–231, 2005.
[30] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, "High accuracy optical flow estimation based on a theory for warping," Proc. European Conference on Computer Vision, pp. 25–36, 2004.
[31] A. Agarwal, S. Gupta, D. Kumar, and D. K. Singh, "Review of optical flow technique for moving object detection," 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I), pp. 409–413, 2016.
[32] D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan, "Learning features by watching objects move," IEEE Conference on Computer Vision and Pattern Recognition, pp. 2701–2710, 2017.
[33] L. I. Abdul-Kreem, "Human action recognition based on motion velocity," Proc. SCEE2018, 3rd IEEE Scientific Conference of Electrical Engineering, Baghdad, Iraq, pp. 45–50, 2018.