TELKOMNIKA, Vol.16, No.6, December 2018, pp.3008~3015
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v16i6.9466  3008
Received April 2, 2018; Revised October 2, 2018; Accepted October 24, 2018
Particle Filter with Integrated Multiple Features for
Object Detection and Tracking
Muhammad Attamimi*1, Takayuki Nagai2, Djoko Purwanto3
1,3Department of Electrical Engineering, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
2Department of Mechanical Engineering and Intelligent Systems,
The University of Electro-Communications, Tokyo, Japan
*Corresponding author, e-mail: attamimi@ee.its.ac.id
Abstract
Considering objects in an environment (or scene), object detection is the first task that must be accomplished before those objects can be recognized. Two problems need to be considered in object detection. First, object detection based on a single feature is difficult for some types of objects and scenes; for example, detection based on color information will fail in a dark place. Second, the pose of an object in the scene is, in general, arbitrary. This paper aims to tackle these problems to enable the detection and tracking of various types of objects in various scenes. This study proposes a method for object detection and tracking using a particle filter and multiple features, consisting of color, texture, and depth information, that are integrated with adaptive weights. Experiments were conducted to validate the proposed method. The results revealed that the proposed method outperformed the previous method, which is based only on color information.
Keywords: object detection, object tracking, multiple features, features integration, particle filter
Copyright © 2018 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
Recognizing objects in a scene requires several steps, the first of which is detecting those objects. One of the challenges in object detection is to detect all target objects without fail. This means that the detection system needs to adapt to different kinds of scenes, especially when the illumination changes drastically, such as in dark places. The system must also deal with the diversity of the objects. These issues motivate us to propose an object detection method that can adapt to illumination changes. To solve the object detection problem, two issues must be addressed. First, detection using a single feature is difficult, because there are situations in which detection with that feature will certainly fail. For example, in a dark scene, a feature such as color does not give useful information. There is also the situation in which detection involves various types of objects; such detection is difficult even under normal illumination. For instance, using color information is straightforward for colorful objects, but such a system will fail when detecting objects without texture, such as white dishes, cups, and so forth. Therefore, the use of multiple features is crucial to compensate for the weakness of each feature in particular cases and to build stronger object detection.
Second, the pose of a target object in the scene is arbitrary. In general, several terms need to be considered when dealing with pose, such as rotation, translation, and depth (i.e., the position of the target object in the real world, which determines the projected size of the target object in the two-dimensional scene). If all possible poses are considered, the computational cost will be high. Hence, an object detection system needs to infer the likely pose. This study proposes object detection based on multiple features and a particle filter. Features consisting of color, texture, and depth information are used to solve the first problem. The second problem is solved by approximating the target object with a combination of several parts, such as rectangular parts of the target image. The particle filter addresses the second problem through its ability to infer the existence of the object in the scene. To determine the candidate regions of target objects, a probability map is generated.
Ultimately, the regions with high probabilities are selected as target regions. There are several works related to object detection and/or tracking using particle filters [1–4]. Feature hierarchies were studied in [5], whereas convolutional neural networks (CNNs) were modified and used in [6, 7]. Some works on the detection of moving objects were done in [8, 9]. In robotics, visual recognition for a cleaning task that includes the detection of objects on a table top was studied in [10]. Almost all of these studies used a single feature, namely color information, whereas our method is based on multiple features, consisting of not only color information but also texture and depth information, that are adaptively integrated; in contrast with the previous method, depth information is also considered.
The remainder of this study is organized as follows. An overview of the proposed method is presented in section 2. The extraction of the multiple features used in this study is discussed in section 3. Details of the proposed object detection and tracking using a particle filter are provided in section 4. The experimental setting and results are discussed in section 5. Finally, section 6 concludes this study.
2. Overview of the Proposed Method
In this study, a 3D visual sensor [11] is used. This sensor consists of one time-of-flight (TOF) camera and two CCD cameras, and it can acquire color information and point clouds, including depth information, in real time by calibrating the TOF camera and the two CCD cameras. Therefore, the color, texture, and depth information captured by the sensor are used to realize object detection and tracking. Multiple features can be the key to object detection in various environments. However, these features can make the detection decision confusing owing to false recognition in a particular situation or scene. For example, color information cannot be used when a scene is dark, because it leads to false recognition. Hence, the integration of multiple features is crucial when they are used. It should be noted that the features that are effective for object detection depend on the object's properties and the environment in which the object is placed. For instance, colorful objects or objects with rich textures can be detected using color and/or texture information. However, textureless objects such as white dishes are difficult to detect using color and/or texture information. Moreover, the environment also affects object detection. For example, a yellow plush toy is normally easy to detect using color information; however, if the background shares the same color as the toy, or under bad illumination, detection becomes difficult.
On the other hand, a textureless object is easier to detect against a colorful background, because its lack of texture is itself informative in such a situation. In other words, it is difficult to determine beforehand which feature should be used for detection, because it is difficult to know what kinds of objects lie in an unknown scene. Therefore, the proposed method uses the likelihood as an input to a feature-weighting process. This weighting process acts as feature selection while performing object detection.
Figure 1 illustrates the architecture of the proposed method. In the learning phase, multiple features consisting of color, texture, and depth information are extracted. The object-learning scenario used in [12] is adopted, in which the user shows the object to the robot during the learning phase. The reference images are divided into several parts to solve the problem of detecting objects in arbitrary poses. To achieve scale invariance, the features are extracted after rescaling the reference images. In the detection phase, the particles are initially located randomly in the scene. After resampling the particles several times, they concentrate on a certain position, which is a candidate location of a target object. Finally, a probability map is generated to determine the target object's region. Object tracking uses the same process as detection except for the initialization step: tracking is initialized at the detected position of the object.
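The detection loop described in this overview (random initialization, likelihood evaluation, resampling, concentration of particles) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the particle count, iteration count, jitter, and the toy likelihood are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(likelihood, width, height, n_particles=200, n_iters=10):
    """Generic particle-filter search: random init, weighting, resampling."""
    # 1) Particles are located randomly on the scene.
    particles = np.column_stack([rng.uniform(0, width, n_particles),
                                 rng.uniform(0, height, n_particles)])
    for _ in range(n_iters):
        # 2) Evaluate the (integrated) likelihood of each particle.
        w = np.array([likelihood(x, y) for x, y in particles])
        w /= w.sum()
        # 3) Resample: particles concentrate on high-likelihood positions.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        # 4) Diffuse slightly so the particles keep exploring.
        particles += rng.normal(0.0, 2.0, particles.shape)
    return particles

# Toy scene: one object centered at (60, 40).
peak = lambda x, y: np.exp(-((x - 60) ** 2 + (y - 40) ** 2) / 200.0)
final = detect(peak, width=320, height=240)
print(final.mean(axis=0))  # should concentrate near (60, 40)
```

After a few resampling rounds the particle cloud collapses onto the high-likelihood position, which is exactly the behavior the probability map in section 4.4 exploits.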
Figure 1. Object detection and tracking system.
3. Extraction of Multiple Features
In this study, features that are invariant to scale, rotation, and translation are needed. Histogram-based features are employed in the proposed method because they are rotation- and translation-invariant. Moreover, the depth information can be used to normalize the scale, which results in scale-invariant features. Since it is difficult to realize a feature that is view-invariant, this property is ensured by matching against features from various viewpoints, accumulated in the learning phase. The features employed in this study are discussed below. Color and texture information are used to distinguish among objects of similar shape but different color or texture. Color information is computed as a color histogram of hue (H) and saturation (S) in the HSV color space, which is chosen for its robustness to illumination changes. The values of H and S at each pixel inside the target object are quantized into 32 and 10 bins, respectively. In the implementation, a 320-dimensional feature vector is therefore used.
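This 320-dimensional color feature can be sketched as follows; the value ranges and the bin mapping are our assumptions, as the paper does not give implementation details.

```python
import numpy as np

def color_histogram(hsv_pixels, h_bins=32, s_bins=10):
    """32x10 hue-saturation histogram (320-dimensional), L1-normalized.
    Assumes H in [0, 360) and S in [0, 1] for each pixel of the region."""
    h, s = hsv_pixels[:, 0], hsv_pixels[:, 1]
    hi = np.minimum((h / 360.0 * h_bins).astype(int), h_bins - 1)
    si = np.minimum((s * s_bins).astype(int), s_bins - 1)
    hist = np.zeros(h_bins * s_bins)
    np.add.at(hist, hi * s_bins + si, 1.0)
    return hist / hist.sum()

# A region of uniform reddish pixels falls into a single (H, S) bin.
pixels = np.tile([[5.0, 0.9, 0.8]], (100, 1))  # (H, S, V) rows
hist = color_histogram(pixels)
print(hist.shape, hist.max())  # (320,) 1.0
```

Dropping the V channel is what gives the feature its robustness to illumination changes, as noted above.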
As for texture information, a circular census transform histogram (CCT) is proposed. This feature is an extension of the census transform histogram (CENTRIST) [13]. Since CENTRIST computes its feature by comparing each pixel to its 8-neighborhood pixels, the computational cost is low. However, CENTRIST is not rotation-invariant, and its feature representation is limited by its simplicity. To detect objects in various scenes, the CCT is introduced as a feature that compensates for these weaknesses of CENTRIST. Unlike the original CENTRIST, which uses 2-valued thresholding (i.e., 0 if less than the neighborhood pixel and 1 otherwise), the proposed CCT uses 3-valued thresholding: 0 if the difference between the pixel and its neighbor lies between the negative and positive predetermined thresholds, 1 if the difference is less than the negative threshold, and 2 if the difference is greater than the positive threshold. To achieve rotation invariance, the comparison of the original CENTRIST is modified from the 8-neighborhood to a circular P-neighborhood. Here, the circular P-neighborhood is found by calculating the P positions (x_p, y_p) of the "virtual pixels" p as follows:

x_p = x_c + R cos(2πp/P), y_p = y_c + R sin(2πp/P), (1)

where x_c and y_c represent the position of the target pixel, and R is a predetermined radius from the target pixel. The value of a "virtual pixel" is calculated using interpolation. In this study, P is set to 8. The 3-valued codes obtained from the 8 comparisons that are cyclic rotations of one another are put into the same bin. Finally, texture information is represented as an 834-dimensional feature vector.
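The CCT described above can be sketched as follows. Grouping the 3-valued codes by cyclic rotation yields exactly 834 rotation-invariant bins for P = 8 (the number of ternary necklaces of length 8), which matches the stated dimensionality; the threshold value and the interpolation details are our assumptions.

```python
import numpy as np

def bilinear(img, x, y):
    """Interpolated 'virtual pixel' value at a sub-pixel position (x, y)."""
    x0 = min(int(np.floor(x)), img.shape[1] - 2)
    y0 = min(int(np.floor(y)), img.shape[0] - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def cct_histogram(img, R=1.0, P=8, t=5.0):
    """Ternary census codes over a circular P-neighborhood (eq. 1), with
    cyclic rotations of a code mapped to one bin (834 bins for P = 8)."""
    hist = {}
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = []
            for p in range(P):
                xp = x + R * np.cos(2 * np.pi * p / P)
                yp = y + R * np.sin(2 * np.pi * p / P)
                d = bilinear(img, xp, yp) - img[y, x]
                # 3-valued thresholding: 0 within +-t, 1 below -t, 2 above +t
                code.append(0 if -t <= d <= t else (1 if d < -t else 2))
            # Rotation invariance: keep the lexicographically smallest rotation
            key = min(tuple(code[i:] + code[:i]) for i in range(P))
            hist[key] = hist.get(key, 0) + 1
    return hist

img = np.arange(100, dtype=float).reshape(10, 10)  # toy gradient image
hist = cct_histogram(img)
print(sum(hist.values()))  # 64 interior pixels voted
```

Rotating the image permutes each code cyclically, so the canonical-rotation key sends the rotated pattern to the same bin, which is what makes the histogram rotation-invariant.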
The histogram of depth (HOD) [11] is used as the depth information. This is a histogram of the depth values of all pixels in an object region. To detect objects in the scene, the HOD is modified as follows. First, in the learning phase, the mean distance z̄ of the object region is calculated. Next, z_min = z̄ − k and z_max = z̄ + k are determined, where k is the largest distance from z̄. Then, a depth value z is voted into bin b as follows:

b = ⌊N_d (z − z_min)/(z_max − z_min)⌋, (2)

where N_d is the number of bins.
Moreover, the background can adversely affect the calculation of the HOD inside a particle's region. To tackle this problem, the depth information is filtered by keeping only the depth values that lie between the minimum depth value inside the region and a predetermined threshold.
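A sketch of the modified HOD with background filtering; the bin count, the band width, and the variable names are assumptions for illustration.

```python
import numpy as np

def hod(depths, z_mean, k, n_bins=20, fg_band=0.3):
    """Histogram of depth (HOD) over a particle's region. Background is
    suppressed by keeping only depths within fg_band metres of the nearest
    point inside the region (the predetermined threshold)."""
    d = np.asarray(depths, dtype=float)
    d = d[d <= d.min() + fg_band]              # filter out background pixels
    z_min, z_max = z_mean - k, z_mean + k      # range fixed in learning phase
    b = np.floor(n_bins * (d - z_min) / (z_max - z_min)).astype(int)  # eq. (2)
    b = b[(b >= 0) & (b < n_bins)]             # drop out-of-range depths
    hist = np.bincount(b, minlength=n_bins).astype(float)
    return hist / max(hist.sum(), 1.0)

region = [1.02, 1.05, 1.10, 1.08, 2.50]        # metres; 2.50 is background
h = hod(region, z_mean=1.05, k=0.25)
print(h.sum(), h.shape)  # 1.0 (20,)
```

The background pixel at 2.50 m is discarded before voting, so it cannot distort the histogram of the object's depth profile.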
4. Particle Filter Based Object Detection and Tracking
The important processes of this study are discussed below.
4.1. Features Integration
In this paper, general particle filter processing is used to perform object detection. For particle n, the particle's likelihood P_n is calculated as follows:

P_n ∝ exp{−w_c λ_c (D_n^c)² − w_t λ_t (D_n^t)² − w_s λ_s (D_n^s)²}, (3)

where D^c, D^t, and D^s represent the Bhattacharyya distances of the color, texture, and depth information, respectively. The Bhattacharyya distance D(h_1, h_2) between the M-dimensional feature vectors h_1 and h_2 is calculated as follows:

D(h_1, h_2) = √(1 − Σ_{m=1}^{M} √(h_1(m) h_2(m))). (4)

Moreover, λ_* is a predetermined coefficient, dependent on the type of feature, that adjusts the variance of the distances. The weight of each feature, w_*, is updated as follows:

w_* = min(1 / min_n(λ_* (D_n^*)²), 1). (5)

For a given feature, if the distances of all particles are large, that feature cannot afford effective detection in the scene, and it would decrease the integrated likelihood of all particles. Hence, in a dark scene, where color information cannot be used owing to extremely low intensities, the distances of all particles will be large and the weight of the color information will be small, which automatically nullifies the color information. On the other hand, if the colors of the objects and the background are similar, which is the case when the distances are small everywhere the objects are located, all particles will have nearly the same color likelihood, close to one, which again means that the color information is effectively nullified.
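Equations (3)–(5) can be sketched together as follows; the λ values and the toy distances are assumed, and the code illustrates the weighting behavior rather than reproducing the authors' implementation.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya distance between two normalized histograms (eq. 4)."""
    return np.sqrt(max(1.0 - float(np.sum(np.sqrt(h1 * h2))), 0.0))

def integrated_likelihoods(D, lam):
    """D[f] holds the distances of all particles for feature f; the weight
    of f (eq. 5) shrinks when even the best particle's distance is large,
    and the weighted exponents are fused into one likelihood (eq. 3)."""
    logp = 0.0
    for f in D:
        d2 = lam[f] * np.asarray(D[f]) ** 2
        w = min(1.0 / d2.min(), 1.0)   # eq. (5)
        logp = logp - w * d2           # exponent of eq. (3), per particle
    return np.exp(logp)

# Toy case: color is uninformative everywhere, depth separates particle 0.
D = {"color": [0.90, 0.92, 0.91], "texture": [0.50, 0.60, 0.55],
     "depth": [0.10, 0.70, 0.80]}
lam = {"color": 10.0, "texture": 10.0, "depth": 10.0}
P = integrated_likelihoods(D, lam)
print(int(P.argmax()))  # 0: the particle with the small depth distance
```

In the toy case the color distances are uniformly large, so the color weight collapses and the depth feature decides which particle wins, mirroring the dark-scene behavior described above.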
4.2. Determination of Window Size of the Particle based on Distance
In general, the window size of each particle is one of the parameters of a particle filter and is obtained by sampling. Thanks to the 3D visual sensor, the depth information can be used to calculate the window sizes of the objects as well as of the particles. Therefore, the only remaining parameters of the particle filter are its position (x, y) in the scene. Moreover, to realize scale invariance, a pyramid of reference images is adopted (i.e., several reference images captured at several scales are provided). The closest scale is then selected by comparing the reference images with the particle's size. Finally, the likelihood of the feature is calculated using the selected scale.
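The depth-based window size and the scale selection from the image pyramid can be sketched as follows; the capture distances and the focal length are assumed values.

```python
import numpy as np

def window_size(object_width_m, depth_m, focal_px=525.0):
    """Pinhole projection: on-image window width in pixels for an object of
    known physical width at the particle's depth (focal length assumed)."""
    return object_width_m * focal_px / depth_m

def select_scale(particle_depth, pyramid_depths):
    """Pick the pyramid level whose capture distance is closest to the
    particle's depth, so features are compared at a matching scale."""
    return int(np.abs(np.asarray(pyramid_depths) - particle_depth).argmin())

levels = [0.5, 1.0, 1.5, 2.0]          # capture distances of reference images
print(select_scale(1.15, levels))      # 1 -> the 1.0 m reference image
print(window_size(0.2, 1.0))           # 105.0 px for a 20 cm wide object
```

Deriving the window size from depth removes scale from the particle state, which is why only (x, y) remains to be sampled.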
4.3. Reference Image Division
Although the features used in this study are rotation-invariant, they are not invariant to three-dimensional rotation. Handling this type of rotation is necessary for the particle filter to detect objects, which in general lie in arbitrary poses. For example, a plastic bottle lying horizontally on the table (rather than standing) is difficult to detect even with rotation-invariant features. To solve this problem, the reference images are divided into several rectangles, as shown in Figure 1. Dividing the reference image into several parts solves the problem of three-dimensional rotation in object detection, and it also mitigates the occlusion problem, because a partial view can still be detected by the proposed method.
4.4. Candidate's Regions Determination
In this study, the candidate regions in object detection are represented by a probability map. For N particles, each with likelihood P_n, the probability map at position (x, y) in the scene is calculated as follows:

P(x, y) = (1/P_max) Σ_{n=1}^{N} (P_n(x, y) − P_min)/(P_max − P_min), (6)

where P_min and P_max represent the minimum and maximum likelihoods of object detection for a given scene, respectively. For the input image in Figure 2(a), the probability map is generated as shown in Figure 2(b). The positions (x, y) whose probability P(x, y) is higher than a predetermined threshold are chosen as candidate regions.
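Equation (6) and the thresholding step can be sketched as follows; the grid size, particle positions, and threshold are toy values.

```python
import numpy as np

def probability_map(shape, particles, likelihoods, threshold=0.5):
    """Accumulate normalized particle likelihoods into a per-pixel map
    (eq. 6) and threshold it to obtain candidate positions."""
    pmap = np.zeros(shape)
    p = np.asarray(likelihoods, dtype=float)
    p_min, p_max = p.min(), p.max()
    for (x, y), pn in zip(particles, p):
        pmap[y, x] += (pn - p_min) / (p_max - p_min)
    pmap /= p_max
    return pmap, np.argwhere(pmap > threshold)

parts = [(3, 4), (3, 4), (7, 1)]       # two particles agree on one position
pmap, cand = probability_map((10, 10), parts, [0.9, 0.8, 0.2])
print(len(cand))  # 1: only the position supported by both particles
```

Positions supported by several high-likelihood particles accumulate mass and cross the threshold, while isolated low-likelihood particles do not.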
Figure 2. Target object determination: (a) input image, and (b) corresponding probability map.
4.5. Object Tracking
Object tracking is realized by initializing the particles at the position of a detected object. This can be done sequentially after object detection because the same framework is used.
5. Experiments
Experiments were conducted to validate the proposed object detection and tracking.
5.1. Experimental Setting
Experiments were carried out in a living room using the 18 objects shown in Figure 3(b). A user showed each object at various angles to the robot, and in the learning phase the robot generated a database containing feature vectors of 10 frames per object (see Figure 3(a) for the object-learning scenario). In the detection phase, several scenes were used to test the proposed method.
Figure 3. (a) Object learning scenario: a user is showing the target object to the robot [12],
whereas the red rectangle depicts the 3D visual sensor proposed in [11], and (b) examples of
objects used in the experiment.
5.2. Evaluation of Object Detection
In this study, to validate the proposed object detection, 20 scenes were captured and
100 synthesized scenes were generated (hereafter, these scenes were defined as
normal-scenes). These scenes were generated by locating targets object randomly in captured
scenes. Moreover, 100 scenes were also provided with a different color (hereafter, these
scenes were defined as different-color-scenes) and 100 dark scenes (hereafter, these scenes
were defined as dark-scenes), which resulted in 300 scenes in total were used in this
experiments. Figure 4 shows the example of synthesized scenes.
Figure 4. Examples of synthesized scenes: (a) normal-scene, (b) different-color-scene, and (c) dark-scene, each corresponding to the same captured scene.
The synthesized scenes were then input to the detection system, and the recall, precision, and F-measure of the detected objects were calculated. Here, if "true detected" is defined as the detected regions that belong to the true regions, then recall is the fraction of "true detected" over the true regions, whereas precision is the fraction of "true detected" over the detected regions. The F-measure is the harmonic mean of precision and recall, which describes both values at once. The higher these values are, the better the detection system performs. The proposed method was compared to the previous method, which used color information only. Table 1 shows the results of object detection in the various types of scenes. In the normal-scenes (left side of Table 1), there is a slight improvement in recall and a large improvement in precision and F-measure. In the different-color-scenes (middle of Table 1) and dark-scenes (right side of Table 1), the proposed method clearly outperformed the previous method, which could hardly detect any objects at all. Thanks to the depth information, reliable information is still available in the dark- and different-color-scenes. The results also show that the proposed method can adapt by selecting appropriate features when the environment changes. Several examples of object detection results are shown in Figure 5; the proposed method can detect multiple objects as well as objects whose poses differ from those seen in the learning phase.
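The evaluation metrics defined above can be computed as follows; the counts in the example are invented for illustration.

```python
def detection_metrics(true_detected, n_true, n_detected):
    """Recall, precision, and F-measure as defined in section 5.2."""
    recall = true_detected / n_true
    precision = true_detected / n_detected
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, f_measure

# 8 correct detections out of 10 true regions and 12 detected regions.
r, p, f = detection_metrics(8, 10, 12)
print(round(r, 3), round(p, 3), round(f, 3))  # 0.8 0.667 0.727
```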
Table 1. The Detection Rate in Various Types of Scenes

Method    | Normal scenes               | Different-color scenes      | Dark scenes
          | Recall  Precision  F-meas.  | Recall  Precision  F-meas.  | Recall  Precision  F-meas.
Previous  | 0.825   0.623      0.651    | 0.059   0.059      0.059    | 0       0          0
Proposed  | 0.863   0.825      0.837    | 0.775   0.488      0.552    | 0.605   0.483      0.516
Figure 5. Examples of object detection results: (a) target object, (b) input scene with the final
state of particles, and (c) detected objects.
5.3. Discussion
As mentioned above, to perform object detection with various types of objects in various environments, a feature selection mechanism is needed to choose the features suitable for detection. To demonstrate this, the proposed object detection and tracking was tested in scenes whose illumination gradually changes from normal to dark. Figure 6 shows the object tracking results under this condition. One can see that object tracking can be performed by the proposed system (Figure 6, top). The weights of the features changed during the tracking phase, as shown in Figure 6 (bottom). In the scenes with normal illumination, the weights of both the color and depth information were high, whereas the weight of the texture information changed from high in the first frame to low; in the tracking phase, the pose of the object was difficult to match using texture information owing to the limited reference images collected in the learning phase. In the dark scenes, by contrast, the weights of the color and texture information were low, whereas the weight of the depth information was high, because depth was the only reliable information. Overall, object tracking can be performed by the proposed method in various environments.
Figure 6. Examples of object tracking results. Top-side of the figure depicts the tracked target
objects in various illuminating condition, whereas the bottom-side illustrates
the corresponding weights.
6. Conclusions
In this paper, a method to detect and track target objects based on a particle filter with integrated multiple features has been proposed. The proposed method uses color, texture, and depth information combined by an adaptive weighting mechanism that considers both the object types and the complexity of the scene. Thanks to the proposed method, various types of objects can be detected in various scenes. The results showed that in normal scenes the proposed method outperformed the previous method, which is based on color information only. In the extreme scenes, namely the different-color and dark scenes, the previous method was not able to detect the objects, whereas the proposed method was. The experimental results have also shown that the proposed method can detect objects located in arbitrary poses. Moreover, the proposed method can also perform object tracking in such scenes.
The current object detection system can be considered instance detection. Developing not only instance detection but also category-level object detection is our future work. Moreover, inspired by [14], the integration and improvement of detection systems through hierarchical object detection will be studied in the future.
References
[1] Okuma K, Taleghani A, de Freitas N, Little JJ, Lowe DG. A Boosted Particle Filter: Multitarget
Detection and Tracking. 2004; 28–39.
[2] Czyz J. Object Detection in Video via Particle Filters. 18th Int Conf Pattern Recognit. 2006: 820–823.
[3] Czyz J, Ristic B, Macq BM. A particle filter for joint detection and tracking of color objects. Image Vis
Comput. 2007; 25:1271–1281.
[4] Obata M, Nishida T, Miyagawa H, Ohkawa F. Target tracking and posture estimation of 3d objects by
using particle filter and parametric eigenspace method. Proc of Comput Vis, & Pattern Recognit.
2007; 67–72.
[5] Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and
semantic segmentation. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2014; 580–587.
[6] Girshick R. Fast R-CNN. Proc IEEE Int Conf Comput Vis (ICCV). 2015: 1440–1448.
[7] Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv Neural Inf Process Syst. 2015; 91–99.
[8] Dong W, Wang Y, Jing W, Peng T. An Improved Gaussian Mixture Model Method for Moving Object
Detection. TELKOMNIKA Telecommunication Computing Electronics Control. 2016; 14(3A): 115.
[9] Sumardi, Taufiqurrahman M, Riyadi MA. Street mark detection using raspberry pi for self-driving
system.TELKOMNIKA Telecommunication Computing Electronics Control. 2018; 16(2):629–634.
[10] Attamimi M, Araki T, Nakamura T, Nagai T. Visual recognition system for cleaning tasks by humanoid
robots. Int J Adv Robot Syst. 2013; 10.
[11] Attamimi M, Mizutani A, Nakamura T, Nagai T, Funakoshi K, Nakano M. Real-time 3D visual sensor
for robust object recognition. In: IEEE/RSJ 2010 International Conference on Intelligent Robots and
Systems, IROS 2010 - Conference Proceedings. 2010.
[12] Attamimi M, Mizutani A, Nakamura T, Sugiura K, Nagai T, Iwahashi N, et al. Learning novel objects
using out-of-vocabulary word segmentation and object extraction for home assistant robots. In:
Proceedings - IEEE International Conference on Robotics and Automation. 2010.
[13] Wu J, Christensen HI, Rehg JM. Visual Place Categorization: Problem, dataset, and algorithm. In:
2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2009. p. 4763–4770.
[14] Attamimi M, Nakamura T, Nagai T. Hierarchical multilevel object recognition using Markov model. In:
Proceedings - International Conference on Pattern Recognition. 2012.

Feature Extraction for Image Classification and Analysis with Ant Colony Opti...Feature Extraction for Image Classification and Analysis with Ant Colony Opti...
Feature Extraction for Image Classification and Analysis with Ant Colony Opti...sipij
 
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...ijsrd.com
 
Survey of The Problem of Object Detection In Real Images
Survey of The Problem of Object Detection In Real ImagesSurvey of The Problem of Object Detection In Real Images
Survey of The Problem of Object Detection In Real ImagesCSCJournals
 
SPEEDY OBJECT DETECTION BASED ON SHAPE
SPEEDY OBJECT DETECTION BASED ON SHAPESPEEDY OBJECT DETECTION BASED ON SHAPE
SPEEDY OBJECT DETECTION BASED ON SHAPEijma
 
IRJET - Direct Me-Nevigation for Blind People
IRJET -  	  Direct Me-Nevigation for Blind PeopleIRJET -  	  Direct Me-Nevigation for Blind People
IRJET - Direct Me-Nevigation for Blind PeopleIRJET Journal
 
A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ...
A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ...A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ...
A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ...toukaigi
 
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...IAEME Publication
 
Object-Oriented Approach of Information Extraction from High Resolution Satel...
Object-Oriented Approach of Information Extraction from High Resolution Satel...Object-Oriented Approach of Information Extraction from High Resolution Satel...
Object-Oriented Approach of Information Extraction from High Resolution Satel...iosrjce
 
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operatorProposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operatorQUESTJOURNAL
 
Detection of skin diasease using curvlets
Detection of skin diasease using curvletsDetection of skin diasease using curvlets
Detection of skin diasease using curvletseSAT Publishing House
 
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...INFOGAIN PUBLICATION
 
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSDETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSgerogepatton
 
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSDETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSijaia
 

Similar to Particle Filter with Integrated Multiple Features for Object Detection and Tracking (20)

International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and Development
 
Handling Uncertainty under Spatial Feature Extraction through Probabilistic S...
Handling Uncertainty under Spatial Feature Extraction through Probabilistic S...Handling Uncertainty under Spatial Feature Extraction through Probabilistic S...
Handling Uncertainty under Spatial Feature Extraction through Probabilistic S...
 
Ijarcet vol-2-issue-4-1383-1388
Ijarcet vol-2-issue-4-1383-1388Ijarcet vol-2-issue-4-1383-1388
Ijarcet vol-2-issue-4-1383-1388
 
An improved particle filter tracking
An improved particle filter trackingAn improved particle filter tracking
An improved particle filter tracking
 
CVGIP 2010 Part 2
CVGIP 2010 Part 2CVGIP 2010 Part 2
CVGIP 2010 Part 2
 
Feature Extraction for Image Classification and Analysis with Ant Colony Opti...
Feature Extraction for Image Classification and Analysis with Ant Colony Opti...Feature Extraction for Image Classification and Analysis with Ant Colony Opti...
Feature Extraction for Image Classification and Analysis with Ant Colony Opti...
 
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
 
Survey of The Problem of Object Detection In Real Images
Survey of The Problem of Object Detection In Real ImagesSurvey of The Problem of Object Detection In Real Images
Survey of The Problem of Object Detection In Real Images
 
SPEEDY OBJECT DETECTION BASED ON SHAPE
SPEEDY OBJECT DETECTION BASED ON SHAPESPEEDY OBJECT DETECTION BASED ON SHAPE
SPEEDY OBJECT DETECTION BASED ON SHAPE
 
IRJET - Direct Me-Nevigation for Blind People
IRJET -  	  Direct Me-Nevigation for Blind PeopleIRJET -  	  Direct Me-Nevigation for Blind People
IRJET - Direct Me-Nevigation for Blind People
 
A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ...
A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ...A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ...
A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ...
 
O180305103105
O180305103105O180305103105
O180305103105
 
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...
 
Object-Oriented Approach of Information Extraction from High Resolution Satel...
Object-Oriented Approach of Information Extraction from High Resolution Satel...Object-Oriented Approach of Information Extraction from High Resolution Satel...
Object-Oriented Approach of Information Extraction from High Resolution Satel...
 
H017344752
H017344752H017344752
H017344752
 
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operatorProposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
 
Detection of skin diasease using curvlets
Detection of skin diasease using curvletsDetection of skin diasease using curvlets
Detection of skin diasease using curvlets
 
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
 
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSDETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
 
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSDETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
 

More from TELKOMNIKA JOURNAL

Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...TELKOMNIKA JOURNAL
 
Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
 
Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
 
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
 
Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
 
Efficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaEfficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
 
Design and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireDesign and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
 
Wavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkWavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
 
A novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsA novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
 
Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
 
Brief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesBrief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesTELKOMNIKA JOURNAL
 
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
 
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemEvaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
 
Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
 
Reagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorReagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
 
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
 
A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
 
Electroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksElectroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
 
Adaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingAdaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
 
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
 

More from TELKOMNIKA JOURNAL (20)

Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...
 
Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...
 
Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...
 
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
 
Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...
 
Efficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaEfficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antenna
 
Design and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireDesign and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fire
 
Wavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkWavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio network
 
A novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsA novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bands
 
Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...
 
Brief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesBrief note on match and miss-match uncertainties
Brief note on match and miss-match uncertainties
 
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
 
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemEvaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
 
Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...
 
Reagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorReagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensor
 
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
 
A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...
 
Electroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksElectroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networks
 
Adaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingAdaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imaging
 
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
 

Recently uploaded

Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaOmar Fathy
 
Augmented Reality (AR) with Augin Software.pptx
Augmented Reality (AR) with Augin Software.pptxAugmented Reality (AR) with Augin Software.pptx
Augmented Reality (AR) with Augin Software.pptxMustafa Ahmed
 
Memory Interfacing of 8086 with DMA 8257
Memory Interfacing of 8086 with DMA 8257Memory Interfacing of 8086 with DMA 8257
Memory Interfacing of 8086 with DMA 8257subhasishdas79
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayEpec Engineered Technologies
 
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...drmkjayanthikannan
 
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxHOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxSCMS School of Architecture
 
fitting shop and tools used in fitting shop .ppt
fitting shop and tools used in fitting shop .pptfitting shop and tools used in fitting shop .ppt
fitting shop and tools used in fitting shop .pptAfnanAhmad53
 
Basic Electronics for diploma students as per technical education Kerala Syll...
Basic Electronics for diploma students as per technical education Kerala Syll...Basic Electronics for diploma students as per technical education Kerala Syll...
Basic Electronics for diploma students as per technical education Kerala Syll...ppkakm
 
S1S2 B.Arch MGU - HOA1&2 Module 3 -Temple Architecture of Kerala.pptx
S1S2 B.Arch MGU - HOA1&2 Module 3 -Temple Architecture of Kerala.pptxS1S2 B.Arch MGU - HOA1&2 Module 3 -Temple Architecture of Kerala.pptx
S1S2 B.Arch MGU - HOA1&2 Module 3 -Temple Architecture of Kerala.pptxSCMS School of Architecture
 
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdfAldoGarca30
 
Post office management system project ..pdf
Post office management system project ..pdfPost office management system project ..pdf
Post office management system project ..pdfKamal Acharya
 
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...josephjonse
 
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills KuwaitKuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwaitjaanualu31
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.Kamal Acharya
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXssuser89054b
 
Worksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptxWorksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptxMustafa Ahmed
 
Electromagnetic relays used for power system .pptx
Electromagnetic relays used for power system .pptxElectromagnetic relays used for power system .pptx
Electromagnetic relays used for power system .pptxNANDHAKUMARA10
 
AIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsAIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsvanyagupta248
 
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARHAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARKOUSTAV SARKAR
 

Recently uploaded (20)

Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS Lambda
 
Augmented Reality (AR) with Augin Software.pptx
Augmented Reality (AR) with Augin Software.pptxAugmented Reality (AR) with Augin Software.pptx
Augmented Reality (AR) with Augin Software.pptx
 
Memory Interfacing of 8086 with DMA 8257
Memory Interfacing of 8086 with DMA 8257Memory Interfacing of 8086 with DMA 8257
Memory Interfacing of 8086 with DMA 8257
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power Play
 
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
 
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxHOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
 
fitting shop and tools used in fitting shop .ppt
fitting shop and tools used in fitting shop .pptfitting shop and tools used in fitting shop .ppt
fitting shop and tools used in fitting shop .ppt
 
Basic Electronics for diploma students as per technical education Kerala Syll...
Basic Electronics for diploma students as per technical education Kerala Syll...Basic Electronics for diploma students as per technical education Kerala Syll...
Basic Electronics for diploma students as per technical education Kerala Syll...
 
S1S2 B.Arch MGU - HOA1&2 Module 3 -Temple Architecture of Kerala.pptx
S1S2 B.Arch MGU - HOA1&2 Module 3 -Temple Architecture of Kerala.pptxS1S2 B.Arch MGU - HOA1&2 Module 3 -Temple Architecture of Kerala.pptx
S1S2 B.Arch MGU - HOA1&2 Module 3 -Temple Architecture of Kerala.pptx
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
 
Post office management system project ..pdf
Post office management system project ..pdfPost office management system project ..pdf
Post office management system project ..pdf
 
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
 
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills KuwaitKuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 
Worksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptxWorksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptx
 
Electromagnetic relays used for power system .pptx
Electromagnetic relays used for power system .pptxElectromagnetic relays used for power system .pptx
Electromagnetic relays used for power system .pptx
 
AIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsAIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech students
 
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARHAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
 

Particle Filter with Integrated Multiple Features for Object Detection and Tracking

TELKOMNIKA, Vol. 16, No. 6, December 2018, pp. 3008~3015
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v16i6.9466
Received April 2, 2018; Revised October 2, 2018; Accepted October 24, 2018

Muhammad Attamimi*1, Takayuki Nagai2, Djoko Purwanto3
1,3 Department of Electrical Engineering, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
2 Department of Mechanical Engineering and Intelligent Systems, The University of Electro-Communications, Tokyo, Japan
*Corresponding author, e-mail: attamimi@ee.its.ac.id

Abstract
Object detection is the first task that must be accomplished to recognize the objects in an environment (or scene). Two problems need to be considered. First, detection based on a single feature is unreliable across the variety of objects and scenes; for example, detection based on color information fails in dark places. Second, the pose of an object in a scene is, in general, arbitrary. This paper tackles both problems to enable the detection and tracking of various types of objects in various scenes. We propose a method for object detection and tracking that uses a particle filter together with multiple features (color, texture, and depth information) integrated by adaptive weights. Experiments conducted to validate the proposed method show that it outperforms a previous method based on color information alone.

Keywords: object detection, object tracking, multiple features, feature integration, particle filter

Copyright © 2018 Universitas Ahmad Dahlan. All rights reserved.

1. Introduction
Recognizing objects in a scene involves several steps. The first step is detecting those objects.
One of the challenges in object detection is to detect all target objects without fail. The detection system must therefore adapt to different kinds of scenes, especially when the illumination changes drastically, such as in dark places, and it must also cope with the diversity of the objects involved. These issues motivate us to propose an object detection method that is able to adapt to illumination changes.

To solve the object detection problem, two issues must be addressed. First, detection using a single feature is difficult, because there are situations in which that feature is guaranteed to fail. In a dark scene, for example, color provides no useful information. Detection involving various types of objects is difficult even under normal illumination: color information works well for colorful objects, but such a system fails on objects without texture, such as white dishes and cups. The use of multiple features is therefore crucial to compensate for each feature's weakness in particular cases and to build a stronger object detector. Second, the pose of a target object in the scene is arbitrary. Several terms must be considered when dealing with the pose: rotation, translation, and depth (the position of the target object in the real world, which determines the projected size of the object in the two-dimensional scene). If all possible poses were considered, the computational cost would be prohibitive; hence, the detection system needs to infer the likely pose.

This study proposes object detection based on multiple features and a particle filter. Color, texture, and depth information are used to solve the first problem.
The second problem is solved by approximating the target object with a combination of several parts, namely rectangular parts of the target image. The particle filter completes the solution of the second problem through its ability to infer the existence of the object in the scene. To determine candidate regions of target objects, a probability map is generated.
Ultimately, the regions with high probabilities are selected as target regions.

Several works relate to object detection and/or tracking using particle filters [1-4]. Feature hierarchies were studied in [5], whereas convolutional neural networks (CNNs) were modified and used in [6, 7]. Work on the detection of moving objects was done in [8, 9]. In robotics, visual recognition for cleaning tasks, including the detection of objects on a table top, was studied in [10]. Almost all of these studies used a single feature, namely color information, whereas our method is based on multiple features (not only color but also texture and depth information) that are adaptively integrated; the previous methods do not consider depth information.

The remainder of this paper is organized as follows. An overview of the proposed method is presented in section 2. The extraction of the multiple features used in this study is discussed in section 3. Details of the proposed object detection and tracking using a particle filter are provided in section 4. The experimental setting and results are discussed in section 5. Finally, section 6 concludes this study.

2. Overview of the Proposed Method
In this study, a 3D visual sensor [11] is used. The sensor consists of one time-of-flight (TOF) camera and two CCD cameras, and it acquires color information and point clouds (including depth information) in real time by calibrating the TOF camera against the two CCD cameras. The color, texture, and depth information captured by the sensor are used to realize object detection and tracking. Multiple features can be the key to object detection in various environments.
However, these features can also confuse the detection decision through false recognition in particular situations or scenes. For example, color information cannot be used when a scene is dark, because it leads to false recognition. Hence, the integration of multiple features is itself a crucial problem. Note that which features are effective for detection depends on the object's properties and on the environment in which the object is placed. For instance, colorful objects or objects with rich textures can be detected using color and/or texture information, but textureless objects such as white dishes are difficult to detect that way. The environment also affects detection: a yellow plush toy is normally easy to detect using color information, yet if the background shares the same color or the illumination is bad, detection becomes difficult. On the other hand, textureless objects are easier to detect against a colorful background, because their very lack of texture is informative in that situation. In short, it is difficult to decide beforehand which feature should be used, because one cannot know what kinds of objects lie in an unknown scene. Therefore, the proposed method uses the likelihood as the input to a feature-weighting process, which acts as feature selection while performing detection.

Figure 1 illustrates the architecture of the proposed method. In the learning phase, multiple features consisting of color, texture, and depth information are extracted. The object-learning scenario of [12] is adopted, in which the user shows the object to the robot. The reference images are divided into several parts to handle detection under arbitrary poses.
To realize scale invariance, the features are extracted after rescaling the reference images. In the detection phase, particles are initially placed at random over the scene. After resampling the particles several times, they concentrate on certain positions, which become candidates for target objects. Finally, a probability map is generated to determine the target object's region. Object tracking uses the same processing as detection except for the initialization step, which is the detected position of the object.
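The random-initialization, likelihood evaluation, and resampling loop described above can be sketched as follows. This is a schematic illustration only, not the authors' implementation; the function name and the multinomial resampling scheme are our assumptions.

```python
import random

def resample(particles, likelihoods):
    """Draw a new particle set in proportion to the likelihoods, so that
    after a few iterations the particles concentrate on positions that
    are likely to contain the target object."""
    total = sum(likelihoods)
    if total == 0:                      # no evidence yet: keep the set as-is
        return list(particles)
    weights = [l / total for l in likelihoods]
    return random.choices(particles, weights=weights, k=len(particles))
```

After several such iterations the surviving positions feed the probability map; tracking reuses the same loop seeded at the detected position instead of at random.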
Figure 1. Object detection and tracking system.

3. Extraction of Multiple Features
This study requires features that are invariant to scale, rotation, and translation. Histogram-based features are employed because they are rotation- and translation-invariant; moreover, the depth information can be used to normalize the scale, yielding scale-invariant features. Since it is difficult to design a feature that is view-invariant, this property is ensured by matching against features from various viewpoints accumulated in the learning phase. The features employed in this study are discussed below.

Color and texture information are used to distinguish objects of similar shape but different color or texture. Color information is calculated as a histogram of hue (H) and saturation (S) in the HSV color space, chosen for its robustness to illumination changes. The values of H and S at each pixel inside the target object are quantized into 32 and 10 bins, respectively; the implementation therefore uses a 320-dimensional feature vector.

For texture information, a circular census transform histogram (CCT) is proposed. This feature is an extension of the census transform histogram (CENTRIST) [13]. Since CENTRIST computes its feature by comparing each pixel with its 8-neighborhood, its computational cost is low; however, CENTRIST is not rotation-invariant and its representation is limited by its simplicity. To detect objects in various scenes, the CCT is introduced to compensate for these weaknesses.
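The H-S color histogram described above can be sketched as follows. The bin sizes (32 and 10) are from the text; the function name and the assumed value ranges (H in [0, 360), S in [0, 1)) are ours.

```python
def color_histogram(pixels, h_bins=32, s_bins=10):
    """Build a normalized hue-saturation histogram over the pixels of an
    object region; returns a flat h_bins*s_bins (= 320) feature vector."""
    hist = [0.0] * (h_bins * s_bins)
    for h, s in pixels:                       # h in [0, 360), s in [0, 1)
        hi = min(int(h / 360.0 * h_bins), h_bins - 1)
        si = min(int(s * s_bins), s_bins - 1)
        hist[hi * s_bins + si] += 1.0
    total = sum(hist) or 1.0
    return [v / total for v in hist]          # bins sum to 1
```

Because only pixels inside the object region are voted and the result is normalized, the histogram is translation-invariant and independent of the region's pixel count.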
Unlike the original CENTRIST, which uses 2-valued thresholding (0 if the pixel is less than its neighbor and 1 otherwise), the proposed CCT uses 3-valued thresholding: the code is 0 if the difference between the pixel and its neighbor lies between a negative predetermined threshold and the positive one, 1 if the difference is less than the negative threshold, and 2 if it is greater than the positive threshold. To achieve rotation invariance, the comparison of the original CENTRIST is modified from the 8-neighborhood to a circular P-neighborhood, found by calculating the P positions (x_p, y_p) of "virtual pixels" p as follows:

x_p = x_c + R cos(2πp/P),  y_p = y_c + R sin(2πp/P),  (1)

where x_c and y_c denote the position of the target pixel and R is a predetermined radius from it. The value of a "virtual pixel" is calculated by interpolation. In this study, P is set to 8. The combinations of the eight 3-valued comparisons that are circular shifts of one another are put into the same bin, so texture information is finally represented as an 834-dimensional feature vector.

The histogram of depth (HOD) [11] is used as depth information; it is a histogram of the depth values of all pixels in an object region. To detect objects in a scene, HOD is modified as follows. In the learning phase, the mean distance z̄ of the object region is calculated, and z_min = z̄ − k and z_max = z̄ + k are determined, where k is the largest distance from z̄. A depth value z is then voted into bin b as

b = ⌊N_d (z − z_min) / (z_max − z_min)⌋,  (2)

where N_d is the number of bins. Moreover, the background can adversely affect the calculation of HOD inside a particle's region. To tackle this problem, the depth information is filtered by keeping only depth values between the minimum depth inside the region and a predetermined threshold.

4. Particle Filter Based Object Detection and Tracking
The important processes of this study are discussed below.

4.1. Feature Integration
In this paper, general particle filter processing is used to perform object detection. For particle n, the likelihood P_n is calculated as

P_n ∝ exp{−w_c λ_c (D_n^c)² − w_t λ_t (D_n^t)² − w_s λ_s (D_n^s)²},  (3)

where D^c, D^t, and D^s denote the Bhattacharyya distances of the color, texture, and depth information, respectively. The Bhattacharyya distance D(h1, h2) between two M-dimensional feature vectors h1 and h2 is calculated as

D(h1, h2) = √(1 − Σ_{m=1}^{M} √(h1(m) · h2(m))).  (4)

Moreover, λ_* is a predetermined coefficient that adjusts the variance of the distances and depends on the type of feature.
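Going back to the features of section 3, the circular sampling of (1) with 3-valued thresholding and the depth voting of (2) can be sketched as follows. This is a simplified illustration: the threshold value, the bilinear interpolation, and the clamping of out-of-range depth bins are our assumptions, and the grouping of codes into the 834 rotation-invariant bins is omitted.

```python
import math

def cct_code(img, xc, yc, P=8, R=1.5, tau=5.0):
    """3-valued circular census code of pixel (xc, yc): compare it with P
    'virtual pixels' on a circle of radius R, Eq. (1); bilinear
    interpolation supplies the virtual-pixel values. (xc, yc) must lie
    at least ceil(R) + 1 pixels away from the image border."""
    code = []
    for p in range(P):
        xp = xc + R * math.cos(2 * math.pi * p / P)   # Eq. (1)
        yp = yc + R * math.sin(2 * math.pi * p / P)
        x0, y0 = int(math.floor(xp)), int(math.floor(yp))
        fx, fy = xp - x0, yp - y0
        v = ((1 - fx) * (1 - fy) * img[y0][x0] + fx * (1 - fy) * img[y0][x0 + 1]
             + (1 - fx) * fy * img[y0 + 1][x0] + fx * fy * img[y0 + 1][x0 + 1])
        d = v - img[yc][xc]
        # 3-valued thresholding: 0 inside [-tau, tau], 1 below, 2 above
        code.append(0 if -tau <= d <= tau else (1 if d < -tau else 2))
    return code

def hod_bin(z, z_mean, k, n_bins):
    """Vote a depth value z into a bin of the histogram of depth, Eq. (2),
    with z_min = z_mean - k and z_max = z_mean + k."""
    z_min, z_max = z_mean - k, z_mean + k
    b = int(n_bins * (z - z_min) / (z_max - z_min))
    return min(max(b, 0), n_bins - 1)   # clamp to a valid bin index
```

With P = 8 and three possible values per comparison, pooling circular shifts of the code into one bin yields exactly 834 distinct bins, matching the dimensionality stated in the text.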
The weight w_* of each feature is updated as

w_* = min(1 / min_n(λ_*(D_n^*)²), 1).  (5)

For a given feature, if the distances of all particles are large, that feature cannot afford effective detection in the scene, and it would decrease the integrated likelihood of all particles. Hence, in a dark scene, where color information cannot be used because of the extremely low intensities, the distances of all particles become large and the weight of color information becomes small, automatically nullifying it. On the other hand, if the color of the objects and the background is similar (the case in which the distances are small everywhere the objects might be located), all particles receive nearly the same color likelihood, close to one, which again means that the color information is nullified.

4.2. Determination of the Particle's Window Size Based on Distance
In general, the window size of each particle is one of the parameters of a particle filter and would be obtained by sampling. Thanks to the 3D visual sensor, however, the depth information can be used to calculate the window size of the objects, and therefore of the particles; the remaining parameters of the particle filter are the position (x, y) on the scene. Moreover, to realize scale invariance, a pyramid of reference images is adopted (i.e., several reference images captured at several scales are provided). The closest scale is selected by comparing the distance of the reference image with the particle size, and the likelihood of each feature is then calculated using the selected scale.
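The feature integration of (3)-(5) can be sketched as follows. This is a minimal sketch: in (4) we take the Bhattacharyya coefficient as the sum of √(h1(m)·h2(m)), the standard form, and the guard against a zero denominator in (5) is our addition.

```python
import math

def bhattacharyya(h1, h2):
    """Bhattacharyya distance between two normalized histograms, Eq. (4)."""
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    return math.sqrt(max(1.0 - bc, 0.0))

def feature_weight(dists, lam):
    """Adaptive weight of one feature over all particles, Eq. (5):
    large distances everywhere -> small weight -> feature nullified."""
    denom = min(lam * d * d for d in dists)
    return 1.0 if denom == 0 else min(1.0 / denom, 1.0)

def particle_likelihood(d, w, lam):
    """Integrated likelihood of one particle, Eq. (3); d, w, and lam are
    dicts keyed by the features 'color', 'texture', and 'depth'."""
    s = sum(w[f] * lam[f] * d[f] ** 2 for f in ('color', 'texture', 'depth'))
    return math.exp(-s)
```

Note how the weighting realizes feature selection: a feature whose distances are uniformly large contributes a small w, so its term in the exponent of (3) vanishes and the remaining features dominate the likelihood.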
4.3. Reference Image Division
Although the features used in this study are rotation-invariant, they are not invariant to three-dimensional rotation, which the particle filter must handle because objects are in general laid in arbitrary positions. For example, a plastic bottle lying horizontally (rather than standing) on a table is difficult to detect even with rotation-invariant features. To solve this problem, the reference images are divided into several rectangles, as shown in Figure 1. Dividing the reference image into parts addresses the three-dimensional-rotation problem in object detection, and also the occlusion problem, because a partial view can still be detected by the proposed method.

4.4. Determination of Candidate Regions
In this study, candidate regions for object detection are represented by a probability map. For N particles, each with likelihood P_n, the probability map at position (x, y) on the scene is calculated as

P(x, y) = (1 / P_max) Σ_{n=1}^{N} (P_n(x, y) − P_min) / (P_max − P_min),  (6)

where P_min and P_max denote the minimum and maximum likelihood of object detection for the given scene, respectively. For the input image in Figure 2(a), the generated probability map is shown in Figure 2(b). Particle positions (x, y) whose probability P(x, y) is higher than a predetermined threshold are chosen as candidate regions.

Figure 2. Target object determination: (a) input image, and (b) corresponding probability map.

4.5. Object Tracking
Object tracking is realized by using the detected position of an object as the initial position. This can be done sequentially after object detection because the same framework is used.

5. Experiments
Experiments were conducted to validate the proposed object detection and tracking.

5.1. Experimental Setting
Experiments were carried out in a living room using the 18 objects shown in Figure 3(b). A user showed each object at various angles to the robot, and in the learning phase the robot generated a database containing feature vectors of 10 frames per object (see Figure 3(a) for the object-learning scenario). In the detection phase, several scenes were used to test the proposed method.
Figure 3. (a) Object learning scenario: a user is showing the target object to the robot [12]; the red rectangle depicts the 3D visual sensor proposed in [11]. (b) Examples of objects used in the experiment.

5.2. Evaluation of Object Detection
To validate the proposed object detection, 20 scenes were captured and 100 synthesized scenes were generated from them (hereafter, normal-scenes) by locating target objects randomly in the captured scenes. In addition, 100 scenes with a different color (hereafter, different-color-scenes) and 100 dark scenes (hereafter, dark-scenes) were provided, so that 300 scenes in total were used in the experiments. Figure 4 shows examples of the synthesized scenes.

Figure 4. Examples of synthesized scenes: (a) normal-scene, (b) different-color-scene, (c) dark-scene.

The synthesized scenes were input to the detection system, and the recall, precision, and F-measure of the detected objects were calculated. If "true detected" is defined as the detected regions that belong to the true regions, then recall is the fraction of "true detected" over the true regions, whereas precision is the fraction of "true detected" over the detected regions. The F-measure is the harmonic mean of precision and recall, describing both values at once. The higher these values, the better the detection system performs. The proposed method was compared with the previous method, which used color information only. Table 1 shows the results of object detection in the various types of scenes.
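The evaluation measures just defined can be written out directly (the function name is ours):

```python
def prf(true_detected, n_true, n_detected):
    """Recall, precision, and F-measure as defined above:
    recall = true detections / true regions,
    precision = true detections / detected regions,
    F = harmonic mean of precision and recall."""
    recall = true_detected / n_true if n_true else 0.0
    precision = true_detected / n_detected if n_detected else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return recall, precision, f
```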
In normal-scenes (left part of Table 1), there is a slight improvement in recall and a large improvement in precision and F-measure. In different-color-scenes (middle part) and dark-scenes (right part), the proposed method clearly outperformed the previous method, which could hardly detect any objects at all. Thanks to the depth information, reliable information is still available in dark- and different-color-scenes. The results also show that the proposed method can adapt by selecting the appropriate features when the environment changes. Several examples of object detection are shown in Figure 5(a). The proposed method can detect multiple objects, as shown in Figure 5(b) (top), and Figure 5(c) (bottom) shows detection results for objects whose pose differs from that seen in the learning phase.
Table 1. Detection rates in various types of scenes.

Method   | Normal scenes               | Different-color scenes      | Dark scenes
         | Recall  Precision  F-meas.  | Recall  Precision  F-meas.  | Recall  Precision  F-meas.
Previous | 0.825   0.623      0.651    | 0.059   0.059      0.059    | 0       0          0
Proposed | 0.863   0.825      0.837    | 0.775   0.488      0.552    | 0.605   0.483      0.516

Figure 5. Examples of object detection results: (a) target object, (b) input scene with the final state of the particles, and (c) detected objects.

5.3. Discussion
As mentioned above, to perform object detection on various types of objects in various environments, a feature selection mechanism is needed to choose the features suitable for detection. To demonstrate this, the proposed object detection and tracking was tested on scenes whose illumination gradually changes from normal to dark. Figure 6 shows the object tracking results under this condition. Object tracking succeeds (Figure 6, top), and the feature weights change during the tracking phase (Figure 6, bottom). Under normal illumination, the weights of both color and depth information were high, whereas the weight of texture information dropped from high in the first frame to low: in the tracking phase, the object's pose was difficult to match using texture information owing to the lack of reference images collected in the learning phase. In the dark scenes, the weights of color and texture information were low whereas that of depth information was high, because depth was the reliable information. Overall, the proposed method can track objects in various environments.

Figure 6. Examples of object tracking results. The top of the figure depicts the tracked target objects under various illumination conditions, whereas the bottom illustrates the corresponding feature weights.
6. Conclusions
In this paper, a method to detect and track target objects based on a particle filter with integrated multiple features has been proposed. The proposed method combines color, texture, and depth information through an adaptive weighting mechanism that considers both the types of objects and the complexity of the scenes, enabling the detection of various types of objects in various scenes. The results showed that in normal scenes the proposed method outperformed the previous method, which is based on color information only. In the extreme scenes, i.e., the different-color and dark scenes, the previous method was unable to detect the objects, whereas the proposed method still performed object detection. The experimental results also showed that the proposed method can detect objects located in arbitrary poses, and that it can track objects in such scenes. The current system can be regarded as instance detection; developing not only instance detection but also category-level object detection is our future work. Moreover, inspired by [14], the integration and improvement of detection systems through hierarchical object detection will be studied in the future.

References
[1] Okuma K, Taleghani A, de Freitas N, Little JJ, Lowe DG. A boosted particle filter: multitarget detection and tracking. 2004: 28-39.
[2] Czyz J. Object detection in video via particle filters. 18th International Conference on Pattern Recognition. 2006: 820-823.
[3] Czyz J, Ristic B, Macq BM. A particle filter for joint detection and tracking of color objects. Image and Vision Computing. 2007; 25: 1271-1281.
[4] Obata M, Nishida T, Miyagawa H, Ohkawa F. Target tracking and posture estimation of 3D objects by using particle filter and parametric eigenspace method. Proceedings of Computer Vision and Pattern Recognition. 2007: 67-72.
[5] Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 580-587.
[6] Girshick R. Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2015: 1440-1448.
[7] Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems. 2015: 91-99.
[8] Dong W, Wang Y, Jing W, Peng T. An improved Gaussian mixture model method for moving object detection. TELKOMNIKA Telecommunication Computing Electronics and Control. 2016; 14(3A): 115.
[9] Sumardi, Taufiqurrahman M, Riyadi MA. Street mark detection using Raspberry Pi for self-driving system. TELKOMNIKA Telecommunication Computing Electronics and Control. 2018; 16(2): 629-634.
[10] Attamimi M, Araki T, Nakamura T, Nagai T. Visual recognition system for cleaning tasks by humanoid robots. International Journal of Advanced Robotic Systems. 2013; 10.
[11] Attamimi M, Mizutani A, Nakamura T, Nagai T, Funakoshi K, Nakano M. Real-time 3D visual sensor for robust object recognition. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2010.
[12] Attamimi M, Mizutani A, Nakamura T, Sugiura K, Nagai T, Iwahashi N, et al. Learning novel objects using out-of-vocabulary word segmentation and object extraction for home assistant robots. IEEE International Conference on Robotics and Automation. 2010.
[13] Wu J, Christensen HI, Rehg JM. Visual place categorization: problem, dataset, and algorithm. IEEE/RSJ International Conference on Intelligent Robots and Systems. 2009: 4763-4770.
[14] Attamimi M, Nakamura T, Nagai T. Hierarchical multilevel object recognition using Markov model. Proceedings of the International Conference on Pattern Recognition. 2012.