Some Enhanced Algorithms for Robot Navigation by Omnidirectional
Cameras
Khoa Dang Dang, Ngoc Quoc Ly, Truong The Nguyen
Abstract— To localize a point in a 3D scene, a previous approach combined two omnidirectional cameras with the Sum of Absolute Difference (SAD) to search for similar points. That method performs poorly on repetitive textures, so an improvement to the SAD is proposed in this work. For the ego-motion estimation task, a new approach that takes advantage of omnidirectional cameras is presented: first, the Kanade-Lucas-Tomasi (KLT) feature tracker is improved to build a set of feature points; the RANSAC algorithm then finds the best consensus set from this set; motion parameters are estimated from it by the Gauss-Newton algorithm; and the result is finally refined by a Kalman filter. The improved KLT is robust against the cameras' large rotation angles. By combining the proposed methods, a robot is able to reconstruct a 3D structure at a simple level. Experiments are performed using simulators, Google Sketchup and Pov-ray, and the results show that the new approach outperforms previous ones on the problems pointed out above.
I. INTRODUCTION
In this work, a system of two 360° cameras is used. Each is constructed from a normal camera combined with a parabolic mirror, as in [2]. In their method, by arranging the two cameras vertically, all epipolar lines become vertical when the images are de-warped into panoramic form, so corresponding feature points can be found by searching along vertical lines. One problem with the Sum of Absolute Difference (SAD) algorithm they used is that it often mismatches similar points on repetitive textures. This paper adopts their approach because it is low-cost, simple, and computationally effective; however, the SAD is enhanced to perform better on repetitive textures. Moreover, by applying Scaramuzza's methods [3][4] to find the projection vectors from a point to the two cameras, we can estimate that point's position in the environment and its distances to other points. This enables 3D reconstruction along edges.
In ego-motion estimation based on visual features, we need to keep track of visual features from time t to time t+1 while the cameras are moving. Scaramuzza [5] used the Kanade-Lucas-Tomasi (KLT) tracker to track features and removed outliers with the RANSAC algorithm; these two methods are quite popular for this task. Our method differs from others (e.g. [7]) in some points. First, we combine SAD with KLT to track features and increase the matching ratio, because both are based on pixel intensity variation. Then, motion parameters are estimated using the Gauss-Newton algorithm from the best consensus set chosen by RANSAC. A Kalman filter is applied at the end to refine the results, because in some cases a large number of errors can produce noisy estimates; this paper assumes the noise follows a Gaussian distribution. However, the KLT suffers from the camera's large rotation angles, which make it miss many feature points, so an improvement to the KLT is proposed to deal with this problem.
Experiments are performed using simulators: environments and 3D models are constructed in Google Sketchup 8, and omni cameras are simulated with Pov-ray 3.7, as in [6], to capture scenes. First, we compare the enhanced SAD with the original SAD when reconstructing repetitive textures. Next, we examine how well the improved KLT estimates the camera's motion when the camera rotates through large angles. Finally, we combine the two proposed algorithms to recover both 3D structure and camera motion from a sequence of images.
The rest of this paper is structured as follows. First, a review of typical works is given. Section 3 presents the improvement to the SAD. The KLT enhancement is described in Section 4. The two proposed algorithms are evaluated separately, and the whole model is also evaluated, in Section 5. Conclusions and future work are discussed in the final section.
II. RELATED WORKS
There are many types of omni camera. Jonathan Foote and Don Kimber [8] built a system called FlyCam, a combination of multiple similar cameras, much like the RingCam from Microsoft Research. Such systems give high-resolution images and are robust, but they are expensive and costly to operate. Nalwa of AT&T Bell Laboratories integrated mirrors into his camera system, but boundaries still remain in the captured images and are hard to eliminate. A simpler omni camera is constructed from one mirror and one normal camera. This paper uses this kind of omni camera due to its low cost and simplicity.
Gaspar [1] proposed a method for de-warping images from an omni camera into panoramic and bird's-eye-view images in order to extract vertical and horizontal lines for localizing and navigating robots. Their approach uses only one omni camera, so it is cheap and fast, but it cannot detect obstacles.
In the robot ego-motion research field, Gluckman and Nayar [9] presented a method for recovering ego-motion with omni cameras by mapping the image velocity vectors to a sphere using the Jacobian matrix. Scaramuzza et al. [5] used RANSAC to exclude outliers for better trajectory recovery of robots moving over long distances.
In 3D reconstruction research, Herran [6] presented the main theory of a 3D reconstruction and motion estimation method using only one omni camera.
III. AN SAD IMPROVEMENT FOR
RECONSTRUCTING REPETITIVE TEXTURES IN
3D ENVIRONMENT
A. An Omni Camera System Arrangement
In this paper, a system of two 360° field-of-view cameras is studied, as in [2], due to its low cost, simplicity, and computational efficiency. Fig. 1, adapted from [11], shows the camera model. Each camera consists of one normal camera and one parabolic mirror, capturing a whole 360° horizontal view. The cameras are fixed, so the extrinsic parameters are known in advance: the translation vector is T = (0, 0, t), where t is the vertical distance between the two cameras, and the rotation matrix R is the identity matrix I.
First, captured images are de-warped into panoramic images. In this way, corresponding feature points lie on the same vertical lines (Fig. 2). The images are then blurred with a Gaussian mask, to reduce sensitivity to low-contrast pixels, and a Sobel filter is applied to find dominant edges. Finally, the SAD method searches for similar points along the same vertical lines, considering only points whose intensity exceeds a threshold. The displacement between the two captured images is very small due to this arrangement.
The SAD measure between points A and B is the sum of absolute differences between the intensities of pixels surrounding A and the corresponding pixels surrounding B:

$$\mathrm{SAD}(A, B) = \sum_i \sum_j \left| I_A(x_A + i,\, y_A + j) - I_B(x_B + i,\, y_B + j) \right| \quad (1)$$
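As a concrete illustration, here is a minimal sketch of (1) and of the vertical-line search it is used in. The window half-size, the grayscale numpy image representation, and the function names are assumptions for illustration, not values from the paper:

```python
import numpy as np

def sad(img_a, img_b, pa, pb, half_win=3):
    """SAD measure of (1): sum of absolute intensity differences over a
    (2*half_win+1)^2 window around points pa=(x,y) and pb=(x,y).
    Assumes both points lie at least half_win pixels from the border."""
    xa, ya = pa
    xb, yb = pb
    wa = img_a[ya - half_win:ya + half_win + 1, xa - half_win:xa + half_win + 1]
    wb = img_b[yb - half_win:yb + half_win + 1, xb - half_win:xb + half_win + 1]
    return np.abs(wa.astype(np.int32) - wb.astype(np.int32)).sum()

def match_on_vertical_line(img_a, img_b, pa, half_win=3):
    """Search image B along the same vertical line (the epipolar
    constraint of the vertical camera arrangement) for the point
    minimizing the SAD measure."""
    xa, _ = pa
    best_y, best_val = None, np.inf
    for yb in range(half_win, img_b.shape[0] - half_win):
        val = sad(img_a, img_b, pa, (xa, yb), half_win)
        if val < best_val:
            best_y, best_val = yb, val
    return (xa, best_y), best_val
```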
B. An Improved SAD Algorithm for Matching Points on
Repetitive Texture
Suppose the two red points in Fig. 3 are matched exactly between the two images because their color features are clearly distinct from their surroundings. When searching for a match to the green point from image A, two points are found in image B, marked as the yellow point and the green point, and both look like the green point in A if only the pixels immediately surrounding each point are taken into account. An improvement to the SAD is proposed to overcome this shortcoming. The idea is: if more than one point in image B matches a point in image A, we additionally consider pixels along the lines connecting each candidate to another point that has already been matched reliably (i.e. the red points in Fig. 3).
Fig. 1. An omni camera system built from a mirror and a normal camera: a parabolic mirror sits above each camera, and the two omni cameras are separated vertically by a fixed distance.
The original SAD is modified as follows:
Step 1: Calculate the SAD values of the candidate pixels (e.g. the green and yellow pixels in image B).
Step 2: Calculate the sum of intensity differences between pixels on the lines connecting each candidate pixel with another pixel that has already been matched (e.g. the red lines connecting to the red point in image B).
Step 3: Compare the totals of steps 1 and 2 across candidates and choose the candidate with the lowest value.
So, when searching for the point in image B corresponding to a point in image A, the original SAD is applied first to find pixel pairs whose SAD values are below a threshold. If only one pair is found, we call it a successful match and proceed to the next point in A. If more than one pair satisfies the threshold, another pair of pixels in A and B that was matched successfully before is chosen as an anchor, and the three steps above are followed.
The enhanced SAD is more efficient and effective in this situation than SIFT or the Harris corner detector, so it is well suited to the 3D reconstruction task. It does, however, require that the two cameras use the same color system.
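A sketch of this three-step disambiguation follows, under our own naming: candidates are the points in B that passed the SAD threshold, and (anchor_a, anchor_b) is a previously matched pair (the red points of Fig. 3). The number of samples along the connecting line is an illustrative choice:

```python
import numpy as np

def line_intensity_cost(img_a, img_b, pa, pb, anchor_a, anchor_b, n_samples=10):
    """Step 2: sum of intensity differences between pixels sampled along
    the segments candidate->anchor in each image (the red lines in Fig. 3).
    Points are (x, y) tuples; images are grayscale numpy arrays."""
    cost = 0
    for t in np.linspace(0.0, 1.0, n_samples):
        ya, xa = (np.array(pa[::-1]) * (1 - t) + np.array(anchor_a[::-1]) * t).astype(int)
        yb, xb = (np.array(pb[::-1]) * (1 - t) + np.array(anchor_b[::-1]) * t).astype(int)
        cost += abs(int(img_a[ya, xa]) - int(img_b[yb, xb]))
    return cost

def disambiguate(img_a, img_b, pa, candidates, sad_values, anchor_a, anchor_b):
    """Steps 1-3: among candidates in B that all pass the SAD threshold,
    pick the one minimizing SAD + line cost to the matched anchor."""
    totals = [s + line_intensity_cost(img_a, img_b, pa, pb, anchor_a, anchor_b)
              for pb, s in zip(candidates, sad_values)]
    return candidates[int(np.argmin(totals))]
```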
C. An Application in 3D Reconstruction
After finding corresponding points, Scaramuzza's calibration toolbox is used to find the two projection vectors from a point in the 3D environment onto the two cameras' images. The intersection of the two projection vectors reveals the point's position.
Fig. 2. Two similar points lie on the same vertical line in panoramic images.
Fig. 3. The proposed improvement to SAD.
Let camera A be the origin, and let B = (0, 0, t) be the translation vector, where t is the vertical distance between A and B (Fig. 4). The two projection equations are:

$$V_1 = \lambda n_1, \qquad V_2 = \mu n_2 + B \quad (2)$$

where $n_1$, $n_2$ are the two projection vectors from a point M onto the two cameras, and $\lambda$ and $\mu$ are two scale coefficients.
Setting $V_1 = V_2$ gives an equation system in $\lambda$ and $\mu$. However, in a real-world system errors can make the projection vectors pass by each other instead of intersecting. The following solution is derived from [11]. Let d be the distance between the two rays along $n_1$ and $n_2$; then

$$d = (B - A) \cdot \frac{n_1 \times n_2}{\lVert n_1 \times n_2 \rVert} = B \cdot \frac{n_1 \times n_2}{\lVert n_1 \times n_2 \rVert} \quad (3)$$

From (2) and (3) we have:

$$\lambda n_1 = n_2 \quad (4)$$

As observed in (4), the vectors $n_1$ and $n_2$ have the same direction up to the scale $\lambda$, so the value of $\lambda$ is given by:

$$\lambda = \frac{n_2[x]}{n_1[x]} = \frac{n_2[y]}{n_1[y]} = \frac{n_2[z]}{n_1[z]} \quad (5)$$

The value of $\mu$ can be found in the same way. Finally, substitute the $\lambda$ and $\mu$ values into (2) to compute M's coordinates.
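A minimal triangulation sketch under these definitions: instead of the componentwise ratios of (5), it solves $\lambda n_1 - \mu n_2 = B$ in the least-squares sense (the midpoint of the common perpendicular), which coincides with (5) when the rays intersect exactly. numpy is assumed, with n1, n2 the projection vectors as 3-vectors:

```python
import numpy as np

def triangulate(n1, n2, t):
    """Locate M from (2): V1 = lam*n1 (camera A at the origin) and
    V2 = mu*n2 + B with B = (0, 0, t).  Solving lam*n1 - mu*n2 = B by
    least squares handles rays that only pass near each other."""
    B = np.array([0.0, 0.0, t])
    A = np.stack([n1, -n2], axis=1)          # 3x2 system [n1, -n2]
    (lam, mu), *_ = np.linalg.lstsq(A, B, rcond=None)
    M1 = lam * n1                            # point on ray 1
    M2 = mu * n2 + B                         # point on ray 2
    return 0.5 * (M1 + M2)                   # midpoint if rays only cross
```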
In conclusion for this section, this localization method can locate points along edges in a 3D environment. It takes advantage of the two vertically arranged omni cameras for a simple and fast matching approach; however, it performs worse on smooth textures.
IV. EGO MOTION ESTIMATION USING OMNI
CAMERAS
In this paper, we focus on ego-motion estimation based on visual features. Let $X_{k-1}$ and $X_k$ be the coordinates of a point X at times k−1 and k. To estimate the camera motion, we track how features change from time k−1 to k:

$$X_k = R X_{k-1} + T \quad (6)$$

where R is a 3×3 rotation matrix and T is a 3×1 translation vector. Equation (6) is rewritten in homogeneous coordinates as in (7); the motion parameters to be estimated are $r_x, r_y, r_z, t_x, t_y, t_z$.
Fig. 4. Localization of a point M in the 3D environment.
This paper's approach is as follows: first, feature points are tracked by the SAD and KLT methods; next, RANSAC removes outliers; then motion parameters are estimated by the Gauss-Newton algorithm from the best consensus set; finally, the results are filtered by a Kalman filter under the assumption that the noise follows a Gaussian distribution. However, the original KLT suffers from the camera's large rotation angles, which can degrade the input to RANSAC, so an enhancement to the KLT that deals with this problem is proposed later in this section. Finally, we present a method for 3D map reconstruction based on the ego-motion estimation and 3D reconstruction algorithms presented so far.
A. Searching, Tracking and Localizing Feature Points
In the first step, features are tracked by KLT, and their positions are located by SAD. Both methods depend on changes in image pixel intensity, so feature points detected by KLT and located by SAD can be matched together well. The process is as follows:
Step 1: 3D point localization by SAD
The SAD is used to build the following sets:
Da = {Points’ relative position to camera at time ta
based on two images Ia1, Ia2}
Pa = {Points’ coordinates in Ia1 located by SAD (their
positions are stored in Da)}
Db = {Points’ relative position to camera at time tb
based on two images Ib1, Ib2}
Pb = {Points’ coordinates in Ib1 located by SAD (their
positions are stored in Db)}
Step 2: Feature searching and tracking
First, KLT finds features in image Ia1 and then tracks them in image Ib1. After this step, we have:
Fa = {Features’ coordinates in image Ia1 which can
be tracked in image Ib1}
Fb = {Features’ coordinates in image Ib1}
Step 3: Feature localization
In this step, feature points that cannot be localized in step 1 are removed by the following strategy: for each feature point in Fa, check whether the point is in Pa; if not, remove it from Fa together with the corresponding point in Fb. After finishing with Fa, the same process is performed for Fb. A point a(x, y) in Fa is considered to be in Pa if:

$$\exists\, b(x', y') \in P_a \ \text{s.t.}\ (x - x')^2 + (y - y')^2 < \varepsilon \quad (8)$$

where ε is a predefined threshold.
From Da, Db and the new Fa, Fb, we build the two following sets:
Ma = {3D coordinates of feature points in Fa from camera 1 at time a}
Mb = {3D coordinates of feature points in Fb from camera 1 at time b}
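A sketch of this filtering follows, assuming the feature sets F and SAD-localized point sets P are Nx2 coordinate arrays and eps is the threshold of (8); the function name is ours:

```python
import numpy as np

def keep_localized(F, P, eps):
    """Step 3: indices of feature points in F that have a SAD-localized
    point in P satisfying criterion (8)."""
    keep = []
    for i, (x, y) in enumerate(F):
        d2 = (P[:, 0] - x) ** 2 + (P[:, 1] - y) ** 2
        if d2.min() < eps:
            keep.append(i)
    return np.array(keep, dtype=int)

# Apply the same index set to both frames so Fa and Fb stay aligned:
# idx = np.intersect1d(keep_localized(Fa, Pa, eps), keep_localized(Fb, Pb, eps))
# Fa, Fb = Fa[idx], Fb[idx]
```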
$$X_k = F(R, T, X) = M_k X_{k-1} = \begin{bmatrix} \cos r_y \cos r_z & \sin r_x \sin r_y \cos r_z - \cos r_x \sin r_z & \cos r_x \sin r_y \cos r_z + \sin r_x \sin r_z & t_x \\ \cos r_y \sin r_z & \sin r_x \sin r_y \sin r_z + \cos r_x \cos r_z & \cos r_x \sin r_y \sin r_z - \sin r_x \cos r_z & t_y \\ -\sin r_y & \sin r_x \cos r_y & \cos r_x \cos r_y & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} X_{k-1} \quad (7)$$
Note that Ma and Mb have the same number of elements, and the i-th elements of these two sets are the coordinates of the i-th feature point at the two different times.
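For illustration, here is a sketch that builds the matrix $M_k$ of (7) from the six motion parameters and applies it to a point. The names (motion_matrix, transform) are ours, and the $R_z R_y R_x$ rotation convention follows the matrix reconstructed above:

```python
import numpy as np

def motion_matrix(rx, ry, rz, tx, ty, tz):
    """Homogeneous motion matrix M_k of (7): rotation Rz(rz)Ry(ry)Rx(rx)
    plus translation (tx, ty, tz)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    return np.array([
        [cy*cz, sx*sy*cz - cx*sz, cx*sy*cz + sx*sz, tx],
        [cy*sz, sx*sy*sz + cx*cz, cx*sy*sz - sx*cz, ty],
        [-sy,   sx*cy,            cx*cy,            tz],
        [0.0,   0.0,              0.0,              1.0],
    ])

def transform(M, X):
    """Apply (6)/(7) to a 3D point X via homogeneous coordinates."""
    Xh = np.append(X, 1.0)
    return (M @ Xh)[:3]
```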
Now, the features' coordinates at the different times are known. Some errors may be present at this step, so the RANSAC algorithm is used next to remove outliers.
B. Outlier Removal by RANSAC
Step 1: Choose initial elements randomly
Choose 3 elements of Ma at random, together with the 3 corresponding elements of Mb.
Step 2: Estimate motion parameters
Estimate the motion parameters from the chosen elements; the next subsection presents how this is done.
Step 3: Build a rule matrix M and consensus set Ck
A consensus element is one that satisfies the rule matrix M based on (7): an element of Ma satisfies M when applying M to it yields an error, relative to the corresponding element of Mb, below a predefined threshold. Every element of Ma that satisfies the rule matrix is put into the current consensus set Ck.
Step 4: Choose the best consensus set C
At each iteration, the largest consensus set is kept; after a number of iterations, the best consensus set C is found.
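A compact sketch of steps 1-4 follows, reusing motion_matrix and transform from the previous sketch; estimate_params stands in for the Gauss-Newton routine of the next subsection, and the iteration count and threshold are illustrative assumptions:

```python
import numpy as np

def ransac_motion(Ma, Mb, estimate_params, n_iters=100, thresh=0.05):
    """RANSAC as in steps 1-4: sample 3 correspondences, estimate a
    motion matrix M, and keep the largest consensus set.
    Ma, Mb are Nx3 arrays of corresponding 3D points."""
    best_C = np.array([], dtype=int)
    n = len(Ma)
    for _ in range(n_iters):
        idx = np.random.choice(n, 3, replace=False)            # step 1
        M = motion_matrix(*estimate_params(Ma[idx], Mb[idx]))  # step 2
        # step 3: consensus = points whose transfer error is small
        pred = np.array([transform(M, x) for x in Ma])
        err = np.linalg.norm(pred - Mb, axis=1)
        Ck = np.where(err < thresh)[0]
        if len(Ck) > len(best_C):                              # step 4
            best_C = Ck
    return best_C
```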
C. Motion Parameters Estimation
This process is applied not only in Step 2 above to find the best consensus set C, but also on C itself to estimate the best motion parameters, using the Gauss-Newton optimization algorithm to minimize the squared error function:

$$\sum_i \left\lVert X_{k,i} - F\!\left(R, T, X_{k-1,i}\right) \right\rVert^2 \quad (9)$$

where $X_{k,i}$ denotes the coordinates of point $X_i$ as seen from the camera at time k, and $X_{k-1,i}$ at time k−1.
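A hedged sketch of this estimation: SciPy's least_squares with method='lm' performs Levenberg-Marquardt, a damped variant of Gauss-Newton, which we use as a stand-in here; a hand-rolled Gauss-Newton loop would minimize the same residual of (9). motion_matrix and transform are from the sketches above:

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_motion(Ma, Mb, p0=np.zeros(6)):
    """Minimize the squared error (9) over p = (rx, ry, rz, tx, ty, tz);
    the residual is X_k - F(R, T, X_{k-1}) stacked over all points."""
    def residuals(p):
        M = motion_matrix(*p)
        pred = np.array([transform(M, x) for x in Ma])
        return (pred - Mb).ravel()
    return least_squares(residuals, p0, method='lm').x
```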
D. Filtering by Kalman Filter
The estimated results are filtered by a Kalman filter. Here, the velocity v and acceleration a of the camera are the state to be updated, with $v = (R\,T)^{T}/\Delta t$, where $\Delta t$ is the time shift between two frames. The state equation is:

$$\begin{bmatrix} v \\ a \end{bmatrix}^{(t)} = \begin{bmatrix} I & \Delta t\, I \\ 0 & I \end{bmatrix} \begin{bmatrix} v \\ a \end{bmatrix}^{(t-1)} + e \quad (10)$$

and the result is updated by the Kalman filter through the measurement equation:

$$\frac{1}{\Delta t}\begin{bmatrix} r \\ t \end{bmatrix}^{(t)} = \begin{bmatrix} I & 0 \end{bmatrix}\begin{bmatrix} v \\ a \end{bmatrix}^{(t)} \quad (11)$$

where I is the 6×6 identity matrix and e is Gaussian noise.
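Below is a minimal predict/update sketch of the filter defined by (10) and (11), assuming a 12-dimensional state [v; a] (six velocity and six acceleration components) and illustrative noise covariances q and r, which the paper does not specify:

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1e-3, r=1e-2):
    """One Kalman predict/update cycle for the model of (10)-(11).
    x = [v; a] (12-dim state), P its covariance, and z = (r, t)/dt the
    measurement formed from the Gauss-Newton motion estimate."""
    I6 = np.eye(6)
    F = np.block([[I6, dt * I6], [np.zeros((6, 6)), I6]])  # transition, eq. (10)
    H = np.hstack([I6, np.zeros((6, 6))])                  # measurement, eq. (11)
    Q = q * np.eye(12)                                     # process noise (Gaussian e)
    R = r * np.eye(6)                                      # measurement noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(12) - K @ H) @ P
    return x, P
```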
E. Camera Motion Estimation
The results obtained so far are the rotation and translation of the feature points relative to the camera; the camera motion is computed in reverse. Let $O_{k-1}$ and $O_k$ be the camera coordinates at times k−1 and k, with the system's coordinate origin at the camera's first position. The relationship between $O_{k-1}$ and $O_k$ is:

$$O_k = (M_k)^{-1} O_{k-1} \quad (12)$$

where $M_k$ is the matrix from (7).
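The pose recursion (12) in code, assuming poses are stored as 4x4 homogeneous matrices with $O_0 = I$ (the first camera position):

```python
import numpy as np

def update_camera_pose(O_prev, M_k):
    """Camera pose recursion (12): O_k = M_k^{-1} O_{k-1}."""
    return np.linalg.inv(M_k) @ O_prev

# Accumulate over a sequence, starting from the identity pose:
# O = np.eye(4)
# for M_k in motion_matrices:
#     O = update_camera_pose(O, M_k)
```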
F. Solving the Camera's Large Rotation Angle Problem
With omni cameras, feature points remain in view despite large rotation angles, but the KLT cannot keep up with such changes. So an intermediate step is proposed: instead of matching the image at time K directly with the image at time K+1, the image at time K is matched against several temporary images derived from the image at time K+1, each transformed by a different rotation angle, until the KLT finds sufficiently many points. The final rotation angle is then the sum of the rotation angle estimated from the intermediate image and the rotation angle used to transform the raw image into the intermediate image. The enhanced KLT algorithm is summarized in the steps below and in Fig. 5, with a sketch after the steps:
Step 1: Rotate the image at time K+1 by an angle Θ to obtain an image K′.
Step 2: Compare the image at time K with K′ using KLT; suppose N points are successfully matched.
Step 3: If N is sufficient, proceed to step 4. Otherwise set Θ = Θ + 18°; if Θ < 360°, go back to step 1; if Θ ≥ 360°, stop and return the Θi corresponding to the largest number of successfully matched points.
Step 4: Transform the coordinates matched in K′ (corresponding to Θi) back into coordinates in the image at time K+1.
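Here is a sketch of the rotation-search loop using OpenCV's pyramidal KLT (cv2.calcOpticalFlowPyrLK). It models the intermediate transform as an in-plane rotation about the image center; on de-warped panoramas the equivalent step would be a horizontal column shift. The n_needed parameter is an assumption, the 18° step is the paper's increment, and the returned Θ is the offset later added to the angle estimated from the intermediate image. 8-bit grayscale images and float32 Nx1x2 point arrays are assumed:

```python
import cv2
import numpy as np

def klt_with_rotation_search(img_k, img_next, pts_k, n_needed=50, step_deg=18.0):
    """Enhanced KLT (steps 1-4): rotate the time-K+1 image in step_deg
    increments until KLT tracks enough points, then map the tracked
    points back into the un-rotated K+1 frame."""
    h, w = img_next.shape[:2]
    best = (0, 0.0, None)                    # (n matched, theta, points)
    theta = 0.0
    while theta < 360.0:
        R = cv2.getRotationMatrix2D((w / 2, h / 2), theta, 1.0)   # step 1
        rotated = cv2.warpAffine(img_next, R, (w, h))
        pts, status, _ = cv2.calcOpticalFlowPyrLK(img_k, rotated, pts_k, None)
        n = int(status.ravel().sum())                              # step 2
        if n >= n_needed:                                          # step 3 -> 4
            Rinv = cv2.invertAffineTransform(R)
            return theta, cv2.transform(pts, Rinv)  # back to K+1 frame
        if n > best[0]:
            best = (n, theta, pts)
        theta += step_deg
    return best[1], best[2]   # angle with the most matches if none sufficed
```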
Figure 7a shows the feature points found in image K. As seen in Fig. 7b, only 36 feature points are matched in image K+1 when the camera rotates by 18°, whereas the improved KLT matches 91 feature points, as in Fig. 7c.
G. An Application in 3D Structure Reconstruction from
Robot Motion
The proposed methods are combined in a structure-from-
motion application by the following process (Fig. 6):
Step 1: Perform 3D reconstruction by the enhanced SAD
from two cameras’ images at time t − 1 and time t
Step 2: Estimate the camera motion
[Fig. 5 flowchart. Inputs: a pair of images at time t−1 and a pair at time t. Step 1: detect, track and locate features, using KLT for feature detection and SAD for measuring point distances, yielding the features' 3D coordinates relative to the cameras at times t−1 and t. Step 2: eliminate outliers by RANSAC (randomly choose 3 points from the 3D coordinate set, estimate motion parameters by Gauss-Newton, select a consensus set, and choose the best consensus set). Step 3: estimate the final motion parameters by the Gauss-Newton algorithm on the best consensus set. Step 4: refine the final result with the Kalman filter. Output: rx, ry, rz, tx, ty, tz, the rotation angles and displacements of the cameras between times t−1 and t.]
Fig. 5. Ego motion estimation diagram.
[Fig. 6 flowchart. Inputs: the upper and lower cameras' images at time t−1 and at time t. Each image pair undergoes 3D reconstruction by SAD; the camera motion between t−1 and t is estimated; the 3D coordinates reconstructed at time t are then transformed into the time t−1 frame using the camera motion parameters. Output: a reconstructed 3D image.]
Fig. 6. Structure from motion based on the proposed methods.
Fig. 7. Comparison between original KLT and improved KLT on number
of feature points tracked.
Step 3: Map the coordinates of points at time t into the camera coordinate system at time t−1 based on the estimated motion parameters, and combine the two reconstructed images.
V. EXPERIMENTS
A. Experiment Environment
3D objects are constructed with Google Sketchup 8, and Pov-ray 3.7 is used to render the 3D scenes and to simulate the lighting and the omni cameras. Both tools are free and sufficient for setting up the experiments. Another advantage is that virtual environment conditions are easily controlled; moreover, camera faults and lopsided mounting are eliminated.
The process of setting up experiments:
1) Build 3D models using Google Sketchup
2) Convert Google Sketchup 3D models into Pov-ray
specification files
3) Use Pov-ray to create the two mirrors and place the built-in cameras at the foci of the mirrors
4) Simulate camera movement: Pov-ray supports creating
a sequence of continuous frames to simulate camera
movement along a spline through predefined points
5) Render the frames from step 4; the images captured from the two simulated omni cameras are saved to disk. In our experiments, rendering 200 frames from the two cameras took 4 to 5 hours.
6) Proposed algorithms are evaluated.
B. Point Localization Using the Improved SAD
This part evaluates the performance of the improved SAD in the reconstruction task and compares it with the original SAD. The experiment is constructed as follows:
1) A wall with a brick texture is rendered by Pov-ray.
2) Two omni cameras (at heights of 1.2 m and 1.4 m) take two photos at a distance of 2 m (Fig. 8a). Both the original SAD and the enhanced SAD are used to match similar points and reconstruct the captured scene.
As seen in Fig. 8b, the wall reconstructed by the enhanced SAD (left) is clearer and has fewer errors than that of the original SAD (right). This is because the bricks look alike, causing the original SAD to mismatch points. However, experiments on non-repetitive or complex objects give the same results for both algorithms.
C. Ego-motion estimation with the Improved KLT
This experiment evaluates the enhanced KLT's performance against the camera's large rotation angles. The process is:
1) The camera system moves and takes photos along a spline (Fig. 9a) between two walls.
2) It takes one photo every 5 cm, 99 photos in total over 490 cm.
3) KLT and its enhancement are compared on the reconstructed road shapes and the calculated road lengths.
Figure 9b shows the road shape reconstructed with the improved KLT; the estimated road length is 488.7 cm, while the road map estimated by the original KLT in Fig. 9c is 475.7 cm long. Because of a big turn at the end of the road, the original KLT does not retain enough points for motion estimation and rotation-angle calculation; the improved KLT outperforms it on this part of the road. This result shows the effectiveness of the intermediate step.
D. Proposed Methods Combination
In this final experiment, all the proposed methods are combined to test their performance in a large environment over a long run. A building with rooms and objects along the walls is constructed as in Fig. 10a. The camera system follows the corridor (Fig. 10b) and changes speed over time (faster when going straight, slower when making a turn). Each camera captures 349 frames in total. The reconstructed room and road map in Fig. 10c are similar to the originals in Fig. 10a and Fig. 10b.
VI. CONCLUSIONS AND FUTURE WORKS
Some enhancements have been presented to deal with the problems of localizing points and estimating ego-motion using two omni cameras. The enhanced SAD handles repetitive textures better than the original method. The omni camera's ability to capture a wide view is a big advantage for the improved KLT algorithm, which is combined with the Gauss-Newton optimization algorithm, consensus building by RANSAC, and Kalman filtering for better movement estimation.
Fig. 8. Wall photos taken by the two cameras (a) and reconstructed (b) using the improved SAD (left) and the original SAD (right).
Fig. 9. Road maps: the predefined road map (a) compared with reconstructions by the improved KLT (b) and the original KLT (c).
Fig. 10. The room (a) and the road map model (b), reconstructed using the proposed methods (c).
The experiments are performed in simulators close to real-world conditions, yet easier to control, which could be a premise for later research. The combination of the proposed methods retains simplicity and effectiveness while being much faster than using SIFT features [10].
In the future, we plan to add more functions for practical robot navigation systems, such as obstacle detection as in [11] and localization based on landmarks [12]. We could also model the noise with a Gaussian Mixture Model and apply a particle filter.
REFERENCES
[1] José António da Cruz Pinto Gaspar, "Omnidirectional Vision for Mobile Robot Navigation," PhD thesis, 2002.
[2] Hiroshi Koyasu, Jun Miura and Yoshiaki Shirai, "Realtime omnidirectional stereo for obstacle detection and tracking in dynamic environments," Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Maui, Hawaii, 2001, pp. 31-36.
[3] Davide Scaramuzza, A. Martinelli, and R. Siegwart, "A toolbox for easily calibrating omnidirectional cameras," IEEE International Conference on Intelligent Robots and Systems (IROS), 2006.
[4] Davide Scaramuzza, "Omnidirectional vision: from calibration to robot motion estimation," PhD thesis, M.S. Electronic Engineering, Università di Perugia, Italy, 2008.
[5] Davide Scaramuzza, Friedrich Fraundorfer, and Roland Siegwart, "Real-time monocular visual odometry for on-road vehicles with 1-point RANSAC," IEEE Conference on Robotics and Automation, Japan, 2009, pp. 4293-4299.
[6] Jose L. R. Herran, ”OMNIVIS: 3D space and camera path recon-
struction for omnidirectional vision”, Master thesis in the Field of
Information Technology, Harvard University, 2010.
[7] Andrew Howard, ”Real-Time Stereo Visual Odometry for Autonomous
Ground Vehicles”, IEEE International Conference on Intelligent
Robots and Systems (IROS), France, 2008.
[8] Jonathan Foote and Don Kimber, "FlyCam: practical panoramic video and automatic camera control," in Proceedings of the IEEE International Conference on Multimedia, 2000, pp. 1419-1422.
[9] Joshua Gluckman and Shree K. Nayar, "Ego-Motion and Omnidirectional Cameras," in Proceedings of the Sixth International Conference on Computer Vision (ICCV), 1998.
[10] Dong-Fan Shen, Jong-Shill Lee, Se-Kee Kil, Je-Goon Ryu, Eung-
Hyuk Lee, Seung-Hong Hong, ”3D Reconstruction of Scale-Invariant
Features for Mobile Robot localization,” Int. Journal of Computer
Science and Network Security (IJCSNS), vol. 6, no.3B, 2006.
[11] Ola Millnert, Toon Goedemé, Tinne Tuytelaars, Luc Van Gool, Alexander Huntemann, and Marnix Nuttin, "Range determination for mobile robots using an omnidirectional camera," Integrated Computer-Aided Engineering, vol. 14, issue 1, 2007, pp. 63-72.
[12] C. Madsen, C. Andersen, "Optimal landmark selection for triangulation of robot position," Robotics and Autonomous Systems, 1998.
246

More Related Content

What's hot

Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...Habibur Rahman
 
Iaetsd an enhanced circular detection technique rpsw using circular hough t...
Iaetsd an enhanced circular detection technique   rpsw using circular hough t...Iaetsd an enhanced circular detection technique   rpsw using circular hough t...
Iaetsd an enhanced circular detection technique rpsw using circular hough t...Iaetsd Iaetsd
 
COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL...
COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL...COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL...
COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL...ijaia
 
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...Polytechnique Montreal
 
Application of Image Retrieval Techniques to Understand Evolving Weather
Application of Image Retrieval Techniques to Understand Evolving WeatherApplication of Image Retrieval Techniques to Understand Evolving Weather
Application of Image Retrieval Techniques to Understand Evolving Weatherijsrd.com
 
New geometric interpretation and analytic solution for quadrilateral reconstr...
New geometric interpretation and analytic solution for quadrilateral reconstr...New geometric interpretation and analytic solution for quadrilateral reconstr...
New geometric interpretation and analytic solution for quadrilateral reconstr...Joo-Haeng Lee
 
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...CSCJournals
 
DTAM: Dense Tracking and Mapping in Real-Time, Robot vision Group
DTAM: Dense Tracking and Mapping in Real-Time, Robot vision GroupDTAM: Dense Tracking and Mapping in Real-Time, Robot vision Group
DTAM: Dense Tracking and Mapping in Real-Time, Robot vision GroupLihang Li
 
FR4.L09.5 - THREE DIMENSIONAL RECONSTRUCTION OF URBAN AREAS USING JOINTLY PHA...
FR4.L09.5 - THREE DIMENSIONAL RECONSTRUCTION OF URBAN AREAS USING JOINTLY PHA...FR4.L09.5 - THREE DIMENSIONAL RECONSTRUCTION OF URBAN AREAS USING JOINTLY PHA...
FR4.L09.5 - THREE DIMENSIONAL RECONSTRUCTION OF URBAN AREAS USING JOINTLY PHA...grssieee
 
Final Project Report Nadar
Final Project Report NadarFinal Project Report Nadar
Final Project Report NadarMaher Nadar
 
Parallel implementation of geodesic distance transform with application in su...
Parallel implementation of geodesic distance transform with application in su...Parallel implementation of geodesic distance transform with application in su...
Parallel implementation of geodesic distance transform with application in su...Tuan Q. Pham
 
Single Image Fog Removal Based on Fusion Strategy
Single Image Fog Removal Based on Fusion Strategy Single Image Fog Removal Based on Fusion Strategy
Single Image Fog Removal Based on Fusion Strategy csandit
 
isvc_draft6_final_1_harvey_mudd (1)
isvc_draft6_final_1_harvey_mudd (1)isvc_draft6_final_1_harvey_mudd (1)
isvc_draft6_final_1_harvey_mudd (1)David Tenorio
 
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...Multi-hypothesis projection-based shift estimation for sweeping panorama reco...
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...Tuan Q. Pham
 
Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction...
Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction...Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction...
Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction...ijsrd.com
 
Current issues - Signal & Image Processing: An International Journal (SIPIJ)
Current issues - Signal & Image Processing: An International Journal (SIPIJ)Current issues - Signal & Image Processing: An International Journal (SIPIJ)
Current issues - Signal & Image Processing: An International Journal (SIPIJ)sipij
 
Visual odometry & slam utilizing indoor structured environments
Visual odometry & slam utilizing indoor structured environmentsVisual odometry & slam utilizing indoor structured environments
Visual odometry & slam utilizing indoor structured environmentsNAVER Engineering
 

What's hot (20)

Graphics
GraphicsGraphics
Graphics
 
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
 
Iaetsd an enhanced circular detection technique rpsw using circular hough t...
Iaetsd an enhanced circular detection technique   rpsw using circular hough t...Iaetsd an enhanced circular detection technique   rpsw using circular hough t...
Iaetsd an enhanced circular detection technique rpsw using circular hough t...
 
COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL...
COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL...COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL...
COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL...
 
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...
 
Application of Image Retrieval Techniques to Understand Evolving Weather
Application of Image Retrieval Techniques to Understand Evolving WeatherApplication of Image Retrieval Techniques to Understand Evolving Weather
Application of Image Retrieval Techniques to Understand Evolving Weather
 
DICTA 2017 poster
DICTA 2017 posterDICTA 2017 poster
DICTA 2017 poster
 
New geometric interpretation and analytic solution for quadrilateral reconstr...
New geometric interpretation and analytic solution for quadrilateral reconstr...New geometric interpretation and analytic solution for quadrilateral reconstr...
New geometric interpretation and analytic solution for quadrilateral reconstr...
 
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
 
DTAM: Dense Tracking and Mapping in Real-Time, Robot vision Group
DTAM: Dense Tracking and Mapping in Real-Time, Robot vision GroupDTAM: Dense Tracking and Mapping in Real-Time, Robot vision Group
DTAM: Dense Tracking and Mapping in Real-Time, Robot vision Group
 
FR4.L09.5 - THREE DIMENSIONAL RECONSTRUCTION OF URBAN AREAS USING JOINTLY PHA...
FR4.L09.5 - THREE DIMENSIONAL RECONSTRUCTION OF URBAN AREAS USING JOINTLY PHA...FR4.L09.5 - THREE DIMENSIONAL RECONSTRUCTION OF URBAN AREAS USING JOINTLY PHA...
FR4.L09.5 - THREE DIMENSIONAL RECONSTRUCTION OF URBAN AREAS USING JOINTLY PHA...
 
Final Project Report Nadar
Final Project Report NadarFinal Project Report Nadar
Final Project Report Nadar
 
Parallel implementation of geodesic distance transform with application in su...
Parallel implementation of geodesic distance transform with application in su...Parallel implementation of geodesic distance transform with application in su...
Parallel implementation of geodesic distance transform with application in su...
 
Single Image Fog Removal Based on Fusion Strategy
Single Image Fog Removal Based on Fusion Strategy Single Image Fog Removal Based on Fusion Strategy
Single Image Fog Removal Based on Fusion Strategy
 
isvc_draft6_final_1_harvey_mudd (1)
isvc_draft6_final_1_harvey_mudd (1)isvc_draft6_final_1_harvey_mudd (1)
isvc_draft6_final_1_harvey_mudd (1)
 
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...Multi-hypothesis projection-based shift estimation for sweeping panorama reco...
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...
 
Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction...
Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction...Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction...
Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction...
 
Current issues - Signal & Image Processing: An International Journal (SIPIJ)
Current issues - Signal & Image Processing: An International Journal (SIPIJ)Current issues - Signal & Image Processing: An International Journal (SIPIJ)
Current issues - Signal & Image Processing: An International Journal (SIPIJ)
 
N045077984
N045077984N045077984
N045077984
 
Visual odometry & slam utilizing indoor structured environments
Visual odometry & slam utilizing indoor structured environmentsVisual odometry & slam utilizing indoor structured environments
Visual odometry & slam utilizing indoor structured environments
 

Similar to 06466595

3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an ObjectAnkur Tyagi
 
Report bep thomas_blanken
Report bep thomas_blankenReport bep thomas_blanken
Report bep thomas_blankenxepost
 
Augmented reality session 4
Augmented reality session 4Augmented reality session 4
Augmented reality session 4NirsandhG
 
Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...TELKOMNIKA JOURNAL
 
Application of Vision based Techniques for Position Estimation
Application of Vision based Techniques for Position EstimationApplication of Vision based Techniques for Position Estimation
Application of Vision based Techniques for Position EstimationIRJET Journal
 
Autonomous Perching Quadcopter
Autonomous Perching QuadcopterAutonomous Perching Quadcopter
Autonomous Perching QuadcopterYucheng Chen
 
MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...
MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...
MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...cscpconf
 
Multiple region of interest tracking of non rigid objects using demon's algor...
Multiple region of interest tracking of non rigid objects using demon's algor...Multiple region of interest tracking of non rigid objects using demon's algor...
Multiple region of interest tracking of non rigid objects using demon's algor...csandit
 
Multiple Ant Colony Optimizations for Stereo Matching
Multiple Ant Colony Optimizations for Stereo MatchingMultiple Ant Colony Optimizations for Stereo Matching
Multiple Ant Colony Optimizations for Stereo MatchingCSCJournals
 
An automatic algorithm for object recognition and detection based on asift ke...
An automatic algorithm for object recognition and detection based on asift ke...An automatic algorithm for object recognition and detection based on asift ke...
An automatic algorithm for object recognition and detection based on asift ke...Kunal Kishor Nirala
 
Building 3D Morphable Models from 2D Images
Building 3D Morphable Models from 2D ImagesBuilding 3D Morphable Models from 2D Images
Building 3D Morphable Models from 2D ImagesShanglin Yang
 
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTIONMEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTIONcscpconf
 
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTIONMEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTIONcsandit
 
Median based parallel steering kernel regression for image reconstruction
Median based parallel steering kernel regression for image reconstructionMedian based parallel steering kernel regression for image reconstruction
Median based parallel steering kernel regression for image reconstructioncsandit
 
Efficient 3D stereo vision stabilization for multi-camera viewpoints
Efficient 3D stereo vision stabilization for multi-camera viewpointsEfficient 3D stereo vision stabilization for multi-camera viewpoints
Efficient 3D stereo vision stabilization for multi-camera viewpointsjournalBEEI
 
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...csandit
 

Similar to 06466595 (20)

3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
 
Report bep thomas_blanken
Report bep thomas_blankenReport bep thomas_blanken
Report bep thomas_blanken
 
Augmented reality session 4
Augmented reality session 4Augmented reality session 4
Augmented reality session 4
 
Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...
 
Application of Vision based Techniques for Position Estimation
Application of Vision based Techniques for Position EstimationApplication of Vision based Techniques for Position Estimation
Application of Vision based Techniques for Position Estimation
 
Autonomous Perching Quadcopter
Autonomous Perching QuadcopterAutonomous Perching Quadcopter
Autonomous Perching Quadcopter
 
998-isvc16
998-isvc16998-isvc16
998-isvc16
 
Oc2423022305
Oc2423022305Oc2423022305
Oc2423022305
 
MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...
MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...
MULTIPLE REGION OF INTEREST TRACKING OF NON-RIGID OBJECTS USING DEMON'S ALGOR...
 
Multiple region of interest tracking of non rigid objects using demon's algor...
Multiple region of interest tracking of non rigid objects using demon's algor...Multiple region of interest tracking of non rigid objects using demon's algor...
Multiple region of interest tracking of non rigid objects using demon's algor...
 
Multiple Ant Colony Optimizations for Stereo Matching
Multiple Ant Colony Optimizations for Stereo MatchingMultiple Ant Colony Optimizations for Stereo Matching
Multiple Ant Colony Optimizations for Stereo Matching
 
An automatic algorithm for object recognition and detection based on asift ke...
An automatic algorithm for object recognition and detection based on asift ke...An automatic algorithm for object recognition and detection based on asift ke...
An automatic algorithm for object recognition and detection based on asift ke...
 
Building 3D Morphable Models from 2D Images
Building 3D Morphable Models from 2D ImagesBuilding 3D Morphable Models from 2D Images
Building 3D Morphable Models from 2D Images
 
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTIONMEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
 
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTIONMEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
 
Median based parallel steering kernel regression for image reconstruction
Median based parallel steering kernel regression for image reconstructionMedian based parallel steering kernel regression for image reconstruction
Median based parallel steering kernel regression for image reconstruction
 
Efficient 3D stereo vision stabilization for multi-camera viewpoints
Efficient 3D stereo vision stabilization for multi-camera viewpointsEfficient 3D stereo vision stabilization for multi-camera viewpoints
Efficient 3D stereo vision stabilization for multi-camera viewpoints
 
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
 
ei2106-submit-opt-415
ei2106-submit-opt-415ei2106-submit-opt-415
ei2106-submit-opt-415
 
poster
posterposter
poster
 

Recently uploaded

Sexy Call Girl Kumbakonam Arshi 💚9058824046💚 Kumbakonam Escort Service
Sexy Call Girl Kumbakonam Arshi 💚9058824046💚 Kumbakonam Escort ServiceSexy Call Girl Kumbakonam Arshi 💚9058824046💚 Kumbakonam Escort Service
Sexy Call Girl Kumbakonam Arshi 💚9058824046💚 Kumbakonam Escort Servicejaanseema653
 
Kolkata Call Girls Miss Inaaya ❤️ at @30% discount Everyday Call girl
Kolkata Call Girls Miss Inaaya ❤️ at @30% discount Everyday Call girlKolkata Call Girls Miss Inaaya ❤️ at @30% discount Everyday Call girl
Kolkata Call Girls Miss Inaaya ❤️ at @30% discount Everyday Call girlonly4webmaster01
 
AECS Layout Escorts (Bangalore) 9352852248 Women seeking Men Real Service
AECS Layout Escorts (Bangalore) 9352852248 Women seeking Men Real ServiceAECS Layout Escorts (Bangalore) 9352852248 Women seeking Men Real Service
AECS Layout Escorts (Bangalore) 9352852248 Women seeking Men Real ServiceAhmedabad Call Girls
 
💞 Safe And Secure Call Girls Coimbatore 🧿 9332606886 🧿 High Class Call Girl S...
💞 Safe And Secure Call Girls Coimbatore 🧿 9332606886 🧿 High Class Call Girl S...💞 Safe And Secure Call Girls Coimbatore 🧿 9332606886 🧿 High Class Call Girl S...
💞 Safe And Secure Call Girls Coimbatore 🧿 9332606886 🧿 High Class Call Girl S...India Call Girls
 
(Big Boobs Indian Girls) 💓 9257276172 💓High Profile Call Girls Jaipur You Can...
(Big Boobs Indian Girls) 💓 9257276172 💓High Profile Call Girls Jaipur You Can...(Big Boobs Indian Girls) 💓 9257276172 💓High Profile Call Girls Jaipur You Can...
(Big Boobs Indian Girls) 💓 9257276172 💓High Profile Call Girls Jaipur You Can...Joya Singh
 
Top 20 Famous Indian Female Pornstars Name List 2024
Top 20 Famous Indian Female Pornstars Name List 2024Top 20 Famous Indian Female Pornstars Name List 2024
Top 20 Famous Indian Female Pornstars Name List 2024Sheetaleventcompany
 
Call Girls Service Mohali {7435815124} ❤️VVIP PALAK Call Girl in Mohali Punjab
Call Girls Service Mohali {7435815124} ❤️VVIP PALAK Call Girl in Mohali PunjabCall Girls Service Mohali {7435815124} ❤️VVIP PALAK Call Girl in Mohali Punjab
Call Girls Service Mohali {7435815124} ❤️VVIP PALAK Call Girl in Mohali PunjabSheetaleventcompany
 
Rishikesh Call Girls Service 6398383382 Real Russian Girls Looking Models
Rishikesh Call Girls Service 6398383382 Real Russian Girls Looking ModelsRishikesh Call Girls Service 6398383382 Real Russian Girls Looking Models
Rishikesh Call Girls Service 6398383382 Real Russian Girls Looking ModelsRupali Sharma
 
Vip Call Girls Makarba 👙 6367187148 👙 Genuine WhatsApp Number for Real Meet
Vip Call Girls Makarba 👙 6367187148 👙 Genuine WhatsApp Number for Real MeetVip Call Girls Makarba 👙 6367187148 👙 Genuine WhatsApp Number for Real Meet
Vip Call Girls Makarba 👙 6367187148 👙 Genuine WhatsApp Number for Real MeetAhmedabad Call Girls
 
Low Rate Call Girls Pune {9xx000xx09} ❤️VVIP NISHA Call Girls in Pune Maharas...
Low Rate Call Girls Pune {9xx000xx09} ❤️VVIP NISHA Call Girls in Pune Maharas...Low Rate Call Girls Pune {9xx000xx09} ❤️VVIP NISHA Call Girls in Pune Maharas...
Low Rate Call Girls Pune {9xx000xx09} ❤️VVIP NISHA Call Girls in Pune Maharas...Sheetaleventcompany
 
Independent Call Girls Hyderabad 💋 9352988975 💋 Genuine WhatsApp Number for R...
Independent Call Girls Hyderabad 💋 9352988975 💋 Genuine WhatsApp Number for R...Independent Call Girls Hyderabad 💋 9352988975 💋 Genuine WhatsApp Number for R...
Independent Call Girls Hyderabad 💋 9352988975 💋 Genuine WhatsApp Number for R...Ahmedabad Call Girls
 
Indore Call Girl Service 📞9235973566📞Just Call Inaaya📲 Call Girls In Indore N...
Indore Call Girl Service 📞9235973566📞Just Call Inaaya📲 Call Girls In Indore N...Indore Call Girl Service 📞9235973566📞Just Call Inaaya📲 Call Girls In Indore N...
Indore Call Girl Service 📞9235973566📞Just Call Inaaya📲 Call Girls In Indore N...Sheetaleventcompany
 
Independent Call Girls Service Chandigarh | 8868886958 | Call Girl Service Nu...
Independent Call Girls Service Chandigarh | 8868886958 | Call Girl Service Nu...Independent Call Girls Service Chandigarh | 8868886958 | Call Girl Service Nu...
Independent Call Girls Service Chandigarh | 8868886958 | Call Girl Service Nu...Sheetaleventcompany
 
9316020077📞Majorda Beach Call Girls Numbers, Call Girls Whatsapp Numbers Ma...
9316020077📞Majorda Beach Call Girls  Numbers, Call Girls  Whatsapp Numbers Ma...9316020077📞Majorda Beach Call Girls  Numbers, Call Girls  Whatsapp Numbers Ma...
9316020077📞Majorda Beach Call Girls Numbers, Call Girls Whatsapp Numbers Ma...Goa cutee sexy top girl
 
(Deeksha) 💓 9920725232 💓High Profile Call Girls Navi Mumbai You Can Get The S...
(Deeksha) 💓 9920725232 💓High Profile Call Girls Navi Mumbai You Can Get The S...(Deeksha) 💓 9920725232 💓High Profile Call Girls Navi Mumbai You Can Get The S...
(Deeksha) 💓 9920725232 💓High Profile Call Girls Navi Mumbai You Can Get The S...Ahmedabad Call Girls
 
vadodara Call Girls 👙 6297143586 👙 Genuine WhatsApp Number for Real Meet
vadodara Call Girls 👙 6297143586 👙 Genuine WhatsApp Number for Real Meetvadodara Call Girls 👙 6297143586 👙 Genuine WhatsApp Number for Real Meet
vadodara Call Girls 👙 6297143586 👙 Genuine WhatsApp Number for Real MeetCall Girls Chandigarh
 
Escorts Lahore || 🔞 03274100048 || Escort service in Lahore
Escorts Lahore || 🔞 03274100048 || Escort service in LahoreEscorts Lahore || 🔞 03274100048 || Escort service in Lahore
Escorts Lahore || 🔞 03274100048 || Escort service in LahoreDeny Daniel
 
visakhapatnam Call Girls 👙 6297143586 👙 Genuine WhatsApp Number for Real Meet
visakhapatnam Call Girls 👙 6297143586 👙 Genuine WhatsApp Number for Real Meetvisakhapatnam Call Girls 👙 6297143586 👙 Genuine WhatsApp Number for Real Meet
visakhapatnam Call Girls 👙 6297143586 👙 Genuine WhatsApp Number for Real MeetCall Girls Chandigarh
 
Call Girl in Indore 8827247818 {Low Price}👉 Meghna Indore Call Girls * DXZ...
Call Girl in Indore 8827247818 {Low Price}👉   Meghna Indore Call Girls  * DXZ...Call Girl in Indore 8827247818 {Low Price}👉   Meghna Indore Call Girls  * DXZ...
Call Girl in Indore 8827247818 {Low Price}👉 Meghna Indore Call Girls * DXZ...mahaiklolahd
 
Sexy Call Girl Villupuram Arshi 💚9058824046💚 Villupuram Escort Service
Sexy Call Girl Villupuram Arshi 💚9058824046💚 Villupuram Escort ServiceSexy Call Girl Villupuram Arshi 💚9058824046💚 Villupuram Escort Service
Sexy Call Girl Villupuram Arshi 💚9058824046💚 Villupuram Escort Servicejaanseema653
 

Recently uploaded (20)

Sexy Call Girl Kumbakonam Arshi 💚9058824046💚 Kumbakonam Escort Service
Sexy Call Girl Kumbakonam Arshi 💚9058824046💚 Kumbakonam Escort ServiceSexy Call Girl Kumbakonam Arshi 💚9058824046💚 Kumbakonam Escort Service
Sexy Call Girl Kumbakonam Arshi 💚9058824046💚 Kumbakonam Escort Service
 
Kolkata Call Girls Miss Inaaya ❤️ at @30% discount Everyday Call girl
Kolkata Call Girls Miss Inaaya ❤️ at @30% discount Everyday Call girlKolkata Call Girls Miss Inaaya ❤️ at @30% discount Everyday Call girl
Kolkata Call Girls Miss Inaaya ❤️ at @30% discount Everyday Call girl
 
AECS Layout Escorts (Bangalore) 9352852248 Women seeking Men Real Service
AECS Layout Escorts (Bangalore) 9352852248 Women seeking Men Real ServiceAECS Layout Escorts (Bangalore) 9352852248 Women seeking Men Real Service
AECS Layout Escorts (Bangalore) 9352852248 Women seeking Men Real Service
 
💞 Safe And Secure Call Girls Coimbatore 🧿 9332606886 🧿 High Class Call Girl S...
💞 Safe And Secure Call Girls Coimbatore 🧿 9332606886 🧿 High Class Call Girl S...💞 Safe And Secure Call Girls Coimbatore 🧿 9332606886 🧿 High Class Call Girl S...
💞 Safe And Secure Call Girls Coimbatore 🧿 9332606886 🧿 High Class Call Girl S...
 
(Big Boobs Indian Girls) 💓 9257276172 💓High Profile Call Girls Jaipur You Can...
(Big Boobs Indian Girls) 💓 9257276172 💓High Profile Call Girls Jaipur You Can...(Big Boobs Indian Girls) 💓 9257276172 💓High Profile Call Girls Jaipur You Can...
(Big Boobs Indian Girls) 💓 9257276172 💓High Profile Call Girls Jaipur You Can...
 
Top 20 Famous Indian Female Pornstars Name List 2024
Top 20 Famous Indian Female Pornstars Name List 2024Top 20 Famous Indian Female Pornstars Name List 2024
Top 20 Famous Indian Female Pornstars Name List 2024
 
motion when the camera rotates by large angles.
Finally, we combine the two proposed algorithms to recover both the 3D structure and the camera motion from a sequence of images.

The rest of this paper is structured as follows. First, a review of typical works is given. Section 3 presents the improvement to the SAD. The KLT enhancement is described in section 4. The two proposed algorithms are evaluated separately, and the whole model is also evaluated, in section 5. Conclusions and future work are discussed in the final section.

II. RELATED WORKS

There are many types of omni camera. Jonathan Foote and Don Kimber [8] built a system called FlyCam, a combination of multiple similar cameras, much like the RingCam from Microsoft Research. Such systems produce high-resolution, robust images, but are expensive to build and operate. Nalwa from AT&T Bell Laboratories integrated mirrors into his camera system, but boundaries remain visible in the captured images and are hard to eliminate. A simpler omni camera is constructed from one mirror and one normal camera; this paper uses this kind of omni camera due to its low cost and simplicity.

Gaspar [1] proposed a method for de-wrapping images from an omni camera into panoramic and bird's-eye-view images in order to extract vertical and horizontal lines for localizing and navigating robots. Their approach uses only one omni camera, so it is cheap and fast, but it cannot detect obstructions.

In the robot ego-motion research field, Gluckman and Nayar [9] presented a method for recovering ego-motion with omni cameras by mapping the image velocity vectors to a sphere using the Jacobian matrix. Scaramuzza et al. [5] used RANSAC to exclude outliers for a better recovery of the trajectories of robots moving over long distances. In 3D reconstruction research, Herran [6] presented the main theory of a 3D reconstruction and motion estimation method using only one omni camera.
III. AN SAD IMPROVEMENT FOR RECONSTRUCTING REPETITIVE TEXTURES IN A 3D ENVIRONMENT

A. An Omni Camera System Arrangement

In this paper, a system of two 360° field-of-view cameras is studied, as in [2], due to its low cost and simplicity yet efficient computation. Fig. 1, adapted from [11], shows the camera model. Each camera consists of one normal camera and one parabolic mirror, capturing a whole 360° horizontal view. The cameras are fixed so that the extrinsic parameters are known in advance: the translation vector is T = (0, 0, t), where t is the vertical distance between the two cameras, and the rotation matrix R is the identity matrix I.

Fig. 1. An omni camera system built from a mirror and a normal camera.

First, the captured images are de-wrapped into panoramic images. In this way, corresponding feature points lie on the same vertical lines (Fig. 2). The images are then blurred with a Gaussian mask, to make matching less sensitive to low-contrast pixels, and a Sobel filter is applied to find dominant edges. Finally, the SAD method searches for similar points along the same vertical line, considering only points whose intensity is larger than a threshold. The displacement between the two captured images is very small due to the arrangement. The SAD measure between points A and B is the sum of absolute differences of the pixel intensities surrounding A and the corresponding pixels surrounding B:

SAD(A, B) = \sum_{i} \sum_{j} | I_A(x_A + i, y_A + j) - I_B(x_B + i, y_B + j) |    (1)

B. An Improved SAD Algorithm for Matching Points on Repetitive Textures

Suppose the two red points in Fig. 3 are matched exactly between the two images because their color features are quite different from the others'. When searching for a match for the green point from image A, two points are found in image B, marked as the yellow point and the green point, both of which look like the green point in A if only the pixels surrounding each point are taken into account. So an improvement to the SAD is proposed to overcome this shortcoming. The idea is that if more than one point in image B matches a point in image A, we consider additional pixels along the lines connecting these points to another point that has already been matched reliably (i.e., the red points in Fig. 3). The original SAD is modified as follows:

Step 1: Calculate the SAD values of the candidate pixels (e.g., the green and yellow pixels in image B).
Step 2: Calculate the sum of intensity differences between pixels on the lines connecting each candidate pixel to another pixel that has been matched before (e.g., the red lines connecting to the red point in image B).
Step 3: Compare the totals of steps 1 and 2 across the candidates and choose the candidate with the lowest value.

So, while searching for the correspondent in image B of a point in image A, the original SAD is applied first to find pairs of pixels whose SAD values are less than a threshold. If only one pair is found, we call it a successful match and proceed to the next point in A. If more than one pair satisfies the threshold, another pair of pixels in A and B that has been matched successfully before is chosen, and the three steps above are followed, as in the sketch below. The enhanced SAD is more efficient in this situation than SIFT or the Harris corner detector, which makes it well suited to the 3D reconstruction task, but it requires that the two cameras use the same color system.
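For concreteness, the matching rule above can be sketched in Python/NumPy. This is a minimal illustration under our own assumptions, not the paper's implementation: images are grayscale arrays, points are (x, y) tuples away from the border, and the names sad, line_profile and disambiguate, as well as the window and sampling sizes, are ours.

import numpy as np

def sad(img_a, img_b, pa, pb, w=3):
    # Sum of absolute differences (eq. 1) between (2w+1)x(2w+1)
    # patches centred on pa in image A and pb in image B.
    (xa, ya), (xb, yb) = pa, pb
    patch_a = img_a[ya - w:ya + w + 1, xa - w:xa + w + 1].astype(np.int32)
    patch_b = img_b[yb - w:yb + w + 1, xb - w:xb + w + 1].astype(np.int32)
    return np.abs(patch_a - patch_b).sum()

def line_profile(img, p, q, n=20):
    # Intensity profile sampled along the segment from p to q (step 2).
    xs = np.linspace(p[0], q[0], n).round().astype(int)
    ys = np.linspace(p[1], q[1], n).round().astype(int)
    return img[ys, xs].astype(np.int32)

def disambiguate(img_a, img_b, p_a, candidates, anchor_a, anchor_b):
    # Steps 1-3: among candidates in B that tie under plain SAD, prefer
    # the one whose line to an already-matched anchor point looks most
    # like the corresponding line in A.
    ref = line_profile(img_a, p_a, anchor_a)
    best, best_total = None, np.inf
    for p_b in candidates:
        total = sad(img_a, img_b, p_a, p_b) \
                + np.abs(ref - line_profile(img_b, p_b, anchor_b)).sum()
        if total < best_total:
            best, best_total = p_b, total
    return best

Here the candidates are the pixels on the vertical epipolar line whose plain SAD score already fell below the threshold, and the anchor is any nearby pair matched uniquely beforehand.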
C. An Application in 3D Reconstruction

Fig. 2. Two similar points lie on a vertical line in panoramic images.
Fig. 3. The proposed improvement for SAD.

After finding corresponding points, Scaramuzza's calibration toolbox is used to find the two projection vectors from
a point in the 3D environment to the two cameras' images. The intersection of the two projection vectors reveals the point's position. Let camera A be the origin and B = (0, 0, t) the translation vector, where t is the vertical distance between A and B (Fig. 4). The two projection equations are:

V_1 = \lambda n_1, \quad V_2 = \mu n_2 + B    (2)

where n_1 and n_2 are the two projection vectors from a point M onto the two cameras, and \lambda and \mu are two ratio coefficients. Setting V_1 = V_2, this system can be solved for \lambda and \mu. However, in a real-world system, errors can make the projection vectors cross each other instead of intersecting. The following solution is derived from [11]. Let d be the distance between n_1 and n_2; we have

d = (B - A) \cdot \frac{n_1 \times n_2}{\| n_1 \times n_2 \|} = B \cdot \frac{n_1 \times n_2}{\| n_1 \times n_2 \|}    (3)

Fig. 4. Localization of a point M in the 3D environment.

From (2) and (3) we have:

\lambda n_1 = n_2    (4)

As observed in (4), the vectors n_1 and n_2 have the same direction up to the scale \lambda, so the value of \lambda is given by:

\lambda = \frac{n_2[x]}{n_1[x]} = \frac{n_2[y]}{n_1[y]} = \frac{n_2[z]}{n_1[z]}    (5)

The value of \mu can be found in the same way. Finally, substituting \lambda and \mu into (2) gives M's coordinates; a numerical sketch follows below.

To conclude this section: this localization method can locate points along edges in a 3D environment. It takes advantage of the two vertically arranged omni cameras for a simple and fast matching approach. However, it performs worse on smooth textures.
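A minimal numerical sketch of this step, assuming unit ray directions n1 and n2 recovered by the calibration toolbox: rather than the component ratios of (5), it solves system (2) in the least-squares sense and returns the midpoint of the closest points on the two rays, which coincides with (5) when the rays truly intersect and stays stable when they are skew by the gap d of (3).

import numpy as np

def triangulate(n1, n2, t):
    # Locate point M from two projection rays (eq. 2).
    # n1: ray from camera A (the origin) toward M
    # n2: ray from camera B = (0, 0, t) toward M
    B = np.array([0.0, 0.0, t])
    # Solve lambda * n1 - mu * n2 = B in the least-squares sense.
    A_mat = np.stack([n1, -n2], axis=1)
    (lam, mu), *_ = np.linalg.lstsq(A_mat, B, rcond=None)
    p1 = lam * n1           # closest point on ray 1
    p2 = B + mu * n2        # closest point on ray 2
    return 0.5 * (p1 + p2)  # estimate of M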
IV. EGO-MOTION ESTIMATION USING OMNI CAMERAS

In this paper, we focus on ego-motion estimation based on visual features. Let X_{k-1} and X_k be the coordinates of a point X at times k-1 and k. To estimate the camera motion, we need to monitor how features change from time k-1 to time k:

X_k = R X_{k-1} + T    (6)

where R is a 3×3 rotation matrix and T is a 3×1 translation vector. Equation (6) is rewritten in homogeneous coordinates as

X_k = F(R, T, X) = M_k X_{k-1},

M_k = \begin{pmatrix}
\cos r_y \cos r_z & \sin r_x \sin r_y \cos r_z - \cos r_x \sin r_z & \cos r_x \sin r_y \cos r_z + \sin r_x \sin r_z & t_x \\
\cos r_y \sin r_z & \sin r_x \sin r_y \sin r_z + \cos r_x \cos r_z & \cos r_x \sin r_y \sin r_z - \sin r_x \cos r_z & t_y \\
-\sin r_y & \sin r_x \cos r_y & \cos r_x \cos r_y & t_z \\
0 & 0 & 0 & 1
\end{pmatrix}    (7)

The motion parameters to be estimated are r_x, r_y, r_z, t_x, t_y, t_z. This paper's approach is as follows: first, feature points are tracked by the SAD and KLT methods; next, RANSAC removes outliers; then the motion parameters are estimated by the Gauss-Newton algorithm from the best consensus set; finally, the results are filtered by a Kalman filter, under the assumption that the noise follows a Gaussian distribution. However, the original KLT suffers from the camera's large rotation angles, which could affect the input to RANSAC, so an enhancement of the KLT to deal with this problem is proposed later in this section. Finally, we present a method for 3D map reconstruction based on the ego-motion estimation and 3D reconstruction algorithms presented so far.

A. Searching, Tracking and Localizing Feature Points

In the first step, features are tracked by KLT, and their positions can be located by SAD. Both methods depend on changes in image pixel intensity, so feature points detected by KLT and located by SAD can be matched together well. The process is:

Step 1: 3D point localization by SAD. The SAD is used to build the following sets:
D_a = {points' positions relative to the camera at time t_a, based on the two images I_a1, I_a2}
P_a = {points' coordinates in I_a1 located by SAD (their positions are stored in D_a)}
D_b = {points' positions relative to the camera at time t_b, based on the two images I_b1, I_b2}
P_b = {points' coordinates in I_b1 located by SAD (their positions are stored in D_b)}

Step 2: Feature searching and tracking. KLT searches for features in image I_a1 and then tracks them in image I_b1. After this step we have:
F_a = {features' coordinates in image I_a1 that can be tracked in image I_b1}
F_b = {features' coordinates in image I_b1}

Step 3: Feature localization. In this step, feature points that could not be localized in step 1 are removed by the following strategy: for each feature point in F_a, check whether it is in P_a; if not, remove it from F_a and remove the corresponding point from F_b. After finishing with F_a, repeat the process for F_b. A point a(x, y) in F_a is considered to be in P_a if

\exists\, b(x', y') \;\text{s.t.}\; \sqrt{(x - x')^2 + (y - y')^2} < \varepsilon    (8)

where \varepsilon is a predefined threshold (a sketch of this check follows below). From D_a, D_b and the new F_a, F_b, we build the two sets:
M_a = {3D coordinates of the feature points in F_a, from camera 1 at time a}
M_b = {3D coordinates of the feature points in F_b, from camera 1 at time b}

Note that M_a and M_b have the same number of elements, and the i-th elements of the two sets are the coordinates of the i-th feature point at the two times. Now the features' coordinates at the different times are known. Some errors may remain at this step, so next the RANSAC algorithm is used to remove outliers.
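The proximity test (8) and the construction of the matched 3D sets can be sketched as follows; the names and the array layout are our own simplification of the sets above (F holds tracked feature coordinates, P the SAD-localized image coordinates, D the corresponding 3D positions).

import numpy as np

def associate(F, P, D, eps=2.0):
    # Keep only tracked features that SAD could also localize:
    # feature a in F survives if some b in P satisfies eq. (8),
    # i.e. lies within eps pixels; its 3D position is taken from D.
    kept_idx, coords_3d = [], []
    for i, a in enumerate(F):
        d2 = np.sum((P - a) ** 2, axis=1)   # squared distances to all b
        j = int(np.argmin(d2))
        if d2[j] < eps ** 2:
            kept_idx.append(i)
            coords_3d.append(D[j])
    return kept_idx, np.array(coords_3d)

Running this once for time a and once for time b, and keeping only the features that survive both passes, yields the sets M_a and M_b.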
B. Outlier Removal by RANSAC

Step 1: Choose initial elements randomly. Choose 3 elements of M_a at random, together with the 3 corresponding elements of M_b.
Step 2: Estimate motion parameters. Estimate the motion parameters from the chosen elements; the next subsection presents how this is done.
Step 3: Build a rule matrix M and a consensus set C_k. A consensus element is an element satisfying the rule matrix M based on (7): an element of M_a satisfies the rule matrix M when applying M to it yields an error, relative to the corresponding element of M_b, that is less than a predefined threshold. Each element of M_a satisfying this rule is put into the current consensus set C_k.
Step 4: Choose the best consensus set C. At each iteration the largest consensus set is kept; after a number of iterations, the best consensus set C is found.

C. Motion Parameter Estimation

This process is applied not only in Step 2 above to find the best consensus set C, but also on C itself to estimate the final motion parameters, using the Gauss-Newton optimization algorithm to minimize the squared error

\sum_i \| X_{k,i} - F(R, T, X_{k-1,i}) \|^2    (9)

where X_{k,i} is the coordinate of X_i seen from the camera at time k and X_{k-1,i} is its coordinate at time k-1. A combined sketch of the RANSAC loop and this estimation step is given below.
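The following sketch combines the sampling loop with the minimization of (9). It is an illustration under stated assumptions, not the paper's code: motion_matrix implements the matrix of (7), SciPy's general least-squares solver stands in for a plain Gauss-Newton iteration, and the iteration count and inlier threshold are placeholders.

import numpy as np
from scipy.optimize import least_squares

def motion_matrix(p):
    # Homogeneous transform of eq. (7) from p = (rx, ry, rz, tx, ty, tz),
    # i.e. R = Rz(rz) @ Ry(ry) @ Rx(rx) plus a translation column.
    rx, ry, rz, tx, ty, tz = p
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    M = np.eye(4)
    M[:3, :3] = [[cy*cz, sx*sy*cz - cx*sz, cx*sy*cz + sx*sz],
                 [cy*sz, sx*sy*sz + cx*cz, cx*sy*sz - sx*cz],
                 [-sy,   sx*cy,            cx*cy]]
    M[:3, 3] = tx, ty, tz
    return M

def estimate_motion(Ma, Mb, p0=np.zeros(6)):
    # Minimize eq. (9) over a set of 3D correspondences.
    Xa = np.hstack([Ma, np.ones((len(Ma), 1))])  # homogeneous coordinates
    def residual(p):
        return ((Xa @ motion_matrix(p).T)[:, :3] - Mb).ravel()
    return least_squares(residual, p0).x

def ransac_motion(Ma, Mb, iters=200, thresh=0.05):
    # Steps 1-4: sample 3 correspondences, fit, keep the largest
    # consensus set, then refit on that set alone.
    rng = np.random.default_rng(0)
    best = np.zeros(0, dtype=int)
    Xa = np.hstack([Ma, np.ones((len(Ma), 1))])
    for _ in range(iters):
        idx = rng.choice(len(Ma), size=3, replace=False)
        p = estimate_motion(Ma[idx], Mb[idx])
        err = np.linalg.norm((Xa @ motion_matrix(p).T)[:, :3] - Mb, axis=1)
        inliers = np.flatnonzero(err < thresh)
        if len(inliers) > len(best):
            best = inliers
    return estimate_motion(Ma[best], Mb[best]), best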
D. Filtering by a Kalman Filter

The estimated results are filtered by a Kalman filter. Here, the velocity v and acceleration a of the camera are the state to be updated, with v = (R, T)^T / \Delta t, where \Delta t is the time shift between two frames. The state equation is

\begin{pmatrix} v \\ a \end{pmatrix}_{(t)} = \begin{pmatrix} I & \Delta t\, I \\ 0 & I \end{pmatrix} \begin{pmatrix} v \\ a \end{pmatrix}_{(t-1)} + e    (10)

and the result is read out from the filtered state as

\frac{1}{\Delta t} \begin{pmatrix} r \\ t \end{pmatrix}_{(t)} = \begin{pmatrix} I & 0 \end{pmatrix} \begin{pmatrix} v \\ a \end{pmatrix}_{(t)}    (11)

where I is the 6×6 identity matrix and e is Gaussian noise. A sketch of one predict/update cycle follows below.
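One predict/update cycle of the filter defined by (10) and (11) can be sketched as follows; the noise covariances q and r are our own placeholders, since the paper only assumes that the noise is Gaussian.

import numpy as np

def kalman_step(x, P, z, dt, q=1e-3, r=1e-2):
    # State x = [v; a] stacks the 6-vector velocity and acceleration;
    # the measurement z = (r, t) / dt is the raw motion estimate divided
    # by the frame interval, as in eq. (11).
    I6 = np.eye(6)
    F = np.block([[I6, dt * I6], [np.zeros((6, 6)), I6]])  # eq. (10)
    H = np.hstack([I6, np.zeros((6, 6))])                  # eq. (11)
    Q, R = q * np.eye(12), r * np.eye(6)
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # update with measurement
    P = (np.eye(12) - K @ H) @ P
    return x, P   # the smoothed (r, t) is then dt * x[:6]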
E. Camera Motion Estimation

The results obtained so far are the rotation and translation of the feature points with respect to the camera; the camera motion is computed in reverse. Let O_{k-1} and O_k be the camera coordinates at times k-1 and k, with the system's coordinate origin at the camera's first position. The relationship between O_{k-1} and O_k is

O_k = (M_k)^{-1} O_{k-1}    (12)

where M_k is the matrix from (7).

F. Solving the Camera's Large-Rotation-Angle Problem

With omni cameras, feature points remain in view despite large rotation angles, but the KLT cannot keep up with such changes. So an intermediate step is proposed: instead of matching the image at time K directly against the image at time K+1, it is matched against several temporary images derived from the image at time K+1 by different rotation angles, until the KLT finds sufficiently many points. The final rotation angle is then the sum of the rotation angle estimated from the intermediate image and the rotation angle used to transform the raw image into the intermediate image. The enhanced KLT algorithm is summarized below and in Fig. 5; a code sketch follows the step list.

Step 1: Rotate the image at time K+1 by an angle Θ into an image K'.
Step 2: Compare the image at time K with K' using the KLT; suppose N points are matched successfully.
Step 3: If N is large enough, proceed to step 4. Otherwise set Θ = Θ + 18°; if Θ < 360°, go back to step 1; once Θ reaches 360°, stop and return the Θ_i corresponding to the largest number of successfully matched points.
Step 4: Transform the coordinates in the image K' corresponding to Θ_i into coordinates in the image at time K+1.

Figure 7a shows the feature points found in image K. As observed in Fig. 7b, only 36 feature points are matched in image K+1 when the camera rotates by 18°, whereas the improved KLT matches 91 feature points, as in Fig. 7c.

Fig. 5. Ego-motion estimation diagram.
Fig. 7. Comparison between the original KLT and the improved KLT on the number of feature points tracked.
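A sketch of the rotation sweep of steps 1-3, using OpenCV's pyramidal Lucas-Kanade tracker as a stand-in for the paper's KLT implementation. The 18° step is the paper's; min_pts and the other details are our own assumptions, pts_k must be an N x 1 x 2 float32 array as OpenCV expects, and step 4 (mapping the surviving coordinates back through the inverse rotation) is omitted for brevity.

import cv2
import numpy as np

def rotated_klt(img_k, img_k1, pts_k, min_pts=50, step_deg=18):
    # If plain tracking finds too few points, pre-rotate the frame at
    # time K+1 in step_deg increments until enough features survive.
    h, w = img_k1.shape[:2]
    best = (0, np.empty((0, 1, 2), np.float32), -1)  # (theta, pts, count)
    for theta in range(0, 360, step_deg):
        R = cv2.getRotationMatrix2D((w / 2, h / 2), theta, 1.0)
        warped = cv2.warpAffine(img_k1, R, (w, h))   # intermediate image K'
        pts, status, _ = cv2.calcOpticalFlowPyrLK(img_k, warped, pts_k, None)
        ok = status.ravel() == 1
        if ok.sum() >= min_pts:
            return theta, pts[ok]        # enough matches: stop early
        if ok.sum() > best[2]:
            best = (theta, pts[ok], int(ok.sum()))
    return best[0], best[1]              # best effort after a full sweep

The returned coordinates live in the rotated frame, so the final rotation estimate is the angle recovered from the intermediate image plus theta, exactly as described above.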
G. An Application in 3D Structure Reconstruction from Robot Motion

The proposed methods are combined in a structure-from-motion application by the following process (Fig. 6):

Step 1: Perform the 3D reconstruction with the enhanced SAD from the two cameras' images at time t-1 and at time t.
Step 2: Estimate the camera motion.
Step 3: Map the coordinates of the points at time t into the camera coordinate system at time t-1 using the estimated motion parameters, and combine the two reconstructed images (a sketch follows below).

Fig. 6. Structure from motion based on the proposed methods.
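Step 3 can be sketched as below, taking the homogeneous motion matrix of (7) as input. The direction of the mapping is an assumption on our part: we take the estimated parameters to move point coordinates from the frame at time t-1 to time t, so the inverse transform carries the newly reconstructed cloud back, in the spirit of (12).

import numpy as np

def fuse_scans(points_prev, points_curr, M_k):
    # points_prev: (N, 3) cloud reconstructed at time t-1
    # points_curr: (M, 3) cloud reconstructed at time t
    # M_k: 4x4 homogeneous motion matrix of eq. (7)
    Xc = np.hstack([points_curr, np.ones((len(points_curr), 1))])
    mapped = (Xc @ np.linalg.inv(M_k).T)[:, :3]  # back into the t-1 frame
    return np.vstack([points_prev, mapped])      # combined reconstruction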
V. EXPERIMENTS

A. Experimental Environment

3D objects are constructed with Google Sketchup 8; Pov-ray 3.7 is used to render the 3D scenes and to simulate the lighting and the omni cameras, as in [6]. Both tools are free and sufficient for setting up the experiments. A further advantage is that virtual environment conditions are easily controlled; moreover, camera faults and lopsided arrangements are eliminated. The experimental setup process is:
1) Build 3D models using Google Sketchup.
2) Convert the Google Sketchup 3D models into Pov-ray specification files.
3) Use Pov-ray to create the two mirrors and place the built-in cameras at the foci of the mirrors.
4) Simulate camera movement: Pov-ray supports creating a sequence of continuous frames to simulate camera movement along a spline through predefined points.
5) Render the frames from step 4. The images captured by the two simulated omni cameras are saved to disk. In our experiments it took 4 to 5 hours to render 200 frames from the two cameras.
6) Evaluate the proposed algorithms.

B. Point Localization Using the Improved SAD

This part evaluates the performance of the improved SAD in the reconstruction task and compares it with the original SAD. The experiment is constructed as follows:
1) A wall with a brick texture is rendered by Pov-ray.
2) Two omni cameras (at heights of 1.2 m and 1.4 m) take two photos at a distance of 2 m (Fig. 8a).

Both the SAD and the enhanced SAD are used for matching similar points and reconstructing the captured scene. As observed in Fig. 8b, the wall reconstructed by the enhanced SAD (left) is clearer and has fewer errors than that of the original SAD (right). This is because the bricks look similar, causing the original SAD to mismatch. However, experiments on non-repetitive or complex objects give the same results for both algorithms.

Fig. 8. Wall photos taken by the two cameras (a) and reconstructed (b) using the improved SAD (left) and the original SAD (right).

C. Ego-Motion Estimation with the Improved KLT

This experiment evaluates the enhanced KLT's performance against the camera's large rotation angles. The process is:
1) The camera system moves and takes photos along a spline (Fig. 9a) between two walls.
2) It takes one photo every 5 cm, for a total of 99 photos over 490 cm.
3) The KLT and its enhancement are compared on the reconstructed road shapes and the calculated road lengths.

Figure 9b shows the road shape reconstructed with the improved KLT; the estimated road length is 488.7 cm, while the road map estimated by the original KLT in Fig. 9c is 475.7 cm long. Due to a big turn at the end of the road, the number of points the original KLT retains is not sufficient for estimating the motion and calculating the rotation angles; the improved KLT outperforms it on this part of the road. This result shows the effectiveness of the intermediate step.

Fig. 9. Road maps (a) reconstructed with the improved KLT (b) and the original KLT (c), compared to the predefined road map.

D. Combination of the Proposed Methods

In this final experiment, all the proposed methods are combined to test their performance in a large environment over long-term operation. A building with rooms and objects along the walls is constructed as in Fig. 10a. The camera system follows the corridor (Fig. 10b) and changes speed over time (faster when going straight, slower when making a turn). Each camera captured 349 frames in total. The reconstructed room and road map in Fig. 10c are similar to the originals in Fig. 10a and Fig. 10b.

Fig. 10. The room (a) and the road map model (b), reconstructed using the proposed methods (c).

VI. CONCLUSIONS AND FUTURE WORK

Some enhancements have been presented to deal with the problems of localizing points and estimating ego-motion using two omni cameras. The enhanced SAD deals with repetitive textures better than the original method. The omni camera's ability to capture a wide view is a big advantage when improving the KLT algorithm, combined with the Gauss-Newton optimization algorithm, consensus building with RANSAC, and filtering with a Kalman filter for better movement estimation. The experiments were performed in simulated environments close to real-world conditions, yet easier to control, which could be a premise for later research. The combination of the proposed methods retains simplicity and effectiveness and is much faster than using SIFT features [10]. In the future, we plan to add more functions for practical robot navigation systems, such as obstacle detection as in [11] and localization based on landmarks [12]. Besides, we could model the noise with a Gaussian mixture model and apply a particle filter.

REFERENCES

[1] José António da Cruz Pinto Gaspar, "Omnidirectional Vision for Mobile Robot Navigation," PhD thesis, 2002.
[2] Hiroshi Koyasu, Jun Miura and Yoshiaki Shirai, "Realtime omnidirectional stereo for obstacle detection and tracking in dynamic environments," Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Maui, Hawaii, 2001, pp. 31-36.
[3] Davide Scaramuzza, A. Martinelli, and R. Siegwart, "A toolbox for easily calibrating omnidirectional cameras," IEEE International Conference on Intelligent Robots and Systems (IROS), 2006.
[4] Davide Scaramuzza, "Omnidirectional vision: from calibration to robot motion estimation," PhD thesis, M.S. Electronic Engineering, Università di Perugia, Italy, 2008.
[5] Davide Scaramuzza, Friedrich Fraundorfer, and Roland Siegwart, "Real-time monocular visual odometry for on-road vehicles with 1-point RANSAC," IEEE Conference on Robotics and Automation, Japan, 2009, pp. 4293-4299.
[6] Jose L. R. Herran, "OMNIVIS: 3D space and camera path reconstruction for omnidirectional vision," Master's thesis in the Field of Information Technology, Harvard University, 2010.
[7] Andrew Howard, "Real-time stereo visual odometry for autonomous ground vehicles," IEEE International Conference on Intelligent Robots and Systems (IROS), France, 2008.
[8] Jonathan Foote and Don Kimber, "FlyCam: practical panoramic video and automatic camera control," in Proceedings of the IEEE International Conference on Multimedia, 2000, pp. 1419-1422.
[9] Joshua Gluckman and Shree K. Nayar, "Ego-motion and omnidirectional cameras," in Proceedings of the Sixth International Conference on Computer Vision (ICCV), 1998.
[10] Dong-Fan Shen, Jong-Shill Lee, Se-Kee Kil, Je-Goon Ryu, Eung-Hyuk Lee, and Seung-Hong Hong, "3D reconstruction of scale-invariant features for mobile robot localization," Int. Journal of Computer Science and Network Security (IJCSNS), vol. 6, no. 3B, 2006.
[11] Ola Millnert, Toon Goedemé, Tinne Tuytelaars, Luc Van Gool, Alexander Huntemann, and Marnix Nuttin, "Range determination for mobile robots using an omnidirectional camera," Integrated Computer-Aided Engineering, vol. 14, issue 1, 2007, pp. 63-72.
[12] C. Madsen and C. Andersen, "Optimal landmark selection for triangulation of robot position," J. Robotics and Autonomous Systems, 1998.