Visual Odometry(2)

Presentation material about "A Visual Odometry Method Based on the SwissRanger SR4000" by Cang Ye and Michael Bruch.

  1. Visual odometry(2) - 23 May 2011, Cyphy Lab, Inkyu Sa
  2. Contents
     • Paper name, authors, publication
     • Abstract
     • Visual odometry
     • Experiments and results
     • My thoughts
     • Current status
  3. Paper name
     • "A Visual Odometry Method Based on the SwissRanger SR4000"
     • Cang Ye* and Michael Bruch†
       * University of Arkansas at Little Rock, 2801 S. University Ave, Little Rock, AR, USA 72204
       † Space and Naval Warfare Systems Center Pacific, 53560 Hull Street, San Diego, CA 92152
     • Unmanned Systems Technology XII, Proc. of SPIE Vol. 7692, 76921I, 2010
  4. Abstract
     This paper presents a pose estimation method based on a 3D camera, the SwissRanger SR4000. The proposed method estimates the camera's ego-motion by using the intensity and range data produced by the camera. It detects SIFT (Scale-Invariant Feature Transform) features in one intensity image and matches them to those in the next intensity image. The resulting 3D data point pairs are used to compute the least-squares rotation and translation matrices, from which the attitude and position changes between the two image frames are determined. The method uses feature descriptors to perform feature matching. It works well with large image motion between two frames without the need for a spatial correlation search. Due to the SR4000's consistent accuracy in depth measurement, the proposed method may achieve better pose estimation accuracy than a stereo vision-based approach. Another advantage of the proposed method is that the range data of the SR4000 is complete and can therefore be used for obstacle avoidance/negotiation. This makes it possible to navigate a mobile robot using a single perception sensor. In this paper, we validate the idea of the pose estimation method and characterize the method's pose estimation performance.
  5. Visual odometry(1)
     Pipeline: SIFT feature detection and description (in frames t-1 and t) → descriptor matching → outlier removal using RANSAC.
     Unfortunately, the matching method is not presented in this paper. I guess that it could be either "L1" or "L2" matching of descriptors.
  6. Visual odometry(2): Descriptor matching
     Frames t-1 and t are roughly 100 ms apart.
     1. Calculate the Euclidean distance (L2) or Manhattan distance (L1) between a feature in frame t-1 and features a, b, c, d, ... in frame t.
     2. Find the minimum value e = min √(|x_i − x_j|² + |y_i − y_j|²), where i and j denote features in the t-1 and t frames respectively.
     Complexity is O(n²). This method often produces mismatched pairs. (Distance illustration source: Wikipedia.)
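A minimal brute-force matching sketch in Python/OpenCV, corresponding to the L1/L2 nearest-neighbour matching described above. The function name and the float32 SIFT descriptor arrays are illustrative assumptions, not code from the paper.

```python
import cv2

def brute_force_match(desc_prev, desc_curr, use_l1=False):
    """Brute-force descriptor matching between frames t-1 and t.

    desc_prev, desc_curr: float32 SIFT descriptor arrays of shape (N, 128)
    and (M, 128).  Every descriptor from frame t-1 is compared against every
    descriptor from frame t, so the cost is O(N*M); the closest one wins.
    """
    norm = cv2.NORM_L1 if use_l1 else cv2.NORM_L2   # Manhattan or Euclidean
    matcher = cv2.BFMatcher(norm)
    matches = matcher.match(desc_prev, desc_curr)    # best match per query
    return sorted(matches, key=lambda m: m.distance)
```

As the slide notes, this nearest-neighbour rule alone still produces mismatched pairs, which is why an outlier-rejection step is needed afterwards.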
  7. Visual odometry(3): SIFT matching (my idea)
     1. Select a feature in the t-1 frame and calculate the L2 distance e to all features in the t frame.
     2. Our model is e, with a threshold ε on e.
     3. If e < ε, put i, j and e into a vector V.
     4. Repeat 1 to 3 a certain number of times or until all features in the t-1 frame are exhausted.
     5. Find the lowest value in V.
     6. If that e value is lower than the threshold, i and j are inliers and the others are outliers.
     (A code sketch of this procedure follows below.)
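A sketch of the thresholded variant described above, assuming the descriptors are NumPy arrays and that the threshold eps is hand-tuned; both the function name and the threshold value are placeholders rather than values from the slide.

```python
import numpy as np

def match_with_threshold(desc_prev, desc_curr, eps=250.0):
    """Thresholded nearest-neighbour matching (steps 1-6 above).

    For each feature i in frame t-1, find its closest feature j in frame t
    by L2 distance e; keep (i, j, e) only if e is below the threshold eps.
    The returned list is sorted so the lowest-distance (most trusted) pair
    comes first.  eps = 250.0 is an arbitrary placeholder.
    """
    V = []
    for i, d in enumerate(desc_prev):
        dists = np.linalg.norm(desc_curr - d, axis=1)   # L2 to all features in t
        j = int(np.argmin(dists))
        e = float(dists[j])
        if e < eps:                                     # step 3
            V.append((i, j, e))
    V.sort(key=lambda m: m[2])                          # step 5: lowest e first
    return V
```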
  8. Visual odometry(4): Finding R, T
     Prerequisite: the 2D image plane and the 3D depth data correspond to each other, so the intensity and the depth of any given pixel are both known.
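Because the intensity image and the depth data are pixel-aligned, looking up the 3D point of a feature is a direct index into the depth image followed by a pin-hole back-projection. A minimal sketch; the intrinsics fx, fy, cx, cy below are assumed example values, not calibration numbers from the paper.

```python
import numpy as np

# Assumed pin-hole intrinsics; real values would come from camera calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def pixel_to_3d(u, v, depth_image):
    """Back-project pixel (u, v) into a 3D point using the aligned depth image.

    depth_image stores metric depth (metres) per pixel.  Because the 2D image
    plane and the 3D depth data correspond pixel-for-pixel, the depth of a
    SIFT feature at (u, v) is simply depth_image[v, u].
    """
    z = float(depth_image[int(v), int(u)])
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])
```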
  9. Visual odometry(5): Finding R, T
     3D data sets p_i and p'_i, i = 1, ..., N, where N is the number of matched SIFT features. Our objective is to obtain R, T when e has the minimum value:
         e = Σ_{i=1}^{N} ‖p'_i − R p_i − T‖²    (1)
     Algorithm
     1. Find p_i and p'_i using SIFT and the matcher.
     2. Randomly select 4 associated points from the two data sets.
     3. Find the least-squares rotation and translation matrices R̂, T̂ for p_i and p'_i using SVD.
     4. Find e using Equation (1).
     5. If the value of e is below a threshold, put R̂, T̂ into a vector E.
     6. Repeat 2 to 5 a certain number of times or until all combinations of point-set selections are exhausted.
     7. Find the lowest value e, with its R̂, T̂, in E.
     (A code sketch of this sampling loop follows below.)
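A sketch of the sampling loop in steps 2-7, assuming the matched points are stored as Nx3 NumPy arrays and that estimate_rt implements the SVD step of the next two slides. The iteration count and the acceptance threshold are illustrative, not values from the paper.

```python
import numpy as np

def find_rt(P, P_prime, estimate_rt, iters=200, accept_thresh=0.05):
    """Randomly sample 4 matched point pairs, estimate (R, T), keep the best.

    P, P_prime: (N, 3) arrays of matched 3D points from frames t-1 and t.
    estimate_rt: callable returning (R, T) for two small point sets (SVD step).
    Returns the lowest-error hypothesis (e, R, T), or None if nothing passed
    the acceptance threshold.
    """
    best = None
    n = len(P)
    for _ in range(iters):
        idx = np.random.choice(n, size=4, replace=False)    # step 2
        R, T = estimate_rt(P[idx], P_prime[idx])             # step 3
        residuals = P_prime - (P @ R.T + T)                  # Equation (1)
        e = float(np.sum(residuals ** 2))                    # step 4
        if e < accept_thresh and (best is None or e < best[0]):
            best = (e, R, T)                                 # steps 5-7
    return best
```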
  10. Visual odometry(6): Finding R, T
     3. Find the least-squares rotation and translation matrices R̂, T̂ for p_i and p'_i using SVD.
     3-1. Compute the centroids p̄ and p̄' of {p_m} and {p'_m}, and the centred points q_m = p_m − p̄ and q'_m = p'_m − p̄'.
     3-2. Compute the 3×3 matrix Ω = Σ_m q_m q'_m^t.
     3-3. Find the SVD of Ω: Ω = U Λ V^t, where U and V are 3×3 orthogonal matrices and Λ = diag(λ1, λ2, λ3) is a diagonal matrix with non-negative elements.
     3-4. Calculate det(U).
  11. Visual odometry(7): Finding R, T
     3-4. Calculate det(U).
          If det(U) = 1:  R̂ = V U^t.
          If det(U) = −1: R̂ = V′ U^t, where V′ is V with the sign of its third column flipped.
          Otherwise, skip the current frame and use the next frame. This can happen when the sensor measurement has large noise.
     3-5. Calculate T̂:  T̂ = p̄' − R̂ p̄.
     Determination of the 6D pose change:
          φ = atan2(r13 cos ψ + r23 sin ψ, r11 cos ψ + r21 sin ψ)
          ψ = atan2(r22, −r11)
          θ = atan2(r32, −r12 sin ψ + r22 cos ψ)
          X, Y, Z = the x, y, z components of T̂
     where φ, ψ and θ denote roll, yaw and pitch respectively.
     (A NumPy sketch of steps 3-1 to 3-5 follows below.)
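A minimal NumPy sketch of steps 3-1 to 3-5: centroids, the 3x3 matrix Ω, its SVD, the rotation with a sign correction for the reflection case, and the translation. This follows the classic least-squares fitting of two 3D point sets; note the sign check here uses det(VU^t), a common variant of the det(U) test on the slide.

```python
import numpy as np

def estimate_rt(P, P_prime):
    """Least-squares rotation R and translation T between matched 3D point sets.

    P, P_prime: (k, 3) arrays of corresponding points (k >= 3; k = 4 in the
    sampling loop above).  Returns (R, T) such that P_prime ≈ P @ R.T + T.
    """
    # 3-1. centroids and centred points q_m, q'_m
    p_bar, p_bar_prime = P.mean(axis=0), P_prime.mean(axis=0)
    Q, Q_prime = P - p_bar, P_prime - p_bar_prime

    # 3-2. 3x3 matrix  Omega = sum_m q_m q'_m^t
    Omega = Q.T @ Q_prime

    # 3-3. SVD of Omega:  Omega = U Lambda V^t
    U, lam, Vt = np.linalg.svd(Omega)
    V = Vt.T

    # 3-4. rotation, flipping the sign of the last column of V if the
    #      decomposition yields a reflection instead of a rotation
    if np.linalg.det(V @ U.T) < 0:
        V[:, -1] *= -1
    R = V @ U.T

    # 3-5. translation  T = p_bar' - R p_bar
    T = p_bar_prime - R @ p_bar
    return R, T
```

With this in place, the sampling loop from slide 9 can be called as find_rt(P, P_prime, estimate_rt).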
  12. Experiment(1): The SR4000 and the Packbot robot with the sensor installed.
  13. Experiment(2): Specification of the SR4000.
  14. Experiment(3): Characteristics of the SR4000 and the Bumblebee2.
  15. Experiment(4): Characteristics of the SR4000 and the Bumblebee2. Images shown: original image, Bumblebee2 depth image, SR4000 depth image, and Kinect 11-bit depth image.
  16. Experiment(5): SR4000 vs Kinect. The graph was obtained by pointing the Kinect at a planar surface, fitting a plane (using RANSAC) through the point cloud, and checking the distance of the points in the point cloud to that plane. (Source: ROS Kinect.)
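A rough sketch of that evaluation: fit a plane to the point cloud of a flat wall with a simple RANSAC loop and look at the point-to-plane residuals. The iteration count and inlier tolerance are arbitrary placeholders.

```python
import numpy as np

def plane_fit_residuals(points, iters=500, tol=0.01):
    """Fit a plane to an (N, 3) point cloud with RANSAC and return the
    distance of every point to the best-fit plane (used for the noise plot).

    tol is the inlier distance in metres; tol and iters are illustrative.
    """
    best_inliers, best_model = -1, None
    n = len(points)
    for _ in range(iters):
        sample = points[np.random.choice(n, 3, replace=False)]
        # plane normal from the three sampled points
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                  # collinear sample, skip
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)            # point-to-plane distance
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    normal, d = best_model
    return np.abs(points @ normal + d)
```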
  17. Experiment(6): SR4000 vs Kinect. Specification of the modified Kinect:
     • Range: 50 cm ~ 5 m
     • Accuracy: 10 mm ~ 15 mm
     • Pixel array size: 640x480, programmable to 320x240
     • Field of view: 57°(h) x 47°(v)
     • Frequency: 30 Hz
     • Weight: 200 g
     • Dimensions: 35x25x128 mm
     • Price: 180 AUD (vs. the SR4000 at $9,095)
  18. Experiment(7): Pose error distribution
     1. Zero movement. Measurement error (φ, θ, ψ, x, y, z) = (0.1°, 0.2°, 0.2°, 7 mm, 3 mm, 6 mm), caused by sensor measurement noise (white noise); the mean and standard deviation are very good.
     2. Rotation and translation movement. Movement (θ, ψ, x, y) = (−5.9°, 5.0°, 80 mm, 130 mm); mean error (φ, θ, ψ, x, y, z) = (−0.1°, 0.2°, −0.3°, 8 mm, −2 mm, 11 mm); standard deviation of the error (φ, θ, ψ, x, y, z) = (0.5°, 0.4°, 0.4°, 13 mm, 5 mm, 11 mm).
  19. Experiment(8): Rotation error distribution. Movement (θ, ψ, x, y) = (−5.9°, 5.0°, 80 mm, 130 mm); mean error (φ, θ, ψ, x, y, z) = (−0.1°, 0.2°, −0.3°, 8 mm, −2 mm, 11 mm); standard deviation of the error (φ, θ, ψ, x, y, z) = (0.5°, 0.4°, 0.4°, 13 mm, 5 mm, 11 mm).
  20. Experiment(9): Translation error distribution. Movement (θ, ψ, x, y) = (−5.9°, 5.0°, 80 mm, 130 mm); mean error (φ, θ, ψ, x, y, z) = (−0.1°, 0.2°, −0.3°, 8 mm, −2 mm, 11 mm); standard deviation of the error (φ, θ, ψ, x, y, z) = (0.5°, 0.4°, 0.4°, 13 mm, 5 mm, 11 mm).
  21. Experiment(10): Rotation experiments and results (increasing pitch angle; increasing yaw angle).
  22. Experiment(11): Translation experiments and results (x position; y position).
  23. My thoughts
     1. Is this approach capable of real-time processing?
     2. How accurate and robust is the proposed method in dynamic sensing environments, such as on a quadrotor or on an unstructured road?
     3. The Kinect shows noisier depth measurements than the SR4000, which means det(U) may be neither 1 nor −1 and the calculation of R, T fails. How can we compensate for this frame-drop problem?
  24. Current status
     Done / future works:
     1. Retrieve the RGB image and PointCloud using the OpenNI Kinect driver in ROS.
     2. Feature detection using gpusurf from the University of Toronto, ASRL.
     3. Feature matching using GPU L2 matching with cross-checking in OpenCV.
     4. Obtain the depth information of each pixel using the PointCloud in ROS.
     5. Calculate R, T using the algorithm proposed in this paper.
     6. Compute the 6D pose of the camera every frame (a pose-integration sketch follows below).
     "The method proposed in this paper might not satisfy our requirements; in that case, identify the problems and solve them."
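For item 6, the per-frame (R, T) estimates have to be chained into a cumulative camera pose. A minimal sketch, assuming the convention that (R, T) maps points from camera frame t-1 into camera frame t; the class and its names are illustrative, not part of the paper or the ROS pipeline.

```python
import numpy as np

class PoseIntegrator:
    """Chain per-frame (R, T) estimates into a cumulative 6D camera pose.

    Assumed convention: (R, T) maps points from camera frame t-1 into camera
    frame t (p_t = R p_{t-1} + T), so the camera's own motion is the inverse
    transform.  The world frame is taken to be the first camera frame.
    """
    def __init__(self):
        self.R_wc = np.eye(3)      # rotation: current camera frame -> world
        self.t_wc = np.zeros(3)    # camera position in the world frame

    def update(self, R, T):
        R_inv, T_inv = R.T, -R.T @ T        # invert the frame-to-frame motion
        self.t_wc = self.t_wc + self.R_wc @ T_inv
        self.R_wc = self.R_wc @ R_inv
        return self.R_wc, self.t_wc
```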
  25. Thank you.
