
- 1. Visual odometry by Inkyu Sa
- 2. Motivation
- 3. Motivation: This laser scanner is good enough to obtain the position (x, y, θ, z) of the quadrotor at 10 Hz. The data comes from the ROS canonical scan matcher package. (Figure: x–y position plot, ±0.5 m on each axis.)
- 4. Motivation (cont.): Pros: relatively high accuracy; ROS device driver support.
- 5. Motivation (cont.): Cons: expensive (USD 2,375); low frequency (10 Hz); only 2D.
- 6. Motivation (image from http://www.ifixit.com)
- 7. Motivation: The Kinect 3D depth camera can provide not only 2D RGB images but also 3D depth images at 30 Hz. (image from http://www.ifixit.com)
- 8. Motivation: The Kinect 3D depth camera can provide not only 2D RGB images but also 3D depth images at 30 Hz. (image from http://www.ifixit.com) Pros: reasonable price (AUD 180); 3-dimensional information; OpenNI Kinect ROS device driver and Point Cloud Library support; usable for visual odometry, object recognition, 3D SLAM, and so on.
- 9. Motivation: The Kinect 3D depth camera can provide not only 2D RGB images but also 3D depth images at 30 Hz. (image from http://www.ifixit.com) Pros: reasonable price (AUD 180); 3-dimensional information; OpenNI Kinect ROS device driver and Point Cloud Library support; usable for visual odometry, object recognition, 3D SLAM, and so on. Cons: relatively low accuracy and much noise; heavy (the original Kinect is over 500 g); requires high computational power; narrow field of view (H = 57°, V = 43°).
- 14. Contents
- 15. Contents
- 19. [x, y, z]ᵀ = [a, b, 1]ᵀ, with a = tan{α tan⁻¹(u/f)} cos β and b = tan{α tan⁻¹(v/f)} sin β, where u is the x coordinate and v the y coordinate of the point on the image plane.
- 20. (d̂u, d̂v) = P(u, v, {u0, v0, f, α}, {Δx, Δy, Δθ}), where P is the optical flow function of the feature coordinate: a feature at (x, y), imaged at (u, v) at time t, moves to (x′, y′), imaged at (u′, v′), at time t + 1 under the robot motion (Δx, Δy, Δθ).
- 21. e1 = med_i [ (du_i − d̂u_i)² + (dv_i − d̂v_i)² ]: the median over features of the squared residual between the measured and predicted flow.
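The robust error on this slide can be sketched in a few lines. The predicted flow (d̂u, d̂v) is assumed to come from some optical-flow function P, which is not reproduced here:

```python
import numpy as np

def median_flow_error(du, dv, du_hat, dv_hat):
    """e1: median over features of the squared residual between the
    measured flow (du, dv) and the flow (du_hat, dv_hat) predicted by
    the optical-flow function P (least-median-of-squares style)."""
    du, dv = np.asarray(du, float), np.asarray(dv, float)
    du_hat, dv_hat = np.asarray(du_hat, float), np.asarray(dv_hat, float)
    r2 = (du - du_hat) ** 2 + (dv - dv_hat) ** 2
    return np.median(r2)
```

Using the median rather than the mean keeps a single grossly mismatched feature from dominating the error.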
- 22. Solar powered robot, Hyperion, developed by CMU.
- 23. Solar powered robot, Hyperion, developed by CMU. The parameter estimates are somewhat noisy but agree closely with those determined using a CMU calibration method (estimates = value, calibration method = truth).
- 25. Using the following equation, the observed robot-coordinate velocity can be calculated: [ẋ_R, ẏ_R]ᵀ = R_Z(θ) [ẋ_W, ẏ_W]ᵀ. Integrating the robot velocity over the sample time then produces the position of the robot, as shown in the left image: [x_R, y_R]ᵀ = Σ [ẋ_R, ẏ_R]ᵀ Δt.
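A minimal sketch of that transform-and-integrate step. The R_Z(θ) convention below (rows [cos, sin] and [−sin, cos], mapping world-frame velocity into the robot frame) is an assumption, not taken from the slides:

```python
import numpy as np

def world_to_robot(theta, v_world):
    """Rotate a world-frame planar velocity into the robot frame via
    R_Z(theta) (assumed convention: rows [cos, sin], [-sin, cos])."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]]) @ np.asarray(v_world, float)

def integrate_position(thetas, v_world_seq, dt):
    """Sum the rotated velocities over the sample time dt to
    accumulate the robot position, as on the slide."""
    pos = np.zeros(2)
    for theta, v in zip(thetas, v_world_seq):
        pos += world_to_robot(theta, v) * dt
    return pos
```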
- 26. EKF setup: the state holds the 6-DOF camera position plus 3 DOF per feature position; the observation vector is the projection data for the current image; the process noise covariance should be known; the measurement noise covariance should be known, isotropic with variance (4.0 pixels); the filter also maintains the error covariance, the Kalman gain, and the observation matrix.
- 27. x̂k = x̂k⁻ + Kk (zk − H x̂k⁻). The measurement is the re-projection of a point: zj = R(ρ)ᵀ Zj + t, where ρ, t are the camera-to-world rotation Euler angles and translation of the camera, and Zj is the 3D world-coordinate-system position of point j. This measurement is nonlinear in the estimated parameters, and this motivates use of the iterated extended Kalman filter.
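The update equation can be sketched for the linear(ized) case. In the iterated EKF of the slide, H would be the Jacobian of the re-projection function, re-linearized on each iteration; the shapes and noise models below are illustrative, not the paper's:

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """One measurement update: x = x^- + K (z - H x^-)."""
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```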
- 29. The initial state estimate distribution is obtained using a batch algorithm [1] to get the mean and covariance; this estimates the initial 6D camera positions corresponding to several images in the sequence. Over 29.2 m traveled, the average error was 22.9 cm and the maximum error 72.7 cm.
- 32. λ1 = large, λ2 = small: an edge. (slide credit: Robert Collins, CSE486, Penn State)
- 33. λ1 = small, λ2 = small: a flat region.
- 34. λ1 = large, λ2 = large: a corner.
- 35. E(u, v) = Σ_{x,y} w(x, y) [I(x + u, y + v) − I(x, y)]²
  ≈ Σ_{x,y} [I(x, y) + u Ix + v Iy − I(x, y)]²
  = Σ_{x,y} (u² Ix² + 2uv Ix Iy + v² Iy²)
  = [u v] (Σ_{x,y} [[Ix², Ix Iy], [Ix Iy, Iy²]]) [u; v]
  so E(u, v) ≅ [u v] M [u; v], where M = Σ_{x,y} w(x, y) [[Ix², Ix Iy], [Ix Iy, Iy²]].
- 36. R = det M − k (trace M)² = Ix² Iy² − (Ix Iy)² − k (Ix² + Iy²)², where det M = λ1 λ2 and trace M = λ1 + λ2 (with α = Ix², β = Iy²); Ix = Gx^σ ∗ I and Iy = Gy^σ ∗ I; k is an empirically determined constant in the range 0.04–0.06; M = Σ_{x,y} w(x, y) [[Ix², Ix Iy], [Ix Iy, Iy²]]. Source from [3].
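A compact NumPy sketch of this response, using central-difference gradients and a plain box window in place of the Gaussian weighting w(x, y); both are simplifications of the slide's formulation:

```python
import numpy as np

def box_sum(A, win):
    """Sum of A over each (2*win+1)^2 neighborhood (zero-padded)."""
    P = np.pad(A, win)
    H, W = A.shape
    out = np.zeros((H, W))
    for dy in range(2 * win + 1):
        for dx in range(2 * win + 1):
            out += P[dy:dy + H, dx:dx + W]
    return out

def harris_response(I, k=0.04, win=1):
    """R = det(M) - k * trace(M)^2, with M the windowed sum of the
    gradient products [[Ix^2, IxIy], [IxIy, Iy^2]]."""
    Iy, Ix = np.gradient(I.astype(float))   # axis 0 = y, axis 1 = x
    Mxx = box_sum(Ix * Ix, win)
    Myy = box_sum(Iy * Iy, win)
    Mxy = box_sum(Ix * Iy, win)
    return Mxx * Myy - Mxy ** 2 - k * (Mxx + Myy) ** 2
```

On a synthetic image containing one bright square, R is strongly positive at the square's corner, near zero on flat regions, and negative along the edges, matching the λ1/λ2 cases on slides 32–34.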
- 38. For each detected feature, search every feature within a certain disparity limit in the next image (10% of the image size). (frames t and t−1)
- 39. For each detected feature, calculate the normalized correlation using an 11×11 window: A = Σ I, B = Σ I², C = 1/√(nB − A²), D = Σ I1 I2, with n = 121 (11 × 11). The normalized correlation between two patches is NC_{1,2} = (nD − A1 A2) C1 C2. Find the highest value of NC (mutual consistency check): match = max(NC_{1,2}).
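The slide's statistics translate directly to code. This is a sketch for two already-extracted 11×11 patches; the patch extraction and the disparity-limited search are omitted:

```python
import numpy as np

def normalized_correlation(p1, p2):
    """NC between two patches, following the slide:
    A = sum I, B = sum I^2, C = 1/sqrt(n*B - A^2), D = sum I1*I2,
    NC_{1,2} = (n*D - A1*A2) * C1 * C2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    n = p1.size                              # 121 for an 11x11 window
    A1, A2 = p1.sum(), p2.sum()
    C1 = 1.0 / np.sqrt(n * (p1 ** 2).sum() - A1 ** 2)
    C2 = 1.0 / np.sqrt(n * (p2 ** 2).sum() - A2 ** 2)
    D = (p1 * p2).sum()
    return (n * D - A1 * A2) * C1 * C2
```

The score is invariant to affine brightness changes (gain and offset) of either patch, which is why normalized correlation is preferred over a raw sum of squared differences for matching.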
- 40. Circles show the current feature locations and lines are feature tracks over the images.
- 41. Track matched features and estimate the relative position using the 5-point algorithm. RANSAC refines the position.
- 42. Construct 3D points with the first and last observation and estimate the scale factor.
- 43. Track an additional number of frames and compute the position of the camera with known 3D points using the 3-point algorithm. RANSAC refines the positions.
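The "RANSAC refines position" steps share one loop structure, sketched below. The line model and its helpers are stand-ins for the 5-point and 3-point minimal solvers, which are too long for a short example:

```python
import random

def fit_line(pts):
    """Least-squares fit of y = a*x + b (stand-in minimal solver)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def line_error(model, p):
    a, b = model
    return abs(p[1] - (a * p[0] + b))

def ransac(data, fit, error, n_sample, threshold, iters=200, seed=0):
    """Fit on a minimal random sample, count inliers, keep the best
    hypothesis, then refit the model on all of its inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        model = fit(rng.sample(data, n_sample))
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit(best_inliers), best_inliers
```

In the pipeline above, `fit` would be the 5-point (or 3-point) solver and `error` a reprojection residual; the loop itself is unchanged.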
- 45. Triangulate the observed matches into 3D points. (image: http://en.wikipedia.org/wiki/File:TriangulationReal.svg) Disparity = abs(y1 − y1′).
- 46. Triangulate the observed matches into 3D points. Track features for a certain number of frames and calculate the position of the stereo rig, refined with RANSAC and the 3-point algorithm. From E{(p1, p1′), (p2, p2′), (p3, p3′)}, with points p1, p2, p3 at time t−1 and p1′, p2′, p3′ at time t, we can get the R, T matrix.
- 49. Triangulate the observed matches into 3D points. Track features for a certain number of frames and calculate the position of the stereo rig, refining with RANSAC and the 3-point algorithm. Triangulate all new feature matches and repeat the previous step a certain number of times.
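The triangulation step can be sketched with the standard linear (DLT) method, assuming known 3×4 projection matrices for the two views; the paper's exact formulation is not reproduced here:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: stack the constraints x*P[2] - P[0] and
    y*P[2] - P[1] from both views; the SVD null vector of the stack is
    the homogeneous 3D point. x1, x2 are normalized image coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right singular vector of smallest sigma
    return X[:3] / X[3]
```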
- 51. Note: In this paper, firewall refers to a tool for avoiding error propagation. The idea is: do not triangulate 3D points using observations beyond the most recent firewall. (Plot: projection error over time; set the firewall at a given frame, then use only frames from that frame onward to triangulate 3D points.)
- 52. Image size: 720×240; baseline: 28 cm; HFOV: 50°.
- 54. Visual odometry's frame processing rate is around 13 Hz. No a priori knowledge of the motion is used. The 3D trajectory is estimated. DGPS accuracy in RG-2 mode is 2 cm.
- 55. Red = VO, blue = DGPS; distance traveled = 184 m; the error at the endpoint is 4.1 meters.
- 56. Frame-to-frame error analysis of the vehicle heading estimates. The approximately zero mean suggests that the estimates are not biased.
- 57. Official runs to report visual odometry results to DARPA. "Remote" means manual control by a person who is not a member of the VO team. Distance from the true DGPS position at the end of each run (unit: metres):
  Autonomous run: GPS − (Gyro+Wheel) = 0.29 m; GPS − (Gyro+Vis) = 0.77 m.
  Remote control: GPS − (Gyro+Wheel) = −6.78 m; GPS − (Gyro+Vis) = 3.5 m.
- 58. Blue = DGPS, green = Gyro+VO, red = Gyro+Wheel.
- 59. Red = VO, green = Wheel.
- 60. Dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+IMU.
- 61. Dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+IMU. The deviation is because of slippage on a muddy trail.
- 62. Dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+IMU.
- 63. Left: dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+IMU. Right: dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+VO.
- 64. Thank you
