Visual odometry presentation_without_video
Visual odometry presentation material. This presentation covers two papers: "Omnidirectional visual odometry for a planetary rover" by Peter Corke and "Visual odometry for ground vehicle applications" by David Nister.

Speaker Notes

  • This is our quadrotor. Currently we use the laser scanner to get the position. Stdev for x = 0.13 m and y = 0.09 m; the graph is 1 m × 1 m for 2D. Explain advantages and disadvantages. Let's look at the vision sensor for visual odometry.
  • Principal point (u0, v0), focal length f, elevation gain alpha. P = a
  • The different approach proposed in this paper is structure from motion. x_hat = a posteriori state estimate; x_hat_minus = a priori state estimate.
  • Basic idea: we can find corner points by looking at the intensity values within a window. Move the window in any direction and find the points that yield a large change in appearance.
  • Autonomous run: 96.09 − 95.80 = 0.29 for GPS − (Gyro+Wheel); 96.09 − 95.32 = 0.77 for GPS − (Gyro+Vis).

Visual odometry presentation_without_video Presentation Transcript

  • 1. Visual odometry by Inkyu Sa
  • 2. Motivation
  • 3. Motivation. This laser scanner is good enough to obtain the position (x, y, θ, z) of the quadrotor at 10 Hz. The data is provided by the ROS canonical scan matcher package. (Figure: x–y position plot, −0.5 m to 0.5 m on each axis.)
  • 4. Motivation (continued). Pros: relatively high accuracy; ROS device driver support.
  • 5. Motivation (continued). Cons: expensive (USD 2375); low frequency (10 Hz); only for 2D.
  • 6. Motivation. (Image from http://www.ifixit.com)
  • 7. Motivation. The Kinect 3D depth camera can provide not only 2D RGB images but also 3D depth images at 30 Hz. (Image from http://www.ifixit.com)
  • 8. Motivation (continued). Pros: reasonable price (AUD 180); 3-dimensional information; OpenNI Kinect ROS device driver and Point Cloud Library support; usable for visual odometry, object recognition, 3D SLAM, and so on.
  • 9. Motivation (continued). Cons: relatively low accuracy and a lot of noise; heavy (the original Kinect is over 500 g); requires high computational power; narrow field of view (H = 57°, V = 43°).
  • 14. Contents
  • 15. Contents
  • 19. [x, y, z]^T = [a, b, 1]^T, where a = tan{α·tan⁻¹(u/f)}·cos β and b = tan{α·tan⁻¹(v/f)}·sin β; u is the x coordinate and v the y coordinate of the image-plane point.
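To make the warp on slide 19 concrete, here is a minimal Python sketch. It assumes the azimuth β is the angle of the pixel about the principal point (u0, v0), which the slide does not state explicitly; all names are illustrative rather than the paper's own code.

```python
import numpy as np

def image_to_ground(u, v, u0, v0, f, alpha):
    """Map an omnidirectional image point (u, v) to a ground-plane point.

    A minimal sketch of the warp on slide 19: a = tan(alpha*atan(u/f))*cos(beta),
    b = tan(alpha*atan(v/f))*sin(beta). The azimuth beta is assumed here to be
    the angle of the pixel about the principal point (u0, v0).
    """
    du, dv = u - u0, v - v0            # offsets from the principal point (assumption)
    beta = np.arctan2(dv, du)          # assumed azimuth of the pixel
    a = np.tan(alpha * np.arctan(du / f)) * np.cos(beta)
    b = np.tan(alpha * np.arctan(dv / f)) * np.sin(beta)
    return np.array([a, b, 1.0])       # ground-plane point [x, y, z]
```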
  • 20. (du_hat, dv_hat) = P(u, v, {u0, v0, f, α}, {Δx, Δy, Δθ}), where P is the optical-flow function of the feature coordinate (u, v), the camera parameters {u0, v0, f, α}, and the motion (Δx, Δy, Δθ) between frames t and t+1.
  • 21. e1 = med_i[(du_i − du_hat_i)² + (dv_i − dv_hat_i)²]
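A small sketch of the robust error on slide 21, assuming the measured flow comes from feature tracking and the predicted flow from the function P above; the helper name is hypothetical.

```python
import numpy as np

def robust_flow_error(du, dv, du_hat, dv_hat):
    """Median of squared differences between measured flow (du, dv) and the
    flow (du_hat, dv_hat) predicted by the motion model P, as on slide 21.
    Using the median makes the error robust to outlier feature tracks."""
    residual = (du - du_hat) ** 2 + (dv - dv_hat) ** 2
    return np.median(residual)

# Hypothetical usage: du, dv come from feature tracking; du_hat, dv_hat from
# P(u, v, {u0, v0, f, alpha}, {dx, dy, dtheta}) evaluated at the same features.
```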
  • 22. Solar-powered robot, Hyperion, developed by CMU.
  • 23. Solar-powered robot, Hyperion, developed by CMU. The parameter estimates are somewhat noisy but agree closely with those determined using a CMU calibration method (estimates shown as "Value", calibration results as "True").
  • 24. [ẋ^R, ẏ^R]^T = R_Z(θ) [ẋ^W, ẏ^W]^T. Then integrating the robot velocity over the sample time produces the position of the robot shown in the left image: [x^R, y^R]^T = [ẋ^R, ẏ^R]^T Δt.
  • 25. Using the equation above, the observed robot-coordinate velocity can be calculated; integrating it over the sample time gives the robot position shown in the left image.
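The velocity rotation and dead-reckoning step on slides 24–25 can be sketched as follows. The sign convention of R_Z(θ) is an assumption, since the slide does not fix it, and the function names are illustrative.

```python
import numpy as np

def world_to_robot_velocity(vx_w, vy_w, theta):
    """Rotate a world-frame velocity into the robot frame with R_Z(theta).
    Assumes the standard planar rotation convention (an assumption)."""
    Rz = np.array([[np.cos(theta),  np.sin(theta)],
                   [-np.sin(theta), np.cos(theta)]])
    return Rz @ np.array([vx_w, vy_w])

def dead_reckon(velocities_w, thetas, dt):
    """Integrate the rotated velocities over the sample time dt to get a 2D path."""
    pos = np.zeros(2)
    path = [pos.copy()]
    for (vx_w, vy_w), theta in zip(velocities_w, thetas):
        pos = pos + world_to_robot_velocity(vx_w, vy_w, theta) * dt
        path.append(pos.copy())
    return np.array(path)
```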
  • 26. State: 6-DOF camera position + 3-DOF position per feature. Observation vector: the projection data for the current image. Process noise covariance: should be known. Measurement noise covariance: should be known; isotropic with a variance of 4.0 pixels. Error covariance. Kalman gain. Observation matrix.
  • 27.–28. x_hat_k = x_hat_k_minus + K_k (z_k − H·x_hat_k_minus). The measurement is the re-projection of a point: z_j = R(ρ)^T Z_j + t, where ρ and t are the camera-to-world rotation Euler angles and the translation of the camera, and Z_j is the 3D world-coordinate position of point j. This measurement is nonlinear in the estimated parameters, which motivates the use of the iterated extended Kalman filter.
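A minimal sketch of the measurement update on slides 27–28 in its plain linear form; the paper actually linearizes the nonlinear re-projection measurement and iterates (IEKF), which is not reproduced here.

```python
import numpy as np

def kf_measurement_update(x_prior, P_prior, z, H, R):
    """One Kalman measurement update: x_hat_k = x_hat_k_minus + K_k (z_k - H x_hat_k_minus).

    Plain linear form only; the iterated extended Kalman filter linearizes the
    re-projection measurement around the current estimate and repeats this step.
    """
    S = H @ P_prior @ H.T + R                 # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)  # state update
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior  # error covariance update
    return x_post, P_post
```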
  • 29. The initial state estimate distribution is obtained using a batch algorithm [1] to get the mean and covariance. This estimates the initial 6D camera positions corresponding to several images in the sequence. 29.2 m travelled, average error = 22.9 cm, maximum error = 72.7 cm.
  • 31. (Figure: eigenvalue axes λ1, λ2.)
  • 32. λ1 large, λ2 small: an edge. (Figure from Robert Collins, CSE486, Penn State.)
  • 33. λ1 small, λ2 small: a flat region. (Figure from Robert Collins, CSE486, Penn State.)
  • 34. λ1 large, λ2 large: a corner. (Figure from Robert Collins, CSE486, Penn State.)
  • 35. E(u, v) = Σ_{x,y} w(x, y)·[I(x+u, y+v) − I(x, y)]²
           ≈ Σ_{x,y} [I(x, y) + u·I_x + v·I_y − I(x, y)]²
           = Σ_{x,y} u²·I_x² + 2uv·I_x·I_y + v²·I_y²
           = [u v] (Σ_{x,y} [[I_x², I_x·I_y], [I_x·I_y, I_y²]]) [u v]^T,
        so E(u, v) ≈ [u v] M [u v]^T with M = Σ_{x,y} w(x, y) [[I_x², I_x·I_y], [I_x·I_y, I_y²]].
  • 36.–37. R = det M − k·(trace M)², where det M = λ1·λ2 and trace M = λ1 + λ2; I_x = G_x^σ ∗ I and I_y = G_y^σ ∗ I; k is an empirically determined constant in the range 0.04–0.06; M = Σ_{x,y} w(x, y) [[I_x², I_x·I_y], [I_x·I_y, I_y²]]. (Source: [3].)
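A minimal Harris-response sketch following slides 35–37, assuming a grayscale float image; the use of Gaussian derivatives and a Gaussian window is an assumption, not the paper's specific choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma_d=1.0, sigma_w=2.0, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel.

    img: grayscale float array. Gradients I_x, I_y are Gaussian derivatives
    (sigma_d); the window w(x, y) is a Gaussian of width sigma_w; k is in the
    empirical 0.04-0.06 range mentioned on the slide.
    """
    Ix = gaussian_filter(img, sigma_d, order=(0, 1))   # derivative along x
    Iy = gaussian_filter(img, sigma_d, order=(1, 0))   # derivative along y
    # Windowed products form the entries of the structure tensor M.
    Ixx = gaussian_filter(Ix * Ix, sigma_w)
    Iyy = gaussian_filter(Iy * Iy, sigma_w)
    Ixy = gaussian_filter(Ix * Iy, sigma_w)
    det_M = Ixx * Iyy - Ixy ** 2
    trace_M = Ixx + Iyy
    return det_M - k * trace_M ** 2

# Corners can then be kept where the response is large, e.g. thresholding
# followed by non-maximum suppression.
```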
  • 38. For each detected feature, search all features within a certain disparity limit (10% of the image size) in the next image. (Frames t−1 and t.)
  • 39. For each detected feature, calculate the normalized correlation using an 11×11 window: A = Σ_{x,y} I, B = Σ_{x,y} I², C = 1/√(n·B − A²), D = Σ_{x,y} I₁·I₂, with n = 121 (11 × 11). The normalized correlation between two patches is NC_{1,2} = (n·D − A₁·A₂)·C₁·C₂. Find the highest value of NC and apply a mutual consistency check: match = max(NC_{1,2}).
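The patch statistics and normalized correlation on slide 39 translate directly into code. This sketch assumes equally sized grayscale patches and leaves the disparity-limited search to the caller; the function names are illustrative.

```python
import numpy as np

def patch_stats(patch):
    """A and C for one patch, following slide 39 (n = 121 for an 11x11 window)."""
    n = patch.size
    A = patch.sum()
    B = (patch ** 2).sum()
    C = 1.0 / np.sqrt(n * B - A ** 2)
    return A, C

def normalized_correlation(p1, p2):
    """NC_{1,2} = (n*D - A1*A2) * C1 * C2 between two equally sized patches."""
    n = p1.size
    D = (p1 * p2).sum()
    A1, C1 = patch_stats(p1)
    A2, C2 = patch_stats(p2)
    return (n * D - A1 * A2) * C1 * C2

# Matching sketch: for each feature in frame t-1, take the candidate in frame t
# with the highest NC (within the disparity limit); keep the match only if the
# candidate also picks the original feature back (mutual consistency check).
```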
  • 40. Circles show the current feature locations and lines are feature tracks over the images.
  • 41. Track matched features and estimate the relative position using the 5-point algorithm. RANSAC refines the position.
  • 42. Construct 3D points from the first and last observations and estimate the scale factor.
  • 43. Track an additional number of frames and compute the camera position from the known 3D points using the 3-point algorithm. RANSAC refines the positions.
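As a rough illustration of the first step above (5-point relative pose with RANSAC), here is a sketch using OpenCV's findEssentialMat/recoverPose. It stands in for the paper's own implementation; the scale estimation, 3-point re-localization, and firewall logic are not shown.

```python
import cv2
import numpy as np

def relative_pose_5pt(pts_prev, pts_cur, K):
    """Frame-to-frame relative pose from tracked features (5-point + RANSAC).

    pts_prev, pts_cur: Nx2 float32 arrays of matched pixel coordinates;
    K: 3x3 camera intrinsics. Only the relative-pose step of the pipeline.
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_cur, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_cur, K, mask=inliers)
    return R, t, inliers  # t is only known up to scale for a monocular camera
```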
  • 45. Triangulate the observed matches into 3D points. (Figure: http://en.wikipedia.org/wiki/File:TriangulationReal.svg; the error shown is |y1 − y1′|.)
  • 46. Triangulate the observed matches into 3D points. Track features for a certain number of frames and calculate the position of the stereo rig, refining with RANSAC and the 3-point algorithm. From E{(p1, p1′), (p2, p2′), (p3, p3′)} we can obtain the R and T matrices. (Figure: points p1, p2, p3 observed at frames t−1 and t.)
  • 49. Triangulate the observed matches into 3D points. Track features for a certain number of frames, calculate the position of the stereo rig, and refine with RANSAC and the 3-point algorithm. Triangulate all new feature matches and repeat the previous step a certain number of times.
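The stereo steps (triangulating matches, then computing the pose from known 3D points with RANSAC) can be sketched with OpenCV. solvePnPRansac's P3P solver is used here in place of the paper's own 3-point algorithm, so this is only an approximation of the pipeline.

```python
import cv2
import numpy as np

def triangulate_matches(P_left, P_right, pts_left, pts_right):
    """Triangulate stereo matches into 3D points.

    P_left, P_right: 3x4 projection matrices of the stereo pair;
    pts_left, pts_right: 2xN pixel coordinates of the matches.
    """
    X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
    return (X_h[:3] / X_h[3]).T        # Nx3 Euclidean points

def pose_from_3d_2d(points_3d, points_2d, K):
    """Camera pose from known 3D points and their image observations,
    refined with RANSAC; the P3P solver stands in for the 3-point algorithm."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32), points_2d.astype(np.float32), K, None,
        flags=cv2.SOLVEPNP_P3P, reprojectionError=2.0)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec, inliers
```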
  • 51. Note: in this paper, a "firewall" is a mechanism for avoiding error propagation. The idea is to never triangulate 3D points using observations from before the most recent firewall. (Figure: projection error over time; the firewall is set at the frame where the error spikes, and only frames from that point onward are used to triangulate 3D points.)
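A toy sketch of the firewall idea, assuming a per-frame reprojection-error signal is available; the threshold and data layout are illustrative, not from the paper.

```python
def latest_firewall(reproj_errors, threshold=2.0):
    """Index of the most recent frame whose projection error exceeded the
    threshold; 0 if the error never spiked. The threshold is illustrative."""
    fw = 0
    for i, err in enumerate(reproj_errors):
        if err > threshold:
            fw = i
    return fw

def observations_for_triangulation(track, firewall_frame):
    """Keep only the observations of a feature track taken at or after the
    most recent firewall, so errors from before it cannot propagate."""
    return [(frame, uv) for frame, uv in track if frame >= firewall_frame]
```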
  • 52.–53. Image size: 720×240. Baseline: 28 cm. Horizontal FOV: 50°.
  • 54. Visual odometry's frame processing rate is around 13 Hz. No a priori knowledge of the motion. A 3D trajectory is estimated. DGPS accuracy in RG-2 mode is 2 cm.
  • 55. Red = VO, blue = DGPS. Distance travelled = 184 m; the error at the endpoint is 4.1 metres.
  • 56. Frame-to-frame error analysis of the vehicle heading estimates. The approximately zero mean suggests that the estimates are not biased.
  • 57. Official runs to report visual odometry results to DARPA. "Remote" means manual control by a person who is not a member of the VO team. Distance from the true DGPS position at the end of each run (in metres): autonomous run, GPS − (Gyro+Wheel) = 0.29 m and GPS − (Gyro+Vis) = 0.77 m; remote control, GPS − (Gyro+Wheel) = −6.78 m and GPS − (Gyro+Vis) = 3.5 m.
  • 58. Blue = DGPS, green = Gyro+VO, red = Gyro+Wheel.
  • 59. Red = VO, green = Wheel.
  • 60. Dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+IMU.
  • 61. Dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+IMU. (Because of slippage on a muddy trail.)
  • 62. Dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+IMU.
  • 63. Two plots: dark plus (blue) = DGPS, thick line (green) = VO; thin line (red) = Wheel+IMU in one plot and Wheel+VO in the other.
  • 64. Thank you