Visual odometry

by Inkyu Sa
Motivation

This laser scanner is good enough to obtain the position (x, y, θ, z) of the quadrotor at 10 Hz. This data is provided by the ROS canonical scan matcher package.

[Figure: laser scan-matcher position estimates, x position (m) versus y position (m), both axes spanning −0.5 to 0.5 m]

- Relatively high accuracy.
- ROS device driver support.

- Expensive, USD 2375.
- Low frequency, 10 Hz.
- Only for 2D.
Motivation

The Kinect 3D depth camera can provide not only 2D RGB images but also 3D depth images at 30 Hz.

http://www.ifixit.com

- Reasonable price, AUD 180.
- 3-dimensional information.
- OpenNI Kinect ROS device driver and point cloud library support.
- Usable for visual odometry, object recognition, 3D SLAM, and so on.

- Relatively low accuracy and a lot of noise.
- Heavy: the original Kinect weighs over 500 g.
- Requires high computational power.
- Narrow field of view: H = 57°, V = 43°.
Contents
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} a \\ b \\ 1 \end{bmatrix}, \qquad
a = \tan\{\alpha \tan^{-1}(u/f)\}\cos\beta, \qquad
b = \tan\{\alpha \tan^{-1}(v/f)\}\sin\beta$$

u = x coordinate of the point on the image plane.
v = y coordinate of the point on the image plane.
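A minimal Python sketch of this mapping, assuming u and v are already expressed relative to the principal point (u0, v0) and that the azimuth angle β is supplied separately (its computation is not shown on the slide):

```python
import math

def ground_ray(u, v, f, alpha, beta):
    """Map an image-plane point (u, v) to the direction (a, b, 1) using the
    slide's model. f is the focal length, alpha the elevation gain, and
    beta the feature's azimuth angle (assumed given)."""
    a = math.tan(alpha * math.atan(u / f)) * math.cos(beta)
    b = math.tan(alpha * math.atan(v / f)) * math.sin(beta)
    return (a, b, 1.0)
```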
A robot motion (∆x, ∆y, ∆θ) between frames t and t+1 moves a ground point from (x, y) to (x′, y′) and its image feature from (u, v) to (u′, v′). The predicted flow of a feature is

$$(\hat{du}, \hat{dv}) = P(u, v, \{u_0, v_0, f, \alpha\}, \{\Delta x, \Delta y, \Delta\theta\})$$

where P is the optical flow function of the feature coordinate.

The robust error between measured and predicted flow is

$$e_1 = \operatorname{med}_i\left[(du_i - \hat{du}_i)^2 + (dv_i - \hat{dv}_i)^2\right]$$
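A small sketch of the robust flow error e1, assuming a hypothetical predict_flow function standing in for P and measured flows from feature tracking:

```python
import statistics

def flow_error(features, measured_flow, predict_flow, intrinsics, motion):
    """Median squared residual between measured and predicted optical flow.

    features       -- list of (u, v) feature coordinates
    measured_flow  -- list of (du, dv) measured displacements
    predict_flow   -- hypothetical stand-in for the slide's P(u, v, ...)
    intrinsics     -- (u0, v0, f, alpha)
    motion         -- (dx, dy, dtheta) candidate robot motion
    """
    residuals = []
    for (u, v), (du, dv) in zip(features, measured_flow):
        du_hat, dv_hat = predict_flow(u, v, intrinsics, motion)
        residuals.append((du - du_hat) ** 2 + (dv - dv_hat) ** 2)
    return statistics.median(residuals)
```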
Solar-powered robot, Hyperion, developed by CMU.

The parameter estimates are somewhat noisy but agree closely with those determined using a CMU calibration method (plot legend: estimates = Value, calibration method = True).
Using the following equation, the observed robot-frame velocity can be calculated:

$$\begin{bmatrix} {}^{R}\dot{x} \\ {}^{R}\dot{y} \end{bmatrix} = R_Z(\theta) \begin{bmatrix} {}^{W}\dot{x} \\ {}^{W}\dot{y} \end{bmatrix}$$

Then integrating the robot velocity over the sample time produces the position of the robot, as shown in the image on the left:

$$\begin{bmatrix} {}^{R}x \\ {}^{R}y \end{bmatrix} = \begin{bmatrix} {}^{R}\dot{x} \\ {}^{R}\dot{y} \end{bmatrix} \Delta t$$
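A minimal sketch of this step (the rotation helper and the integration loop are illustrative assumptions; dt is the sample time of the velocity estimates):

```python
import numpy as np

def rot_z(theta):
    """2D rotation corresponding to R_Z(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def integrate_position(world_velocities, headings, dt):
    """Rotate each world-frame velocity into the robot frame and
    accumulate position over the sample time dt."""
    position = np.zeros(2)
    trajectory = []
    for v_world, theta in zip(world_velocities, headings):
        v_robot = rot_z(theta) @ np.asarray(v_world)   # [R x_dot, R y_dot]
        position = position + v_robot * dt             # integrate over dt
        trajectory.append(position.copy())
    return np.array(trajectory)
```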
- State: 6 DOF of camera position + 3 DOF per feature position.
- Observation vector: the projection data for the current image.
- Process noise covariance: should be known.
- Measurement noise covariance: should be known; isotropic with variance 4.0 pixels.
- Error covariance.
- Kalman gain.
- Observation matrix.
$$\hat{x}_k = \hat{x}_k^{-} + K_k\left(z_k - H\hat{x}_k^{-}\right)$$

The measurement is the re-projection of a point:

$$z_j = \left(R(\rho)^{T} Z_j + t\right)$$

ρ and t are the camera-to-world rotation Euler angles and the translation of the camera.
Zj is the position of point j in the 3D world coordinate system.
This measurement is nonlinear in the estimated parameters, and this motivates the use of the iterated extended Kalman filter.
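A minimal numpy sketch of the update step above, written as a plain EKF-style single update (the iterated refinement and the paper's specific state layout are not reproduced):

```python
import numpy as np

def kalman_update(x_prior, P_prior, z, h, H, R):
    """One measurement update: x = x_prior + K (z - h(x_prior)).

    x_prior -- prior state estimate (n,)
    P_prior -- prior error covariance (n, n)
    z       -- measurement, e.g. stacked feature re-projections (m,)
    h       -- nonlinear measurement (re-projection) function, h(x) -> (m,)
    H       -- Jacobian of h at x_prior (m, n); the slide writes the
               innovation with the linearized term H x_prior
    R       -- measurement noise covariance, e.g. isotropic 4.0 px variance
    """
    S = H @ P_prior @ H.T + R                 # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_post = x_prior + K @ (z - h(x_prior))   # state update
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```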
The initial state estimate distribution is obtained using a batch algorithm [1] to get its mean and covariance.

This estimates the initial 6D camera positions corresponding to several images in the sequence.

29.2 m traveled, average error = 22.9 cm, maximum error = 72.7 cm.
[Figures from Robert Collins, CSE486, Penn State, illustrating the three eigenvalue cases of the gradient distribution:]

λ1 = large, λ2 = small: an edge.
λ1 = small, λ2 = small: a flat region.
λ1 = large, λ2 = large: a corner.
$$E(u, v) = \sum_{x,y} w(x, y)\,[I(x + u, y + v) - I(x, y)]^2$$

$$\approx \sum_{x,y} [I(x, y) + uI_x + vI_y - I(x, y)]^2$$

$$= \sum_{x,y} u^2 I_x^2 + 2uv I_x I_y + v^2 I_y^2$$

$$= \sum_{x,y} \begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}$$

$$= \begin{bmatrix} u & v \end{bmatrix} \left( \sum_{x,y} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \right) \begin{bmatrix} u \\ v \end{bmatrix}$$

$$E(u, v) \cong \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \sum_{x,y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$
$$R = \det M - k\,(\operatorname{trace} M)^2 = I_x^2 I_y^2 - k\,(I_x^2 + I_y^2)^2$$

$$\det M = \lambda_1 \lambda_2, \qquad \operatorname{trace} M = \lambda_1 + \lambda_2, \qquad \alpha = I_x^2, \qquad \beta = I_y^2$$

$$I_x = G_x^{\sigma} * I, \qquad I_y = G_y^{\sigma} * I$$

k is an empirically determined constant in the range 0.04–0.06.

$$M = \sum_{x,y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$

Source from [3]
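A compact sketch of the Harris response computed this way with numpy/scipy (the Gaussian window and the parameter values such as σ are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(image, sigma=1.0, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 at every pixel.
    The structure tensor M is built from Gaussian-weighted image gradients."""
    img = image.astype(np.float64)
    ix = sobel(img, axis=1)          # horizontal gradient Ix
    iy = sobel(img, axis=0)          # vertical gradient Iy

    # Windowed second-moment (structure tensor) entries, w(x, y) = Gaussian.
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)

    det_m = ixx * iyy - ixy ** 2
    trace_m = ixx + iyy
    return det_m - k * trace_m ** 2
```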
For each detected feature, search over all features within a certain disparity limit in the next image (10% of the image size).

[Figure: candidate matches between frames (t) and (t-1)]

For each detected feature, calculate the normalized correlation using an 11x11 window:

$$A = \sum_{x,y} I, \qquad B = \sum_{x,y} I^2, \qquad C = \frac{1}{\sqrt{nB - A^2}}, \qquad D = \sum_{x,y} I_1 I_2, \qquad n = 121 \;(11 \times 11)$$

The normalized correlation between two patches is

$$NC_{1,2} = (nD - A_1 A_2)\,C_1 C_2$$

Find the highest value of NC and keep only mutually consistent matches (mutual consistency check): the match is max(NC1,2).
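A minimal sketch of this patch score for two equally sized patches (11x11, so n = 121):

```python
import numpy as np

def normalized_correlation(patch1, patch2):
    """Normalized correlation NC = (n*D - A1*A2) * C1 * C2 between two
    equally sized image patches."""
    p1 = patch1.astype(np.float64).ravel()
    p2 = patch2.astype(np.float64).ravel()
    n = p1.size

    a1, a2 = p1.sum(), p2.sum()                 # A = sum(I)
    b1, b2 = (p1 ** 2).sum(), (p2 ** 2).sum()   # B = sum(I^2)
    c1 = 1.0 / np.sqrt(n * b1 - a1 ** 2)        # C = 1 / sqrt(n*B - A^2)
    c2 = 1.0 / np.sqrt(n * b2 - a2 ** 2)
    d = (p1 * p2).sum()                         # D = sum(I1 * I2)

    return (n * d - a1 * a2) * c1 * c2
```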
Circles show the current feature locations and lines are the feature tracks over the images.
Track matched features and estimate the relative position using the 5-point algorithm. RANSAC refines the position.

Construct 3D points from the first and last observations and estimate the scale factor.

Track an additional number of frames and compute the position of the camera from the known 3D points using the 3-point algorithm. RANSAC refines the positions.
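As an illustrative sketch (not the paper's implementation), the frame-to-frame relative pose from tracked features via the 5-point algorithm inside RANSAC can be obtained with OpenCV:

```python
import cv2
import numpy as np

def relative_pose(pts_prev, pts_curr, K):
    """Estimate relative camera motion between two frames from matched
    feature points using the 5-point algorithm in a RANSAC loop.

    pts_prev, pts_curr -- Nx2 arrays of matched pixel coordinates
    K                  -- 3x3 camera intrinsic matrix
    Returns (R, t, inlier_mask); t is known only up to scale.
    """
    E, mask = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=mask)
    return R, t, mask
```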
Triangulate the observed matches into 3D points.

[Figure: triangulation geometry, http://en.wikipedia.org/wiki/File:TriangulationReal.svg, annotated with abs(y1 − y1′)]
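A minimal sketch of triangulating matched points with OpenCV, assuming the two 3x4 camera projection matrices P1 and P2 are known (e.g. from the poses estimated above):

```python
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """Triangulate matched 2D points (Nx2) seen in two views with known
    3x4 projection matrices into 3D points (Nx3)."""
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))
    X = (X_h[:3] / X_h[3]).T    # convert from homogeneous coordinates
    return X
```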
Triangulate the observed matches into 3D points.

Track features for a certain number of frames, calculate the position of the stereo rig, and refine it with RANSAC and the 3-point algorithm. The reprojection error over three correspondences is

$$E\{(p_1, p_1'),\ (p_2, p_2'),\ (p_3, p_3')\}$$

and from this equation we can obtain the R, T matrix.

[Figure: the three points p1, p2, p3 observed at frames t and t−1]

Triangulate all new feature matches and repeat the previous step a certain number of times.
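As an illustrative stand-in for the 3-point + RANSAC refinement step (not the paper's code), OpenCV's P3P solver wrapped in RANSAC recovers the rig pose from the known 3D points and their current image observations:

```python
import cv2
import numpy as np

def pose_from_3d_points(object_points, image_points, K):
    """Estimate camera pose from known 3D points and their 2D observations
    using the P3P (3-point) algorithm inside RANSAC.

    object_points -- Nx3 triangulated 3D points
    image_points  -- Nx2 corresponding pixel coordinates
    K             -- 3x3 camera intrinsic matrix
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points.astype(np.float64),
        image_points.astype(np.float64),
        K, distCoeffs=None,
        flags=cv2.SOLVEPNP_P3P)
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return ok, R, tvec, inliers
```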
Note: In this paper, "firewall" refers to a mechanism for avoiding error propagation. The idea is to never triangulate 3D points using observations from before the most recent firewall.

[Figure: projection error over time; the firewall is set at a particular frame, and only observations from that frame onward are used to triangulate 3D points.]
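A tiny sketch of that bookkeeping (names are illustrative, not from the paper): only observations at or after the latest firewall frame feed triangulation.

```python
def usable_observations(track, firewall_frame):
    """Keep only the observations of a feature track taken at or after the
    most recent firewall frame; older observations are excluded from
    triangulation so that their accumulated error cannot propagate."""
    return [(frame, uv) for frame, uv in track if frame >= firewall_frame]
```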
Image size: 720x240
Baseline: 28 cm
HVOF: 50
Visual odometry's frame processing rate is around 13 Hz.
No a priori knowledge of the motion is used.
The 3D trajectory is estimated.
DGPS accuracy in RG-2 mode is 2 cm.
Red = VO, Blue = DGPS. Distance traveled = 184 m; the error at the endpoint is 4.1 meters.
Frame-to-frame error analysis of the vehicle heading estimates. The approximately zero-mean error suggests that the estimates are not biased.
Official runs reporting visual odometry results to DARPA. "Remote" means manual control by a person who is not a member of the VO team. Distance from the true DGPS position at the end of each run (unit = metres):

Autonomous run: GPS − (Gyro+Wheel) = 0.29 m, GPS − (Gyro+Vis) = 0.77 m
Remote control: GPS − (Gyro+Wheel) = −6.78 m, GPS − (Gyro+Vis) = 3.5 m
Plot legends:

Blue = DGPS, Green = Gyro+VO, Red = Gyro+Wheel.
Red = VO, Green = Wheel.
Dark plus (Blue) = DGPS, Thick line (Green) = VO, Thin line (Red) = Wheel+IMU. (The Wheel+IMU track deviates because of slippage on a muddy trail.)
Side-by-side comparison: Thin line (Red) = Wheel+IMU on the left versus Thin line (Red) = Wheel+VO on the right, with the same DGPS and VO legends.
Thank you
Editor's Notes

  1. \n
  2. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  3. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  4. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  5. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  6. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  7. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  8. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  9. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  10. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  11. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  12. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  13. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  14. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  15. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  16. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  17. \n
  18. \n
  19. \n
  20. \n
  21. \n
  22. \n
  23. \n
  24. \n
  25. \n
  26. \n
  27. \n
  28. \n
  29. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  30. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  31. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  32. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  33. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  34. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  35. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  36. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  37. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  38. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  39. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
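To pin down the notation in the note, here is one predict/update cycle of a linear Kalman filter: x_hat_minus is the a priori estimate produced by the motion prediction, and x_hat is the a posteriori estimate after the measurement update. The model and noise values below are assumptions for illustration, not the filter used in the paper.

import numpy as np

A = np.array([[1.0]])    # state transition (constant-position model, assumed)
H = np.array([[1.0]])    # measurement model: we observe the state directly
Q = np.array([[1e-3]])   # process noise covariance (assumed)
R = np.array([[1e-2]])   # measurement noise covariance (assumed)

x_hat = np.array([[0.0]])   # a posteriori estimate from the previous step
P     = np.array([[1.0]])   # its covariance

# --- predict: a priori estimate ---
x_hat_minus = A @ x_hat
P_minus     = A @ P @ A.T + Q

# --- update: a posteriori estimate ---
z = np.array([[0.12]])                                      # new measurement (illustrative)
K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)    # Kalman gain
x_hat = x_hat_minus + K @ (z - H @ x_hat_minus)
P     = (np.eye(1) - K @ H) @ P_minus

print(x_hat_minus.item(), x_hat.item())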
Basic idea: we can detect corner points by looking at the intensity values within a window. Shift the window in any direction and find the points that yield a large change in appearance.
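A minimal Harris-style sketch of this idea, assuming a synthetic test image and common default parameters (window radius and k are not taken from the slides): a corner is a pixel where the structure tensor has two strong eigenvalues, i.e. shifting the window in any direction changes the intensity a lot.

import numpy as np

img = np.zeros((60, 60), dtype=float)
img[20:40, 20:40] = 1.0           # a bright square -> strong corners at its vertices

Iy, Ix = np.gradient(img)          # image gradients (row, column directions)

def box_sum(a, r=2):
    """Sum values of a over a (2r+1)x(2r+1) window around each pixel."""
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out

# Structure-tensor entries summed over the window
Sxx, Syy, Sxy = box_sum(Ix * Ix), box_sum(Iy * Iy), box_sum(Ix * Iy)

# Harris response R = det(M) - k * trace(M)^2; large R -> corner
k = 0.04
R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

ys, xs = np.where(R > 0.5 * R.max())
print(list(zip(ys.tolist(), xs.tolist()))[:8])   # pixels near the square's corners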
Autonomous run:
GPS - (Gyro + Wheel odometry): 96.09 - 95.80 = 0.29
GPS - (Gyro + Visual odometry): 96.09 - 95.32 = 0.77