Survey on Camera Calibration

                                    Yuji Oyamada

                                 1 HVRL, Keio University
       2 Chair for Computer Aided Medical Procedure (CAMP), Technische Universität München


                                    April 3, 2012
Camera calibration




      Necessary to recover 3D metric information from image(s):
             3D reconstruction,
             object/camera localization,
             etc.
      Computes the relationship between 3D (real world) and 2D (camera image) coordinates.




For quick literature review




Several review/survey papers and book chapters
      [Salvi et al., 2002]
      [Zhang, 2005]
      [Remondino and Fraser, 2006]




For camera calibration




Important issues for camera calibration:
      Camera modeling
      Parameter estimation




A point in camera geometry


A point is expressed in several coordinate systems.
3D point in the world coordinate system
A point Xw = (Xw, Yw, Zw)^T in the world coordinate system.

3D point in the camera coordinate system
A point Xc = (Xc, Yc, Zc)^T in the camera coordinate system.

2D point in the image coordinate system
A point x = (x, y)^T in the image plane.




Projection matrix



A 3 × 4 projection matrix P denotes the relationship between Xw and x as
\[
x = P X_w, \tag{1}
\]
\[
s \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
= \begin{pmatrix}
p_{11} & p_{12} & p_{13} & p_{14} \\
p_{21} & p_{22} & p_{23} & p_{24} \\
p_{31} & p_{32} & p_{33} & p_{34}
\end{pmatrix}
\begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}. \tag{2}
\]
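As a minimal numerical sketch of Eqs. (1)–(2) (NumPy assumed; the matrix and point values below are illustrative, not taken from the text):

```python
import numpy as np

# Hypothetical 3x4 projection matrix (illustrative values only).
P = np.array([[800.0,   0.0, 320.0, 10.0],
              [  0.0, 800.0, 240.0,  5.0],
              [  0.0,   0.0,   1.0,  0.1]])

X_w = np.array([0.5, -0.2, 4.0, 1.0])   # homogeneous 3D world point (X, Y, Z, 1)

s_xy1 = P @ X_w                         # s * (x, y, 1)^T, Eq. (2)
x, y = s_xy1[:2] / s_xy1[2]             # divide by the scale s to obtain (x, y)
print(x, y)
```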




Intrinsic and extrinsic parameters

A projection matrix can be decomposed into two components, intrinsic
and extrinsic parameters, as

\[
x = P X_w = A \,[R|t]\, X_w, \tag{3}
\]
\[
s \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
= \begin{pmatrix}
\alpha_x & s & x_0 \\
0 & \alpha_y & y_0 \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
r_{11} & r_{12} & r_{13} & t_1 \\
r_{21} & r_{22} & r_{23} & t_2 \\
r_{31} & r_{32} & r_{33} & t_3
\end{pmatrix}
\begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}, \tag{4}
\]
where
      Intrinsic: the 3 × 3 calibration matrix A.
      Extrinsic: the 3 × 3 rotation matrix R and the 3 × 1 translation vector t.
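A short sketch of this composition (illustrative parameter values; the rotation is an arbitrary rotation about the y-axis):

```python
import numpy as np

# Intrinsic matrix A: focal lengths, principal point, zero skew (illustrative values).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 780.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsic parameters: rotation R and translation t (illustrative values).
theta = np.deg2rad(10.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [           0.0, 1.0,           0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([[0.1], [0.0], [2.0]])

Rt = np.hstack([R, t])   # the 3x4 matrix [R|t]
P = A @ Rt               # the 3x4 projection matrix of Eq. (3)
print(P)
```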



Extrinsic parameters



The extrinsic parameters denote the transformation between Xw and Xc as
\[
X_c = [R|t]\, X_w, \tag{5}
\]
\[
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}
= \begin{pmatrix}
r_{11} & r_{12} & r_{13} & t_1 \\
r_{21} & r_{22} & r_{23} & t_2 \\
r_{31} & r_{32} & r_{33} & t_3
\end{pmatrix}
\begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}. \tag{6}
\]




Intrinsic parameters

Projects a 3D point Xc onto the image plane as
\[
x = A \,[R|t]\, X_w = A X_c, \tag{7}
\]
\[
s \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
= \begin{pmatrix}
\alpha_x & s & x_0 \\
0 & \alpha_y & y_0 \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}, \tag{8}
\]
where
      αx and αy are the focal lengths in pixel units,
      x0 and y0 are the image center coordinates in pixel units, and
      s is the skew parameter.



4 steps projecting a 3D world point to a 2D image point



A 3 × 4 projection matrix P denotes the relationship between ${}^{W}X_w$ and ${}^{I}x$ as
\[
{}^{I}x = P \, {}^{W}X_w, \tag{9}
\]
\[
s \begin{pmatrix} {}^{I}x_d \\ {}^{I}y_d \\ 1 \end{pmatrix}
= \begin{pmatrix}
p_{11} & p_{12} & p_{13} & p_{14} \\
p_{21} & p_{22} & p_{23} & p_{24} \\
p_{31} & p_{32} & p_{33} & p_{34}
\end{pmatrix}
\begin{pmatrix} {}^{W}X_w \\ {}^{W}Y_w \\ {}^{W}Z_w \\ 1 \end{pmatrix}. \tag{10}
\]




1/4: A 3D world point to a 3D camera point


Change the world coordinate system to the camera one.
      From a 3D point ${}^{W}X_w$ in metric units w.r.t. the world coordinate system
      To a 3D point ${}^{C}X_w$ in metric units w.r.t. the camera coordinate system
\[
{}^{C}X_w = [{}^{C}R_w \,|\, {}^{C}T_w] \, {}^{W}X_w, \tag{11}
\]
\[
\begin{pmatrix} {}^{C}X_w \\ {}^{C}Y_w \\ {}^{C}Z_w \end{pmatrix}
= \begin{pmatrix}
R_{11} & R_{12} & R_{13} & t_{14} \\
R_{21} & R_{22} & R_{23} & t_{24} \\
R_{31} & R_{32} & R_{33} & t_{34}
\end{pmatrix}
\begin{pmatrix} {}^{W}X_w \\ {}^{W}Y_w \\ {}^{W}Z_w \\ 1 \end{pmatrix}. \tag{12}
\]




2/4: A 3D camera point to a 2D camera point

Change the 3D camera coordinate system to the 2D camera one.
      From a 3D point ${}^{C}X_w$ in metric units w.r.t. the camera coordinate system
      To a 2D point ${}^{C}X_u$ in metric units w.r.t. the camera coordinate system
\[
s \begin{pmatrix} {}^{C}X_u \\ {}^{C}Y_u \\ 1 \end{pmatrix}
= \begin{pmatrix}
f & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}
\begin{pmatrix} {}^{C}X_w \\ {}^{C}Y_w \\ {}^{C}Z_w \\ 1 \end{pmatrix}, \tag{13}
\]
\[
{}^{C}X_u = f \, \frac{{}^{C}X_w}{{}^{C}Z_w}, \qquad
{}^{C}Y_u = f \, \frac{{}^{C}Y_w}{{}^{C}Z_w}, \tag{14}
\]
where f denotes the focal length in metric units.


3/4: Lens distortion

Practical lenses distort the previous 3D→2D projection:
\[
{}^{C}X_u = {}^{C}X_d + \delta_x, \qquad {}^{C}Y_u = {}^{C}Y_d + \delta_y, \tag{15}
\]
where δx and δy denote the distortion along each axis.
In the case of no lens distortion,
\[
\delta_x = 0, \qquad \delta_y = 0. \tag{16}
\]
The distortion is usually decomposed into
      radial distortion δxr and δyr,
      decentering distortion δxd and δyd, and
      thin prism distortion δxp and δyp.




3/4: Lens distortion: Radial distortion




        Caused by flawed radial curvature of the lens.
        Modeled by [Tsai, 1987] 1 as



\[
\delta_{x_r} = k_1 \, {}^{C}X_d \left( {}^{C}X_d^2 + {}^{C}Y_d^2 \right), \qquad
\delta_{y_r} = k_1 \, {}^{C}Y_d \left( {}^{C}X_d^2 + {}^{C}Y_d^2 \right). \tag{17}
\]




    1 R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv cameras and lenses. IEEE Transactions on Robotics and Automation, 3(4):323–344, 1987
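A direct transcription of Eq. (17) as a small helper (the coefficient and point values in the example are illustrative):

```python
def radial_distortion(x_d, y_d, k1):
    """Radial distortion terms of Eq. (17) at a distorted metric point (x_d, y_d)."""
    r2 = x_d ** 2 + y_d ** 2
    return k1 * x_d * r2, k1 * y_d * r2

# Example with an arbitrary coefficient.
print(radial_distortion(0.01, -0.02, k1=0.1))
```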
3/4: Lens distortion: Weng’s model




Model the lens distortion from the undistorted image point C Xu instead of
the distorted one C Xd .
\[
{}^{C}X_d = {}^{C}X_u + \delta_x, \qquad {}^{C}Y_d = {}^{C}Y_u + \delta_y. \tag{18}
\]




3/4: Lens distortion: Decentering distortion


       Arises because the optical center of the lens is not perfectly aligned with the
       center of the camera.
       Modeled by [Weng et al., 1992] 2 as



\[
\delta_{x_d} = p_1 \left( 3\, {}^{C}X_u^2 + {}^{C}Y_u^2 \right) + 2 p_2 \, {}^{C}X_u {}^{C}Y_u, \tag{19}
\]
\[
\delta_{y_d} = 2 p_1 \, {}^{C}X_u {}^{C}Y_u + p_2 \left( {}^{C}X_u^2 + 3\, {}^{C}Y_u^2 \right). \tag{20}
\]




    2 J. Weng, P. Cohen, and M. Herniou. Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(10):965–980, 1992. ISSN 0162-8828. doi: 10.1109/34.159901. URL http://dx.doi.org/10.1109/34.159901
3/4: Lens distortion: Thin prism distortion



      Arises from imperfections in lens design and manufacturing as well as camera
      assembly.
      Modeled by adding a thin prism to the optical system, causing additional radial
      and tangential distortions [Weng et al., 1992].



\[
\delta_{x_p} = s_1 \left( {}^{C}X_u^2 + {}^{C}Y_u^2 \right), \qquad
\delta_{y_p} = s_2 \left( {}^{C}X_u^2 + {}^{C}Y_u^2 \right). \tag{21}
\]




3/4: Lens distortion: Merged distortion

Merging all the distortion terms gives
\[
\delta_x = (g_1 + g_3)\, {}^{C}X_u^2 + g_4 \, {}^{C}X_u {}^{C}Y_u + g_1 \, {}^{C}Y_u^2 + k_1 \, {}^{C}X_u \left( {}^{C}X_u^2 + {}^{C}Y_u^2 \right), \tag{22}
\]
\[
\delta_y = g_2 \, {}^{C}X_u^2 + g_3 \, {}^{C}X_u {}^{C}Y_u + (g_2 + g_4)\, {}^{C}Y_u^2 + k_1 \, {}^{C}Y_u \left( {}^{C}X_u^2 + {}^{C}Y_u^2 \right), \tag{23}
\]
where
\[
g_1 = s_1 + p_1, \qquad g_2 = s_2 + p_2, \qquad g_3 = 2 p_1, \qquad g_4 = 2 p_2.
\]
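A sketch of the merged model of Eqs. (22)–(23), with the g_i computed from (k1, p1, p2, s1, s2) as defined above (the coefficient values in the example are illustrative):

```python
def merged_distortion(x_u, y_u, k1, p1, p2, s1, s2):
    """Total distortion (delta_x, delta_y) of Eqs. (22)-(23) at an undistorted point."""
    g1, g2, g3, g4 = s1 + p1, s2 + p2, 2.0 * p1, 2.0 * p2
    r2 = x_u ** 2 + y_u ** 2
    dx = (g1 + g3) * x_u ** 2 + g4 * x_u * y_u + g1 * y_u ** 2 + k1 * x_u * r2
    dy = g2 * x_u ** 2 + g3 * x_u * y_u + (g2 + g4) * y_u ** 2 + k1 * y_u * r2
    return dx, dy

print(merged_distortion(0.01, 0.02, k1=0.1, p1=1e-3, p2=-1e-3, s1=5e-4, s2=5e-4))
```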




4/4: A 2D camera point to a 2D image point

Change the 2D camera coordinate system to the 2D image one.
      From a 2D point ${}^{C}X_d$ in metric units w.r.t. the camera coordinate system
      To a 2D point ${}^{I}X_d$ in pixel units w.r.t. the image coordinate system
\[
s \begin{pmatrix} {}^{I}X_d \\ {}^{I}Y_d \\ 1 \end{pmatrix}
= \begin{pmatrix}
-k_u & 0 & u_0 \\
0 & -k_v & v_0 \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} {}^{C}X_d \\ {}^{C}Y_d \\ 1 \end{pmatrix}, \tag{24}
\]
\[
{}^{I}X_d = -k_u \, {}^{C}X_d + u_0, \qquad {}^{I}Y_d = -k_v \, {}^{C}Y_d + v_0,
\]
where
      the parameters (ku, kv) convert metric measures to pixels, and
      (u0, v0) is the projection of the focal point onto the image plane.
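Putting the four steps together, here is a hedged end-to-end sketch of the forward model (all parameter values in the example are illustrative; step 3/4 applies only the radial term, using Weng's convention of Eq. (18)):

```python
import numpy as np

def project_point(X_w, R, t, f, k1, ku, kv, u0, v0):
    """Map a 3D world point to a 2D pixel point through steps 1/4 to 4/4."""
    # 1/4: world coordinates to camera coordinates.
    Xc, Yc, Zc = R @ X_w + t
    # 2/4: perspective projection onto the metric image plane.
    x_u, y_u = f * Xc / Zc, f * Yc / Zc
    # 3/4: lens distortion (radial term only, expressed at the undistorted point).
    r2 = x_u ** 2 + y_u ** 2
    x_d, y_d = x_u + k1 * x_u * r2, y_u + k1 * y_u * r2
    # 4/4: metric to pixel coordinates, Eq. (24).
    return -ku * x_d + u0, -kv * y_d + v0

R, t = np.eye(3), np.array([0.0, 0.0, 4.0])
print(project_point(np.array([0.2, -0.1, 0.5]), R, t,
                    f=0.035, k1=0.05, ku=1.0e5, kv=1.0e5, u0=320.0, v0=240.0))
```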


Camera calibration: General idea



Task
Compute camera parameters:
      Packed parameters P.
      The individual components A, R, and t.

Given
      Known 3D points {Xi |i = 1, . . . , N}.
      Observed 2D points {xi |i = 1, . . . , N}.




Camera calibration: Projective matrix estimation


Setting p34 = 1, the i-th image point xi is written as
\[
x_i = \frac{X_i p_{11} + Y_i p_{12} + Z_i p_{13} + p_{14}}{X_i p_{31} + Y_i p_{32} + Z_i p_{33} + 1}, \tag{25}
\]
\[
y_i = \frac{X_i p_{21} + Y_i p_{22} + Z_i p_{23} + p_{24}}{X_i p_{31} + Y_i p_{32} + Z_i p_{33} + 1}. \tag{26}
\]
Solve as an optimization problem w.r.t. P in one of three ways:
   1   Linear method 1 solves Ap = b.
   2   Linear method 2 solves Ap = 0.
   3   The non-linear method minimizes the reprojection error iteratively.




Camera calibration: Linear method 1

Proposed by [Hall et al., 1982] 3 .
Eq. (25) and Eq. (26) are rewritten as
\[
X_i p_{11} + Y_i p_{12} + Z_i p_{13} + p_{14} - x_i X_i p_{31} - x_i Y_i p_{32} - x_i Z_i p_{33} = x_i, \tag{27}
\]
\[
X_i p_{21} + Y_i p_{22} + Z_i p_{23} + p_{24} - y_i X_i p_{31} - y_i Y_i p_{32} - y_i Z_i p_{33} = y_i. \tag{28}
\]
Given N corresponding points {Xi} and {xi}, stack these equations:
\[
\begin{pmatrix}
X_1 & Y_1 & Z_1 & 1 & 0 & 0 & 0 & 0 & -x_1 X_1 & -x_1 Y_1 & -x_1 Z_1 \\
0 & 0 & 0 & 0 & X_1 & Y_1 & Z_1 & 1 & -y_1 X_1 & -y_1 Y_1 & -y_1 Z_1 \\
\vdots & & & & & & & & & & \vdots \\
X_N & Y_N & Z_N & 1 & 0 & 0 & 0 & 0 & -x_N X_N & -x_N Y_N & -x_N Z_N \\
0 & 0 & 0 & 0 & X_N & Y_N & Z_N & 1 & -y_N X_N & -y_N Y_N & -y_N Z_N
\end{pmatrix}
\begin{pmatrix} p_{11} \\ p_{12} \\ \vdots \\ p_{32} \\ p_{33} \end{pmatrix}
=
\begin{pmatrix} x_1 \\ y_1 \\ \vdots \\ x_N \\ y_N \end{pmatrix}
\;\;\rightarrow\;\; A p = b, \tag{29}
\]
where A ∈ R^{2N×11}, p ∈ R^{11}, and b ∈ R^{2N}.

     3 E. L. Hall, J. B. K. Tio, C. A. McPherson, and F. A. Sadjadi. Measuring curved surfaces for robot vision. Computer, 15(12):42–54, 1982. ISSN 0018-9162. doi: 10.1109/MC.1982.1653915. URL http://dx.doi.org/10.1109/MC.1982.1653915
Camera calibration: Linear method 1 cont.

Considering the energy function $E_1 = \|Ap - b\|^2$, the projection matrix is
obtained by minimizing E1 as
\[
\hat{p} = \arg\min_{p} E_1 = \arg\min_{p} (Ap - b)^T (Ap - b). \tag{30}
\]
Differentiating E1 w.r.t. p and setting it to zero,
\[
\frac{\partial E_1}{\partial p} = 0
\;\rightarrow\; A^T (A\hat{p} - b) = 0
\;\rightarrow\; A^T A \hat{p} = A^T b
\;\rightarrow\; \hat{p} = (A^T A)^{-1} A^T b. \tag{31}
\]
Hence $\hat{p}$ can be estimated if $A^T A$ is invertible.
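A compact sketch of Linear method 1 (NumPy assumed; Xs and xs are hypothetical arrays of correspondences). It builds A and b as in Eq. (29) and computes the least-squares solution, which coincides with Eq. (31) when A^T A is invertible:

```python
import numpy as np

def linear_method_1(Xs, xs):
    """Estimate the 11 projection parameters (p34 = 1) from 3D-2D correspondences.

    Xs: (N, 3) array of 3D points, xs: (N, 2) array of observed 2D points.
    """
    N = Xs.shape[0]
    A = np.zeros((2 * N, 11))
    b = np.zeros(2 * N)
    for i, ((X, Y, Z), (x, y)) in enumerate(zip(Xs, xs)):
        A[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z]
        A[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z]
        b[2 * i], b[2 * i + 1] = x, y
    p, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimizer of ||Ap - b||^2
    return np.append(p, 1.0).reshape(3, 4)      # reassemble P with p34 = 1
```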


Camera calibration: Linear method 1 cont.




This method relies heavily on the matrix $A^T A$ being invertible.
Alternatively, we can solve the homogeneous problem Ap = 0, as Linear method 2
does.




Camera calibration: Linear method 2

Eq. (25) and Eq. (26) are rewritten as
\[
X_i p_{11} + Y_i p_{12} + Z_i p_{13} + p_{14} - x_i X_i p_{31} - x_i Y_i p_{32} - x_i Z_i p_{33} - x_i p_{34} = 0, \tag{32}
\]
\[
X_i p_{21} + Y_i p_{22} + Z_i p_{23} + p_{24} - y_i X_i p_{31} - y_i Y_i p_{32} - y_i Z_i p_{33} - y_i p_{34} = 0. \tag{33}
\]
Stacking the equations for all N points,
\[
\begin{pmatrix}
X_1 & Y_1 & Z_1 & 1 & 0 & 0 & 0 & 0 & -x_1 X_1 & -x_1 Y_1 & -x_1 Z_1 & -x_1 \\
0 & 0 & 0 & 0 & X_1 & Y_1 & Z_1 & 1 & -y_1 X_1 & -y_1 Y_1 & -y_1 Z_1 & -y_1 \\
\vdots & & & & & & & & & & & \vdots \\
X_N & Y_N & Z_N & 1 & 0 & 0 & 0 & 0 & -x_N X_N & -x_N Y_N & -x_N Z_N & -x_N \\
0 & 0 & 0 & 0 & X_N & Y_N & Z_N & 1 & -y_N X_N & -y_N Y_N & -y_N Z_N & -y_N
\end{pmatrix}
\begin{pmatrix} p_{11} \\ p_{12} \\ \vdots \\ p_{33} \\ p_{34} \end{pmatrix}
= 0
\;\;\rightarrow\;\; A p = 0, \tag{34}
\]
where A ∈ R^{2N×12} is the measurement matrix and p ∈ R^{12} is the vector of
unknown projection matrix parameters.

Camera calibration: Linear method 2 cont.

To obtain a non-trivial solution of the homogeneous system Ap = 0, apply
constrained optimization.
Consider the energy function $E_2 = \|Ap\|^2$ subject to the constraint
$\|p\|^2 - 1 = 0$, which prevents p from becoming the zero vector.
With a Lagrange multiplier λ > 0, we obtain the following energy function:
\[
E_2(p, \lambda) = \|Ap\|^2 - \lambda (\|p\|^2 - 1)
= (Ap)^T (Ap) - \lambda (p^T p - 1), \tag{35}
\]
\[
\hat{p} = \arg\min_{p} E_2(p, \lambda) = \arg\min_{p} (Ap)^T (Ap) - \lambda (p^T p - 1). \tag{36}
\]




Camera calibration: Linear method 2 cont.

Differentiating E2 w.r.t. p,
\[
\frac{\partial E_2}{\partial p} = 0
\;\rightarrow\; A^T A \hat{p} - \lambda \hat{p} = 0
\;\rightarrow\; A^T A \hat{p} = \lambda \hat{p}. \tag{37}
\]
Differentiating E2 w.r.t. λ,
\[
\frac{\partial E_2}{\partial \lambda} = 0
\;\rightarrow\; \hat{p}^T \hat{p} - 1 = 0
\;\rightarrow\; \|\hat{p}\|^2 = 1. \tag{38}
\]


Camera calibration: Linear method 2 cont.



Pre-multiplying both sides of Eq. (37) by $\hat{p}^T$ gives
\[
\hat{p}^T A^T A \hat{p} = \lambda \hat{p}^T \hat{p}
\;\rightarrow\; (A\hat{p})^T (A\hat{p}) = \lambda \cdot 1
\;\rightarrow\; \|A\hat{p}\|^2 = \lambda. \tag{39}
\]
Eq. (39) is the same expression as $E_2 = \|Ap\|^2$. This means that minimizing
$\|Ap\|^2$ is equivalent to minimizing λ.




Camera calibration: Linear method 2 cont.




Differentiating the energy function E2 tells us that
      since Eq. (37) has the form Ax = λx, $\hat{p}$ must be an eigenvector of the
      matrix $A^T A$ whose corresponding eigenvalue is λ, and
      Eq. (39) shows that minimizing E2 amounts to making λ as small as possible
      (ideally 0).
Thus, $\hat{p}$ should be the eigenvector corresponding to the smallest
eigenvalue of the matrix $A^T A$.
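A sketch of Linear method 2 under the same assumptions (hypothetical Xs, xs arrays). It builds the 2N × 12 matrix of Eq. (34) and takes the right singular vector of A with the smallest singular value, which is the unit-norm eigenvector of A^T A with the smallest eigenvalue:

```python
import numpy as np

def linear_method_2(Xs, xs):
    """Estimate P (up to scale) as the null-space direction of the 2N x 12 matrix A."""
    rows = []
    for (X, Y, Z), (x, y) in zip(Xs, xs):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    p = Vt[-1]                # unit norm, satisfying the constraint of Eq. (38)
    return p.reshape(3, 4)
```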




Camera calibration: Non-linear method




Consider the following reprojection error:
\[
E_{NL} = \sum_{i=1}^{N}
\left( x_i - \frac{X_i p_{11} + Y_i p_{12} + Z_i p_{13} + p_{14}}{X_i p_{31} + Y_i p_{32} + Z_i p_{33} + p_{34}} \right)^2
+ \left( y_i - \frac{X_i p_{21} + Y_i p_{22} + Z_i p_{23} + p_{24}}{X_i p_{31} + Y_i p_{32} + Z_i p_{33} + p_{34}} \right)^2. \tag{40}
\]



      Needs a good initial guess (use a linear method's result).
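A hedged sketch of the non-linear refinement; the choice of SciPy's least_squares is mine, not the text's. It minimizes the reprojection error of Eq. (40) over the 12 entries of P, starting from a linear estimate P0:

```python
import numpy as np
from scipy.optimize import least_squares

def refine_projection(P0, Xs, xs):
    """Minimize the reprojection error of Eq. (40), starting from an initial P0."""
    Xh = np.hstack([Xs, np.ones((Xs.shape[0], 1))])   # homogeneous 3D points, (N, 4)

    def residuals(p):
        proj = Xh @ p.reshape(3, 4).T                 # rows of s * (x, y, 1)
        return (proj[:, :2] / proj[:, 2:3] - xs).ravel()

    result = least_squares(residuals, P0.ravel())     # non-linear least squares
    return result.x.reshape(3, 4)
```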




The method of Faugeras with radial distortion
Let us derive the formulation.

\[
\begin{pmatrix} {}^{C}X_u \\ {}^{C}Y_u \end{pmatrix}
= \frac{f}{{}^{C}Z_w} \begin{pmatrix} {}^{C}X_w \\ {}^{C}Y_w \end{pmatrix}, \tag{41}
\]
\[
\begin{cases}
{}^{C}X_w = r_{11} {}^{W}X_w + r_{12} {}^{W}Y_w + r_{13} {}^{W}Z_w + t_x \\
{}^{C}Y_w = r_{21} {}^{W}X_w + r_{22} {}^{W}Y_w + r_{23} {}^{W}Z_w + t_y \\
{}^{C}Z_w = r_{31} {}^{W}X_w + r_{32} {}^{W}Y_w + r_{33} {}^{W}Z_w + t_z
\end{cases} \tag{42}
\]
\[
\begin{pmatrix} {}^{C}X_u \\ {}^{C}Y_u \end{pmatrix}
= \begin{pmatrix} {}^{C}X_d + \delta_x \\ {}^{C}Y_d + \delta_y \end{pmatrix}, \tag{43}
\]
\[
\delta_{x_r} = k_1 \, {}^{C}X_d \left( {}^{C}X_d^2 + {}^{C}Y_d^2 \right), \qquad
\delta_{y_r} = k_1 \, {}^{C}Y_d \left( {}^{C}X_d^2 + {}^{C}Y_d^2 \right), \tag{44}
\]
\[
\begin{pmatrix} {}^{I}X_d \\ {}^{I}Y_d \end{pmatrix}
= \begin{pmatrix} -k_u \, {}^{C}X_d + u_0 \\ -k_v \, {}^{C}Y_d + v_0 \end{pmatrix}. \tag{45}
\]


The method of Faugeras with radial distortion cont.

Since we only know $({}^{W}X_w, {}^{W}Y_w, {}^{W}Z_w)^T$ and $({}^{I}X_d, {}^{I}Y_d)^T$,
\[
\begin{cases}
f \, \dfrac{{}^{C}X_w}{{}^{C}Z_w} - {}^{C}X_d + k_1 \, {}^{C}X_d \left( {}^{C}X_d^2 + {}^{C}Y_d^2 \right) = 0 \\[2ex]
f \, \dfrac{{}^{C}Y_w}{{}^{C}Z_w} - {}^{C}Y_d + k_1 \, {}^{C}Y_d \left( {}^{C}X_d^2 + {}^{C}Y_d^2 \right) = 0
\end{cases} \tag{46}
\]
where
\[
\begin{cases}
{}^{C}X_w = r_{11} {}^{W}X_w + r_{12} {}^{W}Y_w + r_{13} {}^{W}Z_w + t_x \\
{}^{C}Y_w = r_{21} {}^{W}X_w + r_{22} {}^{W}Y_w + r_{23} {}^{W}Z_w + t_y \\
{}^{C}Z_w = r_{31} {}^{W}X_w + r_{32} {}^{W}Y_w + r_{33} {}^{W}Z_w + t_z \\[1ex]
{}^{C}X_d = \dfrac{{}^{I}X_d - u_0}{-k_u} \\[1.5ex]
{}^{C}Y_d = \dfrac{{}^{I}Y_d - v_0}{-k_v}
\end{cases} \tag{47}
\]




The method of Faugeras with radial distortion cont.


Finally, we derive the two energy functions
\[
U(x) = f \, \frac{r_{11} {}^{W}X_w + r_{12} {}^{W}Y_w + r_{13} {}^{W}Z_w + t_x}{r_{31} {}^{W}X_w + r_{32} {}^{W}Y_w + r_{33} {}^{W}Z_w + t_z}
- \frac{{}^{I}X_d - u_0}{-k_u} + k_1 \, \frac{{}^{I}X_d - u_0}{-k_u} \, r^2, \tag{48}
\]
\[
V(x) = f \, \frac{r_{21} {}^{W}X_w + r_{22} {}^{W}Y_w + r_{23} {}^{W}Z_w + t_y}{r_{31} {}^{W}X_w + r_{32} {}^{W}Y_w + r_{33} {}^{W}Z_w + t_z}
- \frac{{}^{I}Y_d - v_0}{-k_v} + k_1 \, \frac{{}^{I}Y_d - v_0}{-k_v} \, r^2, \tag{49}
\]
where x packs all unknown parameters and
\[
r^2 = \left( \frac{{}^{I}X_d - u_0}{-k_u} \right)^2 + \left( \frac{{}^{I}Y_d - v_0}{-k_v} \right)^2. \tag{50}
\]




The method of Faugeras with radial distortion cont.

Since the rotation matrix can be described by 3 rotation angles, instead of
rij , we use 3 parameters α, β, and γ as
                                                  
\[
R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}, \tag{51}
\]
\[
R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix}, \tag{52}
\]
\[
R_z(\gamma) = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{53}
\]
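A direct transcription of Eqs. (51)–(53); the composition order R = Rz·Ry·Rx is an assumption on my part, since the text does not state it:

```python
import numpy as np

def rotation_from_angles(alpha, beta, gamma):
    """Build a rotation matrix from the three angles of Eqs. (51)-(53)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx   # one common composition order; others are possible
```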




The method of Faugeras with radial distortion cont.

Since our energy functions, Eq. (48) and Eq. (49), are non-linear, the
method of Faugeras solves the problem with iterative non-linear
optimization as
\[
G(x_k) \approx G(x_{k-1}) + J \Delta x_k, \tag{54}
\]
where $x_k$ denotes the k-th estimate of the parameters, G(x) is the function to be
minimized, and J denotes its Jacobian matrix:
\[
G(x_{k-1}) = \begin{pmatrix} U_1(x_{k-1}) \\ V_1(x_{k-1}) \\ \vdots \\ U_n(x_{k-1}) \\ V_n(x_{k-1}) \end{pmatrix}, \qquad
J = \begin{pmatrix}
\frac{\partial U_1(x_{k-1})}{\partial \alpha} & \frac{\partial U_1(x_{k-1})}{\partial \beta} & \cdots & \frac{\partial U_1(x_{k-1})}{\partial k_1} \\
\frac{\partial V_1(x_{k-1})}{\partial \alpha} & \frac{\partial V_1(x_{k-1})}{\partial \beta} & \cdots & \frac{\partial V_1(x_{k-1})}{\partial k_1} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial U_n(x_{k-1})}{\partial \alpha} & \frac{\partial U_n(x_{k-1})}{\partial \beta} & \cdots & \frac{\partial U_n(x_{k-1})}{\partial k_1} \\
\frac{\partial V_n(x_{k-1})}{\partial \alpha} & \frac{\partial V_n(x_{k-1})}{\partial \beta} & \cdots & \frac{\partial V_n(x_{k-1})}{\partial k_1}
\end{pmatrix}. \tag{55}
\]




The method of Faugeras with radial distortion cont.




Then, we compute ∆xk using J and G (xk−1 ) as

\[
\Delta x_k = -(J^T J)^{-1} J^T G(x_{k-1}), \tag{56}
\]

and update the current estimate as xk = xk−1 + ∆xk .
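A generic sketch of the update rule of Eqs. (54)–(56). Here the Jacobian is approximated by finite differences rather than the analytic derivatives of Eq. (55), and G is any user-supplied function returning the stacked residuals (U1, V1, ..., Un, Vn):

```python
import numpy as np

def gauss_newton(G, x0, n_iter=20, eps=1e-6):
    """Iterate x_k = x_{k-1} - (J^T J)^{-1} J^T G(x_{k-1}) with a numerical Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = G(x)
        # Forward-difference approximation of the Jacobian of G at x.
        J = np.column_stack([(G(x + eps * e) - g) / eps for e in np.eye(x.size)])
        dx = -np.linalg.solve(J.T @ J, J.T @ g)   # Eq. (56)
        x = x + dx                                # x_k = x_{k-1} + delta x_k
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```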




How to choose the points set




Simply speaking, 6 corresponding points are required because
      we have 12 or 11 unknowns, and
      one corresponding point gives two equations.
Are any points fine for the optimization?
The answer is NO.




Rank issues

For an n × m matrix A,
\[
\mathrm{rank}(A) + \mathrm{null}(A) = m, \tag{57}
\]
where null(A) represents the dimension of the null space of A.
Since m = 12 in our case, we have to consider three cases:
   1   rank(A) = 12: the null space has dimension 0, which means there is only
       one solution, namely p = 0.
   2   rank(A) = 11: the null space has dimension 1 and there is a unique
       solution up to a scale factor.
   3   rank(A) < 11: the null space has dimension 2 or more, and the null
       vector p can be any vector in that subspace. This means that there exist
       infinitely many solutions.
The third case happens when all the points {Xi} are located on a plane.
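The degenerate configuration can be checked numerically. The sketch below (NumPy assumed, with made-up coplanar points and arbitrary image points) builds the matrix A of Eq. (34) and shows its rank dropping below 11:

```python
import numpy as np

def design_matrix(Xs, xs):
    """The 2N x 12 matrix A of Eq. (34)."""
    rows = []
    for (X, Y, Z), (x, y) in zip(Xs, xs):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    return np.asarray(rows, dtype=float)

# Coplanar 3D points (all Z = 0) paired with arbitrary image points.
Xs = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 1, 0], [1, 2, 0]])
xs = np.array([[10, 10], [20, 11], [11, 22], [21, 23], [31, 24], [22, 35]])
print(np.linalg.matrix_rank(design_matrix(Xs, xs)))   # less than 11
```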


Camera calibration: Projective matrix decomposition




Now, we have
      an estimate of the projective matrix P, and
      a set of corresponding points {Xi} and {xi}.
The next task is to decompose P into A, R, and t.
Basically, we use constraints on the form of the matrix.




Projective matrix decomposition cont.

Consider
\[
P = A[R|t]
= \begin{pmatrix} \alpha_x & s & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix}
= \begin{pmatrix} \alpha_x & s & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} r_1^T & t_1 \\ r_2^T & t_2 \\ r_3^T & t_3 \end{pmatrix}
= \begin{pmatrix}
\alpha_x r_1^T + s\, r_2^T + x_0 r_3^T & \alpha_x t_1 + s\, t_2 + x_0 t_3 \\
\alpha_y r_2^T + y_0 r_3^T & \alpha_y t_2 + y_0 t_3 \\
r_3^T & t_3
\end{pmatrix}, \tag{58}
\]
where $r_j^T = (r_{j1}, r_{j2}, r_{j3})$ for j = 1, 2, 3.



Projective matrix decomposition cont.



If the camera parameters follow the form of Eq. (58), the following conditions
should be satisfied:
   1   $\|p_3\| = 1$,
             since $p_3 = r_3$ and $\|r_3\| = 1$; and
   2   $(p_1 \wedge p_3) \cdot (p_2 \wedge p_3) = 0$,
             since $r_i$ and $r_j$, $i \neq j$, are orthogonal.
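The decomposition itself can then be carried out row by row from Eq. (58). The sketch below assumes zero skew (s = 0), which is what condition 2 encodes, and ignores the global sign ambiguity of P; it is one standard recipe, not necessarily the exact procedure of the cited methods:

```python
import numpy as np

def decompose_projection(P):
    """Recover A, R, t from a 3x4 projection matrix, assuming zero skew."""
    P = P / np.linalg.norm(P[2, :3])           # enforce ||p3|| = 1 (condition 1)
    p1, p2, p3, q = P[0, :3], P[1, :3], P[2, :3], P[:, 3]
    x0, y0 = p1 @ p3, p2 @ p3                  # principal point
    ax = np.linalg.norm(np.cross(p1, p3))      # alpha_x
    ay = np.linalg.norm(np.cross(p2, p3))      # alpha_y
    r1 = (p1 - x0 * p3) / ax
    r2 = (p2 - y0 * p3) / ay
    R = np.vstack([r1, r2, p3])
    tz = q[2]
    t = np.array([(q[0] - x0 * tz) / ax, (q[1] - y0 * tz) / ay, tz])
    A = np.array([[ax, 0.0, x0], [0.0, ay, y0], [0.0, 0.0, 1.0]])
    return A, R, t
```

Composing A @ np.hstack([R, t[:, None]]) should reproduce P up to the overall scale.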




Camera calibration methods




   1   [Hall et al., 1982]
   2   [Faugeras and Toscani, 1986]
   3   [Salvi et al., 2002]
   4   [Tsai, 1987]
   5   [Weng et al., 1992]




Camera calibration type




      Lens distortion: Linear vs. Non-linear
      Camera parameters: Intrinsic vs. Extrinsic
      Computing camera parameters: Implicit vs. Explicit
      Calibration pattern: known 3D points vs. a reduced set of 3D points




References I

O. D. Faugeras and G. Toscani. The calibration problem for stereo. In IEEE Conference on
   Computer Vision and Pattern Recognition (CVPR), IEEE Publ.86CH2290-5, pages 15–20.
   IEEE, 1986.
E. L. Hall, J. B. K. Tio, C. A. McPherson, and F. A. Sadjadi. Measuring curved surfaces for
   robot vision. Computer, 15(12):42–54, 1982. ISSN 0018-9162. doi:
   10.1109/MC.1982.1653915. URL http://dx.doi.org/10.1109/MC.1982.1653915.
F. Remondino and C. Fraser. Digital camera calibration methods: Considerations and
   comparisons. In ISPRS Commission V Symposium, volume 6, pages 266–272, 2006.
J. Salvi, X. Armangue, and J. Batlle. A comparative review of camera calibrating methods with
   accuracy evaluation. Pattern Recognition, 35:1617–1635, 2002.
R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3d machine vision
   metrology using off-the-shelf tv cameras and lenses. IEEE Transactions on Robotics and
   Automation, 3(4):323–344, 1987.
J. Weng, P. Cohen, and M. Herniou. Camera calibration with distortion models and accuracy
   evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(10):
   965–980, 1992. ISSN 0162-8828. doi: 10.1109/34.159901. URL
   http://dx.doi.org/10.1109/34.159901.
Z. Zhang. Camera calibration. In G. Medioni and S. B. Kang, editors, Emerging Topics in
   Computer Vision, chapter 2, pages 4–43. Prentice Hall, 2005.


