COSC 426 Lecture 5: AR Registration
COSC 426 Lecture 5 on Mathematical Principles Behind AR Registration. Given by Adrian Clark from the HIT Lab NZ at the University of Canterbury, August 8, 2012

Presentation Transcript

  • Mathematical Principles behind Registration. Adrian Clark, HIT Lab NZ, adrian.clark@hitlabnz.org
  • Registration • We wish to calculate the transformation from the camera to the object (the extrinsic parameters). In order to do this, we must find the transformation from the camera to the image plane (the camera intrinsics), and combine that with the transformation from known points on the object to their locations in the image plane.
  • Object to Image Plane • The calculation for the point on the image plane (px, py) is related to the ray passing from the object point (Px, Py, Pz) through the camera focal point and intersecting the image plane at focal length f, such that: px = f·Px/Pz, py = f·Py/Pz.
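The projection above can be sketched directly (a minimal illustration of the slide's pinhole model; the function name is ours):

```python
# Minimal pinhole projection sketch (function name is ours, not from the slides).
def project_pinhole(P, f):
    """Project 3D point P = (Px, Py, Pz) onto the image plane at focal length f."""
    Px, Py, Pz = P
    px = f * Px / Pz  # similar triangles along the ray through the focal point
    py = f * Py / Pz
    return (px, py)
```

For example, a point at (2, 1, 4) with f = 2 lands at (1.0, 0.5).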
  • Object to Image Plane • The previous formulas can be represented in matrix form as: s·[u, v, 1]^T = [[f, 0, 0], [0, f, 0], [0, 0, 1]]·[Px, Py, Pz]^T (the equation is non-linear; s is a scale factor). • The previous equations assume a perfect pinhole aperture. Instead we have a lens, which has a principal point (up, vp), the transformation from the camera origin to the image plane origin, and scale factors (sx, sy) converting pixel distances to real-world units (mm).
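In code, the matrix form might look like the following sketch, with an intrinsic matrix built from f, (sx, sy) and (up, vp) as described (the helper names are ours):

```python
import numpy as np

# Sketch of the matrix form: s * [u, v, 1]^T = K @ [Px, Py, Pz]^T.
# K folds together the focal length f, the pixel scale factors (sx, sy)
# and the principal point (up, vp); helper names are ours.
def intrinsic_matrix(f, sx, sy, up, vp):
    return np.array([[f / sx, 0.0,    up ],
                     [0.0,    f / sy, vp ],
                     [0.0,    0.0,    1.0]])

def project(K, P):
    uvs = K @ np.asarray(P, dtype=float)  # homogeneous image point; uvs[2] is the scale s
    return uvs[:2] / uvs[2]               # divide out s to get the pixel (u, v)
```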
  • CAMERA  CALIBRATION  
  • Camera Calibration • Knowing the camera intrinsics, we can calculate the transformation from an object point P to a pixel (u, v). • During the process of calibration we calculate the intrinsics. • This is done by taking multiple images of a planar chessboard where each square is a known size.
  • Camera Calibration • If we assume the z value of each point on the chessboard to be 0, then the transformation is found as s·[u, v, 1]^T = A·[r1 r2 t]·[X, Y, 1]^T, with A the intrinsic matrix and r1, r2 the first two columns of the rotation. For each point there is a homography H = A·[r1 r2 t] mapping P to (u, v):
  • Camera Calibration • Through some derivation and substitution, we find: With the homography represented as: The matrix: Multiplies with the H vector to:
  • Camera Calibration • With at least four pairs of point correspondences, we can solve A·h = 0 using Singular Value Decomposition for total least-squares minimization. From the homography of these four points, the values of (u, v), s, (sx, sy) can be estimated with a bit more maths. (Zhang, Z.: 2000, A flexible new technique for camera calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 1330–1334.)
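The A·h = 0 solve can be sketched as a small DLT-style estimator, stacking two constraint rows per correspondence and taking the SVD null vector (our own illustration, not code from the Zhang paper):

```python
import numpy as np

# DLT sketch: estimate homography H from point pairs by solving A h = 0 with
# SVD (total least squares), as on the slide. Function name is ours.
def find_homography(src, dst):
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    # The right singular vector of the smallest singular value minimizes ||A h||.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] = 1
```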
  • Camera Calibration • Once we have the camera calibration, we can go ahead and compute the extrinsic parameters (the transformation) as [r1 r2 t] = λ·A⁻¹·H (with λ = 1/‖A⁻¹h1‖ and r3 = r1 × r2). Now that we know the complete transformation, we can optimise our intrinsic parameters using the Levenberg-Marquardt algorithm on the reprojection error. We can also calculate radial distortions of the lens and remove them if we feel so inclined, and further optimise.
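The extrinsic decomposition can be sketched as follows, under the standard planar-target (Z = 0) assumption H ~ A·[r1 r2 t]; the helper name is ours:

```python
import numpy as np

# Sketch of recovering the extrinsics [R | t] from the homography H of a
# planar (Z = 0) target and the intrinsic matrix K, since H ~ K [r1 r2 t].
def extrinsics_from_homography(K, H):
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])  # scale so r1 is a unit vector
    r1 = lam * M[:, 0]
    r2 = lam * M[:, 1]
    r3 = np.cross(r1, r2)                # complete the right-handed rotation
    t  = lam * M[:, 2]
    R = np.column_stack([r1, r2, r3])
    return R, t
```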
  • Camera Calibration EXAMPLE: ARTOOLKIT
  • Camera Calibration • Camera Parameters: 1. Perspective projection matrix 2. Image distortion parameters • Two camera calibration methods: 1. Accurate 2-step method 2. Easy 1-step method
  • Easy 1-step method: calib_camera2.exe • Finds all camera parameters, including the distortion and perspective projection matrix. • Doesn't require careful setup. • Accuracy is good enough for image overlay. (Not good enough for 3D measurement.)
  • Using calib_dist2.exe • Select the dots with the mouse; the distortion parameters are then obtained by automatic line fitting. • Take pattern pictures as large as possible. • Slant the pattern in various directions at a big angle. • Repeat 4 times or more.
  • Accurate 2-step method • Uses a dot pattern and a grid pattern • 2-step method – 1) Getting distortion parameters: calib_dist.exe – 2) Getting perspective projection parameters
  • Step 2: Getting the perspective projection matrix • calib_cparam.exe (manual line fitting)
  • Image Distortion
  • Scaling  Parameter  for  Size  Adjustment  
  • Image Distortion Parameters • Relationships between ideal and observed screen coordinates
  • Implementation of the Image Distortion parameters
  • Camera Calibration EXAMPLE: OPENCV
  • Registration • We now have a reliable model of the camera's intrinsic parameters and have removed any radial distortion. Now it's just a matter of learning some points in a marker and then searching for them in each frame, calculating the extrinsic parameters as:
  • REGISTRATION  
  • Registration ARTOOLKIT
  • Question: Getting Tcm • Known Parameters – Camera Parameters: C – Image Distortion Parameters: x0, y0, f, s – Coordinates of the 4 Vertices in the Marker Coordinate Frame • Parameters Obtained by Image Processing – Coordinates of the 4 Vertices in Observed Screen Coordinates • Goal – Getting the Transformation Matrix from Marker to Camera
  • Image Processing • Thresholding • Rectangle extraction • Estimation of vertex positions • Normalized template matching → identification of markers
  • Rectangle Extraction • 1. Thresholding, labeling, feature extraction (area, position) 2. Contour extraction 3. Fitting four straight lines • Little fitting error => rectangle. This method is very simple and very fast.
  • How  to  get  TCM  
  • Estimation of the Transformation Matrix • 1st step: Geometrical calculation – rotation & translation • 2nd step: Optimization – iterative processing • Optimization of the rotation component • Optimization of the translation component
  • Optimization of Rotation Component • Observed positions of the 4 vertices • Calculated positions of the 4 vertices – positions in marker coordinates, via the estimated transformation matrix & perspective matrix – ideal screen coordinates, via the distortion function – positions in observed screen coordinates • Minimize the distance between observed and calculated positions by changing the rotation component of the estimated transformation matrix
  • Search Tcm by Minimizing Error • Optimization – iterative process
  • (2) Use of estimation accuracy • arGetTransMat() minimizes the error and returns this minimized err value. If err is still big: the marker was mis-detected, or the camera parameters came from a bad calibration.
  • How to set the initial condition for the Optimization Process • Geometrical calculation based on the coordinates of the 4 vertices – Independent in each image frame: good feature. – Unstable result (jitter occurs): bad feature. • Use of information from the previous image frame – Needs previous-frame information. – Cannot be used for the first frame. – Stable results. (This does not mean accurate results.) • ARToolKit supports both
  • Two types of initial condition: 1. Geometrical calculation based on the 4 vertices in screen coordinates: double arGetTransMat( ARMarkerInfo *marker_info, double center[2], double width, double conv[3][4] ); 2. Use of information from the previous image frame: double arGetTransMatCont( ARMarkerInfo *marker_info, double prev_conv[3][4], double center[2], double width, double conv[3][4] );
  • Use of the Inside Pattern • Why? – A square is symmetric under 90-degree rotation • 4 templates are needed for each pattern – Enables the use of multiple markers • How? – Template matching – Normalizing the shape of the inside pattern – Normalized correlation
  • Accuracy vs. Speed in pattern identification • Pattern normalization takes much time. • This is a problem when using many markers. • The normalization process includes a resolution conversion.
  • Pattern Normalization • Getting the projection parameters from the positions of the 4 vertices
  • Normalized Correlation
  • In config.h – #define AR_PATT_SAMPLE_NUM 64 – #define AR_PATT_SIZE_X 16 – #define AR_PATT_SIZE_Y 16 • Large pattern size: good identification accuracy, but slow. Small pattern size: bad identification accuracy, but fast.
  • Registration NATURAL FEATURES
  • Natural Feature Registration • There are three steps to natural feature registration: find reliable points, describe points uniquely, match points. • There are heaps of existing natural feature registration algorithms (SIFT, SURF, GLOH, Ferns…) with their own intricacies, so we will just look at a high-level approach
  • How NFR Works • 1. Find feature points in the image. 2. In order to differentiate the feature points, create a descriptor of a local window around each using a function. 3. Repeat 1 and 2 for both the source, or "marker", image and the current frame. 4. Compare all features in the marker to all features in the current frame to find the closest matches. 5. Use the matches to calculate the transformation
  • Feature Detection • Feature detection involves finding areas of an image which are unique amongst their surroundings and can easily be identified regardless of changes in viewpoint. • Good feature candidates are corners and points.
  • Point Detection Example • FAST Corner Detector
  • Feature Description • A feature point has 0 dimensions, and as such there is no way of telling feature points apart. • To resolve this, a window surrounding the point is transformed into a 1-dimensional array. • The window is examined at the scale the point was found at, and the transformation needs to allow for distortion/deformation while still being able to discriminate between every feature.
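As a toy illustration of turning a window into a 1-D descriptor (this is not SIFT or SURF, just a mean-normalized flattened patch; the function name is ours):

```python
import numpy as np

# Toy descriptor sketch (not any specific published method): flatten the window
# around a feature point into a 1-D vector and normalize it, so windows can be
# compared regardless of brightness offset and contrast/gain.
def describe(image, x, y, radius=4):
    patch = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
    d = patch.astype(float).ravel()  # 2-D window -> 1-D array
    d -= d.mean()                    # invariance to brightness offset
    n = np.linalg.norm(d)
    return d / n if n > 0 else d     # invariance to contrast/gain
```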
  • Feature Description Example • Feature Points → Feature Windows → Descriptors
  • Feature Matching • A marker is trained once all of its features and descriptors have been found. • At runtime, the same process is performed for each frame of video. • The descriptors of each feature are compared between the marker and the current frame. If the descriptors of two features are similar within a threshold, they are assumed to be a match.
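A brute-force version of this matching step might look like the following sketch (the Euclidean distance metric and the threshold value are illustrative assumptions, not from the slides):

```python
import numpy as np

# Brute-force matching sketch: for every marker descriptor, find the nearest
# frame descriptor; accept the pair only if the distance is under a threshold.
def match_descriptors(marker_desc, frame_desc, threshold=0.5):
    matches = []
    for i, d in enumerate(marker_desc):
        dists = np.linalg.norm(frame_desc - d, axis=1)  # distance to every frame feature
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            matches.append((i, j))
    return matches
```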
  • Feature  Matching  Example  
  • Registration • From here, we can optionally run RANSAC over the homography calculation: 1. Pick 4 random points, find the homography. 2. Test the homography by evaluating the other points. 3. If ‖p − H·P‖ < e, recompute the homography with all inliers; else go to 1. • From there we just take the homography, combine it with the camera intrinsics and get the transformation matrix.
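The three-step loop above can be sketched like this (our own toy version with an inlined 4-point DLT; the iteration count and error threshold are illustrative):

```python
import numpy as np

# RANSAC sketch over the homography, following the slide's loop:
# pick 4 random points, test on the rest, refit with all inliers.
def ransac_homography(src, dst, e=3.0, iters=100, seed=0):
    src = np.asarray(src, float); dst = np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    def dlt(s, d):  # tiny DLT: solve A h = 0 by SVD
        A = []
        for (x, y), (u, v) in zip(s, d):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        H = np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 3)
        return H / H[2, 2]
    def reproj(H):  # reprojection error ||p - H P|| for every correspondence
        P = np.column_stack([src, np.ones(len(src))]) @ H.T
        return np.linalg.norm(P[:, :2] / P[:, 2:3] - dst, axis=1)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)  # 1. pick 4 random points
        H = dlt(src[idx], dst[idx])
        inliers = reproj(H) < e                       # 2. test on the other points
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # 3. recompute the homography with all inliers
    return dlt(src[best_inliers], dst[best_inliers]), best_inliers
```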
  • Natural Feature Registration APPLICATIONS
  • NFR Applications • Any application using marker-based registration can also be achieved using NFR, but there are a number of additional possibilities. • As NFR does not require special markings, any existing media can be used without modification, e.g. paintings in museums, print media advertisements, etc.
  • NFR Applications • NFR is especially suited to applications where there is another "layer" of data relevant to an existing surface, e.g. three-dimensional overlays of map data, "MagicBooks", proposed building sites, manufacturing blueprints, etc.
  • OSG-OPIRA
  • ACMI  Magicbook  
  • “Travelling  New  Zealand”  GIS  Book  
  • Future Directions of Natural Feature Registration
  • Mobile NFR • Mobile Augmented Reality is becoming extremely popular due to the ubiquitous nature of devices with cameras and displays. The processing capabilities of these devices are improving, and natural feature registration is becoming increasingly feasible with the design of NFR algorithms for high performance. (Wagner, D.; Reitmayr, G.; Mulloni, A.; Drummond, T.; Schmalstieg, D., "Pose tracking from natural features on mobile phones," Mixed and Augmented Reality, 2008. ISMAR 2008. 7th IEEE/ACM International Symposium on, pp. 125-134, 15-18 Sept. 2008)
  • Non-Rigid NFR • Using deformation models, non-rigid planar surfaces can be registered and their shape recovered. Not only does this improve registration robustness, it also allows for more realistic rendering of augmented content. (J. Pilet, V. Lepetit, and P. Fua, Fast Non-Rigid Surface Detection, Registration and Realistic Augmentation, International Journal of Computer Vision, Vol. 76, Nr. 2, February 2008. M. Salzmann, J. Pilet, S. Ilic, P. Fua, Surface Deformation Models for Non-Rigid 3-D Shape Recovery, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, Nr. 8, pp. 1481-1487, August 2007.)
  • Model-Based Tracking • Using a known three-dimensional model in conjunction with edge/texture information, three-dimensional objects can be tracked regardless of viewpoint. Model-based tracking also improves robustness to self-occlusion. (Reitmayr, G.; Drummond, T.W., "Going out: robust model-based tracking for outdoor augmented reality," Mixed and Augmented Reality, 2006. ISMAR 2006. IEEE/ACM International Symposium on, pp. 109-118, 22-25 Oct. 2006. L. Vacchetti, V. Lepetit and P. Fua, Stable Real-Time 3D Tracking Using Online and Offline Information, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, Nr. 10, pp. 1385-1391, 2004.)
  • OPIRA: Optical-flow Perspective Invariant Registration Augmentation
  • What makes good NFR? • In order for a natural feature registration algorithm to work well, it must be robust to common image transformations and distortions:
  • Feature descriptor robustness • Feature descriptors are vulnerable to transformations and distortions, with the exception of translation and scale, which are handled by modifying the descriptor window to match the scale and position at which the feature was detected.
  • Feature  descriptor  robustness  
  • OPIRA • Optical-flow Perspective Invariant Registration Augmentation is an algorithm which adds perspective invariance to existing registration algorithms by tracking the object over multiple frames using optical flow, and using perspective correction to eliminate the effect of perspective distortions. (Clark, A., Green, R. and Grant, R.: 2008, Perspective correction for improved visual registration using natural features. Image and Vision Computing New Zealand, 2008. IVCNZ 2008. 23rd International Conference, pp. 1-6)
  • Effect of perspective distortion
  • OPIRA Process • Once an initial frame of registration occurs, all correct points used for registration are tracked from frame t-1 to frame t using sparse optical flow. • The transformation is calculated for frame t based on the tracked points and their marker positions as matched in frame t-1
  • OPIRA Process (Cont.) • Using the inverse of the transformation computed from the optical flow, frame t is warped to match the position and orientation of the marker. • The registration algorithm is performed on the newly aligned frame. Matches are found, and the resulting transformation is multiplied by the optical-flow transform to realign it with the original image.
  • OPIRA - Example
  • Additional Benefits • OPIRA is able to add some degree of scale and rotation invariance to existing algorithms by transforming the object to match its marker representation. • Using the undistorted image, we can perform background subtraction to isolate occluding objects for pixel-scale occlusion in Augmented Reality.
  • OPIRA  PROGRAMMING  EXAMPLES