PIRF-Nav:
An Online Incremental Appearance-based
  Localization and Mapping in
     Dynamic Environments
                  Aram Kawewong

                    Hasegawa Laboratory
     Department of Computational Intelligence and Systems Science
      Interdisciplinary Graduate School of Science and Engineering
                      Tokyo Institute of Technology




                                 1
Introduction to SLAM
 Simultaneous Localization and Mapping, or SLAM, is a
  navigation capability needed by every kind of mobile
  robot
 In an unfamiliar environment, the robot must be able
  to perform two important tasks simultaneously
   Mapping a new place if the place has never been visited
    before
   Localizing itself to a mapped place if the place has
    been visited before




                                2
Appearance-based Localization and
      Mapping (FAB-MAP)




                3
Why Visual SLAM? What Are the
            Challenges?
 Why don't we just use GPS?
   GPS is not always reliable in a crowded city centre
   GPS can only locate the coordinates/position of the agent, not
    the corresponding scene; how can the robot answer the
    question "look at this picture and tell me where it is?" or "have
    you ever visited this place before? Can you describe the
    nearby places?"
 No false positives (false negatives are allowed)
   If the robot is not confident, it should answer "this is a
    new place". If the robot answers "this place is the same
    place as ...", it must be 100% correct.
   100% precision (all answers must be correct)
                                  4
Appearance-based Localization and
     Mapping VS Place Recognition
                  Place Recognition            Localization and Mapping (Robotics)
                  (Computer Vision)

Input Images      All test images are known    Every input image is a test image; it
                  to come from somewhere       may come from somewhere in the map
                  in the map                   or from a previously unseen place

Environment       Closed environment           Open environment

Precision         Precision-1 is not the       Precision-1 is the first-priority
                  main concern if the recall   concern; one false positive may lead
                  rate is reasonably high      to a serious navigation error

                                      5
Appearance-based SLAM's Common
           Objectives
 100% precision with very high recall rates
 Can run incrementally in an online manner
 Life-long operation
   Low computation time
   Low memory consumption
 Suitable for navigating large-scale environments
 Can solve 2 main problems:
   Dynamical changes
   Perceptual aliasing (different places that look similar)
 Note:
   Coordinate-based localization is not required here

                                   6
Visual SLAM’s Related Works


1.    FAB-MAP (Cummins & Newman, IJRR'08)
      Considering efficiency at 100% precision, the recall rate
       obtained by FAB-MAP (the state-of-the-art method) is still
       not very high.
      An offline dictionary generation process is necessary.
2. Fast Incremental Bag-of-Words (Angeli et al., T-RO'08)
      The system can run incrementally; no offline dictionary
       generation process is needed.
      Accuracy is reported to be less than or equal to that of FAB-MAP
      Consumes much more memory than FAB-MAP
                                                                         7
What Do We Want? PIRF-Nav's
                Advantages
                                     FAB-MAP        Inc. BoW       PIRF-Nav
                                     (IJRR'08)      (T-RO'08)      (proposed)
Runs incrementally, without an
offline dictionary generation           No             Yes            Yes
process
Memory consumption                      Low            High         Moderate
Runs in real time                       Yes            Yes            Yes
Robustness against dynamical         Moderate          Low            High
changes*                             (~40% on       (~20% on       (~85% on
                                    City Centre)   City Centre)   City Centre)

                  * Recall rate measured at 100% precision
                                           8
Basic Idea & Concept of PIRF-Nav


  Making use of PIRFs, we can detect good landmarks for
   each individual place
  The extracted PIRFs should be sufficiently informative to
   represent the place, so the system does not need a
   pre-generated visual vocabulary
  The number of PIRFs is sufficiently small for use in
   real-time applications
  Because PIRFs are robust against dynamical changes of
   scenes, the PIRF-based visual SLAM (called PIRF-Nav)
   becomes an efficient online incremental visual SLAM
                                                                 9
Basic Idea of PIRFs (proposed)


 Outdoor scenes generally include distant objects
  whose appearances are robust against changes in
  camera position
 Averaging the "slow-moving" local features that
  capture such objects gives us fewer but more
  robust features




                           10
PIRF Extraction Algorithm
                                    Image Sequence

   [Diagram: local features are matched across consecutive images,
    producing a sequence of matching-index vectors; a sliding
    window (w = 3) scans these vectors to find features that match
    throughout the window.]
                                         11
Briefly on PIRF's Performance
 Exp. 1: Scenes from Suzukakedai
     Training (640x428): 580 images     Testing (640x428): 489 images

 Exp. 2: Scenes from O-okayama
     Training (640x428): 450 images     Testing (640x428): 493 images   12
PIRF’s Performance
    Recognition Rate of Suzukakedai and O-okayama

   [Bar chart: PIRF achieves the highest recognition rates, 93.46%
    on Suzukakedai and 77.48% on O-okayama; the compared
    descriptors range from 18.23% to 45.75%.]
                                           13
Even with These Strong Changes,
     PIRF Still Works Well!




      Highly Dynamic Changes in Scenes




      Illumination Changes in Scenes
                          14
PIRF (City Centre Dataset)

   [Matched-image pairs comparing the Original Descriptors (SIFT)
    with the Position-invariant Robust Feature (PIRF) (proposed)]

                                                         15
PIRF-Nav Processing Diagram (prop.)

Overall Processing Diagram
 Step 1: Perform simple feature
  matching. The score is
  calculated based on the popular
  term frequency-inverted
  document frequency (tf-idf)
  weighting
 Steps 2-3: Adapt the score by
  considering the neighbors, then
  normalize it
 Step 4: Perform a second
  integration over the score
  space for re-localization
                                      16
Notation Definition


 At time t, a map of the environment is a collection of
  n_t discrete and disjoint locations
                      A = {L_1, ..., L_{n_t}}
 Each of these locations L_i, which has been created
  from a past image I_i, has an associated model M_i
 The model M_i is a set of PIRFs



                                   17
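The notation above maps naturally onto a small data structure. A hypothetical Python sketch (the names `Location` and `AppearanceMap` are illustrative, not from the paper); each location L_i carries its model M_i as a list of PIRF descriptors, and the map grows incrementally as new places are added:

```python
from dataclasses import dataclass, field


@dataclass
class Location:
    """One discrete, disjoint location L_i in the map."""
    index: int
    pirfs: list = field(default_factory=list)  # model M_i: the set of PIRFs


class AppearanceMap:
    """Map A = {L_1, ..., L_{n_t}}, grown incrementally online."""

    def __init__(self):
        self.locations = []

    def add(self, pirfs):
        """Create a new location from the given PIRF set and map it."""
        loc = Location(index=len(self.locations) + 1, pirfs=pirfs)
        self.locations.append(loc)
        return loc
```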
STEP 1: Simple Feature Matching


 The current model M_t is compared to each of the mapped
  models M = {M_0, ..., M_{n_t}} using standard feature
  matching with distance threshold T_2
 Each match outputs a similarity score s
 M_0 is the model of location L_0, a virtual location for the
  event "no loop closure occurred at time t"
 Based on the obtained scores s, the system proceeds to the
  next step if argmax(s) != 0

                            18
STEP 1: Simple Feature Matching
            (Continued)

 The similarity score s is calculated by considering the term
  frequency - inverted document frequency (tf-idf) weighting
  (Sivic & Zisserman, ICCV'03):

                tf-idf = (n_wi / n_i) * log(N / n_w)

 n_wi is the number of occurrences of visual word w in M_i
 n_i is the total number of visual words in M_i
 n_w is the number of models containing word w
 N is the total number of existing models

                                19
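As a concrete check of the weighting, a direct Python transcription of the tf-idf formula above (the function name is illustrative):

```python
import math


def tf_idf(n_wi, n_i, n_w, N):
    """tf-idf weight of visual word w in model M_i:
    (n_wi / n_i) * log(N / n_w), following Sivic & Zisserman."""
    return (n_wi / n_i) * math.log(N / n_w)
```

A word that occurs often in one model but in few models overall gets a high weight; a word present in every model gets log(N/N) = 0.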
STEP 1: Simple Feature Matching
            (Continued)

 To be used with PIRFs, the function is converted to

                s_i = sum_{k=1}^{m_i} log(N / n_k)

 n_k is the number of models M_j, 0 <= j <= n_t, j != i,
  containing PIRFs that match the kth PIRF of the input
  model M_t
 m_i is the number of matched PIRFs between the input and
  the query model
 The system proceeds to STEP 2 if and only if the maximum
  score does not belong to M_0 and is greater than threshold T_1
                                20
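The converted score can likewise be computed directly. A short Python sketch of s_i under the definitions above, taking the per-match counts n_k as a list:

```python
import math


def pirf_score(n_k, N):
    """s_i = sum_{k=1}^{m_i} log(N / n_k), where n_k[k] counts the
    models containing a PIRF matching the k-th matched PIRF of the
    input model.  An empty match list gives a score of 0."""
    return sum(math.log(N / nk) for nk in n_k)
```

Rare PIRFs (small n_k) contribute large terms, so distinctive landmarks dominate the score.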
STEP 2: Considering Neighbors


 Accepting or rejecting a loop-closure detection based on the
  score of only a single image is sensitive to noise
 This can be handled by considering the similarity scores of
  neighboring image models:

              beta_i = sum_{k=i-l}^{i+l} s_k * p_t(i, k)

 The term p_t(i, k) is the transition probability, generated
  from a Gaussian on the distance in time between i and k
 l stands for the number of neighbors examined
                                  21
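A sketch of this neighbor smoothing, assuming the Gaussian transition probability has the form p_t(i, k) = exp(-(i - k)^2 / (2 * sigma^2)); the slides do not give sigma, so it is a free parameter here:

```python
import math


def smooth_scores(s, l=2, sigma=1.0):
    """beta_i = sum_{k=i-l}^{i+l} s_k * p_t(i, k), with p_t an
    (assumed) unnormalized Gaussian on |i - k|.  Window ends are
    clipped at the sequence boundaries."""
    beta = []
    for i in range(len(s)):
        acc = 0.0
        for k in range(max(0, i - l), min(len(s), i + l + 1)):
            acc += s[k] * math.exp(-((i - k) ** 2) / (2 * sigma ** 2))
        beta.append(acc)
    return beta
```

An isolated score spike is diluted, while a run of high scores at neighboring models reinforces itself, which is exactly the noise robustness this step is after.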
STEP 3: Normalizing the Score
                    Done by considering the
                     standard deviation and
                     mean value over all scores
                    l_n indicates the number of
                     neighbours taken into
                     consideration
                    The beta-scores are converted
                     into normalized scores
                     according to

                       z_i = (beta_i - mu) / sigma   if beta_i >= mu
                       z_i = 1                       otherwise

                     where mu and sigma are the mean and
                     standard deviation over all beta-scores
                                                          22
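A Python sketch of the normalization, assuming mu and sigma are simply the mean and (population) standard deviation computed over all beta-scores:

```python
import statistics


def normalize(beta):
    """z_i = (beta_i - mu) / sigma if beta_i >= mu, else 1, with mu
    and sigma the mean and standard deviation over all beta-scores.
    Degenerate case (all scores equal, sigma = 0) falls back to 1."""
    mu = statistics.mean(beta)
    sd = statistics.pstdev(beta)
    return [(b - mu) / sd if b >= mu and sd > 0 else 1.0 for b in beta]
```

Only scores clearly above the mean stand out after normalization; everything at or below it is flattened to a constant.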
STEP 4: Re-localization


 The obtained location L_j is accepted as a loop closure
  if z_j - sigma > T_2
 Ideally, the neighboring model scores of location L_j
  should decrease symmetrically from the peak score.
  However, scenes in dynamic environments usually
  contain moving objects that frequently cause occlusions,
  so the scores around an assigned location may not be
  symmetrical.

                           23
Step 4: Relocalization (Sample
          Problems)
                    The location assigned in
                    Step 3 does not have a
                    symmetrical score



                    Performing one more
                    summation can shift the
                    location to the correct one


               24
STEP 4: Re-Localization


 Therefore, we perform a second summation over the
  neighbouring model scores to achieve more accurate
  localization:

              z'_j = sum_{k=j-l}^{j+l} z_k * p_t(j, k)

 The obtained normalized scores over all possible models
  determine the most likely loop-closure location L_c,
  where c = argmax_j z'_j
                                  25
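Combining the second summation with the argmax gives the re-localization step. An illustrative sketch, reusing the same assumed Gaussian p_t as in Step 2:

```python
import math


def relocalize(z, l=2, sigma=1.0):
    """Second summation z'_j = sum_{k=j-l}^{j+l} z_k * p_t(j, k);
    returns the index c = argmax_j z'_j of the most likely
    loop-closure location (illustrative sketch)."""
    z_prime = []
    for j in range(len(z)):
        acc = 0.0
        for k in range(max(0, j - l), min(len(z), j + l + 1)):
            acc += z[k] * math.exp(-((j - k) ** 2) / (2 * sigma ** 2))
        z_prime.append(acc)
    return max(range(len(z_prime)), key=lambda j: z_prime[j])
```

When the scores fall off asymmetrically around the raw peak, the summed score can peak at a neighboring index instead, which is precisely the correction described on the previous slide.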
Results & Experiments: DATASETS


 Three datasets were used
   City Centre (2474 images, 640 x 480)
       Taken to address the problem of dynamical changes of
       scenes in a city centre.
   New College (2146 images, 640 x 480)
       Taken to address the problem of perceptual aliasing: the
       robot visited the same place many times, and many different
       places look very similar.
   Suzukakedai (1079 images, 1920 x 1080)
       Taken with a video camera fitted with an omnidirectional
       lens, to address the problem of highly dynamical changes
       while a special event (an open-campus event) was being held.



                                     26
Results & Experiments: DATASETS


  City Centre




                 27
Results & Experiments: DATASETS


  New College




                 28
Results & Experiments: DATASETS


  Suzukakedai




                 29
Results & Experiments: BASELINE


  Among many visual SLAM methods, FAB-MAP (Cummins &
   Newman, IJRR'08) and the fast incremental BoW method
   of Angeli et al. (T-RO'08) are considered state-of-the-art
  Both are based on the bag-of-words scheme
  Each offers different advantages
    FAB-MAP: high accuracy, but requires offline dictionary generation
    Angeli et al.: accuracy lower than or equal to FAB-MAP's, but
     with online incremental dictionary generation
  PIRF-Nav must offer higher accuracy than FAB-MAP while
   being an online incremental method like that of Angeli et al.

                                30
Evaluation on Appearance-based
      Loop-closure Detection Problem

  Binary classification (new place / old place):

      Precision_A = Correct loop-closures / All loop-closures
      Recall_A    = Correct loop-closures / All labeled loop-closures

  Image retrieval (retrieve the most likely place for loop-closure):

      Precision_B = Correctly retrieved images / All retrieved images
      Recall_B    = Correctly retrieved images / All labeled images

  [Flow diagram: input image -> "loop-closing?" -> if NO, add a new
   place to the map; if YES, find the loop-closure place and output
   the loop-closure location]
                                                 31
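The retrieval-side metrics reduce to two ratios. A trivial helper mirroring the Precision-B / Recall-B definitions above (the function name and edge-case conventions are illustrative):

```python
def precision_recall(correct, retrieved, labeled):
    """Precision = correct / retrieved, Recall = correct / labeled,
    matching the Precision-B / Recall-B definitions.  With nothing
    retrieved, precision is conventionally 1.0 (no wrong answers)."""
    precision = correct / retrieved if retrieved else 1.0
    recall = correct / labeled if labeled else 0.0
    return precision, recall
```

At 100% precision, every retrieved loop-closure image is correct, and the recall rate then measures how many of the labeled loop closures the system actually found.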
Evaluation on Appearance-based
Loop-closure Detection Problem


 Ideally, performance should be evaluated with two curves:
   The Precision A - Recall A curve
   The Precision B - Recall B curve
 However, for a compact representation, most works in
  visual SLAM use the Precision B - Recall B curve, because
   Binary classification is currently not particularly problematic
   The important challenge lies in the performance of image
    retrieval

                                   32
Evaluation on Appearance-based
Loop-closure Detection Problem
          (City Centre)
                         Precision A - Recall A:
                    Focusing only on the problem of
                    saying "YES/NO" to loop-closure
                      detection is currently trivial




                      Precision B - Recall B:
                 Instead, given that the precision of
                 the "YES/NO" loop-closure decision
                 is 100%, it is much more interesting
                 to see how accurately the system can
                 retrieve the corresponding image
               33
Result 1: City Centre
                         Vehicle Trajectory
                         Loop Closure Detection




PIRF-Nav (100% Precision) (proposed)                   FAB-MAP (100% Precision)
                                                  34
Result 1: City Centre (Precision-Recall
                 Curve)




                   35
Result 1: City Centre
            (Computation Time)




                                                                         36
*It is noteworthy that all programs of PIRF-Nav were written in MATLAB
while FAB-MAP was written in C.
Result 2: New College
      Vehicle Trajectory
      Loop Closure Detection




PIRF-Nav (100% Precision) (proposed)     FAB-MAP (100% Precision)
                                         37
Result 2: New College (Precision-
          Recall Curve)




                 38
Result 3: Suzukakedai




Vehicle Trajectory
Loop Closure Detection


                               39
              PIRF-Nav (100% Precision)
Result 3: Suzukakedai (Precision-
           Recall Curve)




                                    40
Result 4: Combined Datasets
              (Precision-Recall Curve)




                                                                                                    41
Note: We did not test FAB-MAP in this experiment because FAB-MAP completely failed on the Suzukakedai
dataset. The results on City Centre and New College also clearly imply that FAB-MAP would not achieve
better accuracy in this experiment.
Sample Matched Images (Dynamical
  Changes in Major Part of Scene)




                42
Sample Matched Images (Different
         View-Points)




               43
Conclusions


 PIRF-Nav outperforms FAB-MAP in terms of accuracy, with
  more than 80% recall at 100% precision on all datasets
  provided by the authors
 PIRF-Nav offers the ability to run online and incrementally
  in very different environments
 Although PIRF-Nav is slower than FAB-MAP at the same
  image scale, it compensates for this drawback by processing
  at a smaller image scale, since its accuracy remains
  considerably higher than FAB-MAP's

                              44
Thank you for Your Kind
        Attention
“DOUBT IS THE FATHER OF INVENTION”

                                     QUOTED BY GALILEO




                       45
Publication


 Journal
   1.   A. Kawewong and O. Hasegawa, "Classifying 3D Real-World Texture Images by
        Combining Maximum Response 8, 4th Order of Auto Correlation and Colortons," Jour.
        of Advanced Comp. Intelligence and Intelligent Informatics, vol. 11, no. 5, 2007.
   2.   A. Kawewong, Y. Honda, M. Tsuboyama, and O. Hasegawa, "Reasoning on the Self-
        Organizing Incremental Associative Memory for Online Robot Path Planning," IEICE
        Trans. Inf. & Sys., vol. E93-D, no. 3, 2009. (impact factor 0.369)
   3.   Y. Honda, A. Kawewong, M. Tsuboyama, and O. Hasegawa, "Acquisition of Place Cells
        by a Semi-supervised Neural Network and Autonomous Movement Control of a Robot,"
        IEICE Trans. D (in Japanese), 2009. (accepted for publication)
   4.   A. Kawewong, N. Tongprasit, S. Tangruamsub and O. Hasegawa, "Online and
        Incremental Appearance-based SLAM in Highly Dynamic Environments," Int'l Jour.
        Robotics Research (IJRR). (To appear in 2010; impact factor 2.882, rank #1 in robotics)
   5.   A. Kawewong, S. Tangruamsub and O. Hasegawa, "Position-Invariant Robust Features
        for Long-term Recognition of Dynamic Outdoor Scenes," IEICE Trans. Inf. & Sys.
        (conditionally accepted)


                                             46
Publication


 Conferences
  1.   A. Kawewong and O. Hasegawa, "3D Texture Classification by Using Pre-testing
       Stage and Reliability Table," in Proc. IEEE Int'l Conf. Image Processing (ICIP), 2005.
  2.   A. Kawewong and O. Hasegawa, "Combining Rotationally Variant and Invariant
       Features Based on Between-Class Error for 3D Texture Classification," IEEE Int'l
       Conf. on Computer Vision (ICCV) Workshop, 2005.
  3.   A. Kawewong, Y. Honda, M. Tsuboyama, O. Hasegawa, "A Common-Neural-
       Pattern Based Reasoning for Mobile Robot Cognitive Mapping," in Proc. Int'l
       Conf. Neural Information Processing (ICONIP), 2008.
  4.   A. Kawewong, Y. Honda, M. Tsuboyama, O. Hasegawa, "Common-Patterns Based
       Mapping for Robot Navigation," in Proc. IEEE Int'l Conf. Robotics and Biomimetics
       (ROBIO), 2008.
  5.   S. Tangruamsub, M. Tsuboyama, A. Kawewong and O. Hasegawa, "Mobile Robot
       Vision-Based Navigation Using Self-Organizing and Incremental Neural
       Networks," in Proc. Int'l Joint Conf. Neural Networks (IJCNN), 2009.

                                          47
Publication


 Conferences
  6. A. Kawewong, S. Tangruamsub, and O. Hasegawa, "Wide-baseline Visible
     Features for Highly Dynamic Scene Recognition," in Proc. Int'l Conf.
     Computer Analysis of Images and Patterns (CAIP), 2009.
  7. N. Tongprasit, A. Kawewong and O. Hasegawa, "Data Partitioning
     Technique for Online and Incremental Visual SLAM," in Proc. Int’l Conf. on
     Neural Information Processing (ICONIP), 2009. (oral & student travel award)




                                      48

CNN vs SIFT-based Visual Localization - Laura Leal-Taixé - UPC Barcelona 2018
Universitat Politècnica de Catalunya
 
2007 112
2007 1122007 112
2007 112
guest987b6
 
SLAM Zero to One
SLAM Zero to OneSLAM Zero to One
SLAM Zero to One
Gavin Gao
 
Deliberately Planning and Acting for Angry Birds with Refinement Methods
Deliberately Planning and Acting for Angry Birds with Refinement MethodsDeliberately Planning and Acting for Angry Birds with Refinement Methods
Deliberately Planning and Acting for Angry Birds with Refinement Methods
Ruofei Du
 

Similar to Dr.Kawewong Ph.D Thesis (20)

Practical Digital Image Processing 4
Practical Digital Image Processing 4Practical Digital Image Processing 4
Practical Digital Image Processing 4
 
Accommodation-invariant Computational Near-eye Displays - SIGGRAPH 2017
Accommodation-invariant Computational Near-eye Displays - SIGGRAPH 2017Accommodation-invariant Computational Near-eye Displays - SIGGRAPH 2017
Accommodation-invariant Computational Near-eye Displays - SIGGRAPH 2017
 
J017377578
J017377578J017377578
J017377578
 
Real-time Moving Object Detection using SURF
Real-time Moving Object Detection using SURFReal-time Moving Object Detection using SURF
Real-time Moving Object Detection using SURF
 
Object Tracking with Instance Matching and Online Learning
Object Tracking with Instance Matching and Online LearningObject Tracking with Instance Matching and Online Learning
Object Tracking with Instance Matching and Online Learning
 
December 4, Project
December 4, ProjectDecember 4, Project
December 4, Project
 
PSanthanam.ppt
PSanthanam.pptPSanthanam.ppt
PSanthanam.ppt
 
Human Action Recognition in Videos Employing 2DPCA on 2DHOOF and Radon Transform
Human Action Recognition in Videos Employing 2DPCA on 2DHOOF and Radon TransformHuman Action Recognition in Videos Employing 2DPCA on 2DHOOF and Radon Transform
Human Action Recognition in Videos Employing 2DPCA on 2DHOOF and Radon Transform
 
An Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth EstimationAn Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth Estimation
 
Video Stitching using Improved RANSAC and SIFT
Video Stitching using Improved RANSAC and SIFTVideo Stitching using Improved RANSAC and SIFT
Video Stitching using Improved RANSAC and SIFT
 
Mapping mobile robotics
Mapping mobile roboticsMapping mobile robotics
Mapping mobile robotics
 
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
 
Lecture 06: Features and Uncertainty
Lecture 06: Features and UncertaintyLecture 06: Features and Uncertainty
Lecture 06: Features and Uncertainty
 
Real Time Human Posture Detection with Multiple Depth Sensors
Real Time Human Posture Detection with Multiple Depth SensorsReal Time Human Posture Detection with Multiple Depth Sensors
Real Time Human Posture Detection with Multiple Depth Sensors
 
"High-resolution 3D Reconstruction on a Mobile Processor," a Presentation fro...
"High-resolution 3D Reconstruction on a Mobile Processor," a Presentation fro..."High-resolution 3D Reconstruction on a Mobile Processor," a Presentation fro...
"High-resolution 3D Reconstruction on a Mobile Processor," a Presentation fro...
 
5 ray casting computer graphics
5 ray casting computer graphics5 ray casting computer graphics
5 ray casting computer graphics
 
CNN vs SIFT-based Visual Localization - Laura Leal-Taixé - UPC Barcelona 2018
CNN vs SIFT-based Visual Localization - Laura Leal-Taixé - UPC Barcelona 2018CNN vs SIFT-based Visual Localization - Laura Leal-Taixé - UPC Barcelona 2018
CNN vs SIFT-based Visual Localization - Laura Leal-Taixé - UPC Barcelona 2018
 
2007 112
2007 1122007 112
2007 112
 
SLAM Zero to One
SLAM Zero to OneSLAM Zero to One
SLAM Zero to One
 
Deliberately Planning and Acting for Angry Birds with Refinement Methods
Deliberately Planning and Acting for Angry Birds with Refinement MethodsDeliberately Planning and Acting for Angry Birds with Refinement Methods
Deliberately Planning and Acting for Angry Birds with Refinement Methods
 

More from SOINN Inc.

Original SOINN
Original SOINNOriginal SOINN
Original SOINN
SOINN Inc.
 
PhDThesis, Dr Shen Furao
PhDThesis, Dr Shen FuraoPhDThesis, Dr Shen Furao
PhDThesis, Dr Shen Furao
SOINN Inc.
 
SOIAM (SOINN-AM)
SOIAM (SOINN-AM)SOIAM (SOINN-AM)
SOIAM (SOINN-AM)
SOINN Inc.
 
学生さんへのメッセージ
学生さんへのメッセージ学生さんへのメッセージ
学生さんへのメッセージSOINN Inc.
 
超高速オンライン転移学習
超高速オンライン転移学習超高速オンライン転移学習
超高速オンライン転移学習SOINN Inc.
 

More from SOINN Inc. (6)

Original SOINN
Original SOINNOriginal SOINN
Original SOINN
 
PhDThesis, Dr Shen Furao
PhDThesis, Dr Shen FuraoPhDThesis, Dr Shen Furao
PhDThesis, Dr Shen Furao
 
SOINN PBR
SOINN PBRSOINN PBR
SOINN PBR
 
SOIAM (SOINN-AM)
SOIAM (SOINN-AM)SOIAM (SOINN-AM)
SOIAM (SOINN-AM)
 
学生さんへのメッセージ
学生さんへのメッセージ学生さんへのメッセージ
学生さんへのメッセージ
 
超高速オンライン転移学習
超高速オンライン転移学習超高速オンライン転移学習
超高速オンライン転移学習
 

Recently uploaded

UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
DianaGray10
 
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIEnchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Vladimir Iglovikov, Ph.D.
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Neo4j
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
Octavian Nadolu
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems S.M.S.A.
 
Large Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial ApplicationsLarge Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial Applications
Rohit Gautam
 
Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1
DianaGray10
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Neo4j
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
Neo4j
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
Matthew Sinclair
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
Neo4j
 
Mind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AIMind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AI
Kumud Singh
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
Claudio Di Ciccio
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
名前 です男
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 

Recently uploaded (20)

UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIEnchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 
Large Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial ApplicationsLarge Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial Applications
 
Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
 
Mind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AIMind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AI
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 

Dr.Kawewong Ph.D Thesis

  • 1. PIRF-Nav: An Online Incremental Appearance- based Localization and Mapping in Dynamic Environments Aram Kawewong Hasegawa Laboratory Department of Computational Intelligence and Systems Science Interdisciplinary Graduate School of Science and Engineering Tokyo Institute of Technology 1
  • 2. Introduction to SLAM  Simultaneous Localization and Mapping, or SLAM, is a navigation capability needed by every kind of mobile robot  In an unfamiliar environment, the robot must be able to perform two important tasks simultaneously  Mapping the new place if the place has never been visited previously  Localizing itself to some mapped place if the place has been visited before 2
  • 3. Appearance-based Localization and Mapping (FAB-MAP) 3
  • 4. Why Visual SLAM? What are the Challenges?  Why don’t we just use GPS?  GPS is not always reliable in a crowded city centre  GPS can only locate the coordinate/position of the agent, not the corresponding scene; how can the robot answer the question “look at this picture and tell me where it is?” or “have you ever visited this place before? Can you describe the nearby places?”  No false positives (false negatives are allowed)  If the robot is not confident, it should answer “this is a new place”. If the robot is to answer “this place is the same place as the place ….”, it must be 100% correct.  100% precision (all answers must be correct) 4
  • 5. Appearance-based Localization and Mapping vs. Place Recognition  Place Recognition (Computer Vision) — Input images: all testing images are known to come from somewhere in the map; Environment: closed; Precision: Precision-1 is not the main concern if the recall rate is reasonably high  Localization and Mapping (Robotics) — Input images: every input image is a testing image; it might come from somewhere in the map or it might be a previously unseen place; Environment: open; Precision: Precision-1 is the first-priority concern, since one false positive may lead to a serious error in navigation 5
  • 6. Appearance-based SLAM’s Common Objectives  100% precision with very high recall rates  Can run incrementally in an online manner  Life-long operation  Low computation time  Low memory consumption  Suitable for navigation in large-scale environments  Can solve 2 main problems:  Dynamical changes  Perceptual aliasing (different places that look similar)  Note:  Coordinate-based localization is not required here 6
  • 7. Visual SLAM’s Related Works 1. FAB-MAP (Cummins & Newman, IJRR’08)  Considering the efficiency at 100% precision, the recall rate obtained by FAB-MAP (a state-of-the-art method) is still not very high.  An offline process for dictionary generation is necessary. 2. Fast Incremental Bag-of-Words (Angeli et al., T-RO’08)  The system can run incrementally; an offline dictionary generation process is not needed.  Accuracy is reported to be less than or equal to that of FAB-MAP  Consumes much more memory than FAB-MAP 7
  • 8. What Do We Want? : PIRF-Nav’s Advantages  FAB-MAP (IJRR’08) vs. Inc. BoW (T-RO’08) vs. PIRF-Nav (proposed)  Ability to run incrementally without an offline dictionary generation process: No / Yes / Yes  Memory consumption: Low / High / Moderate  Ability to run in real time: Yes / Yes / Yes  Robustness against dynamical changes*: Moderate (~40% on City Centre) / Low (~20% on City Centre) / High (~85% on City Centre)  * The recall rate is considered at 100% precision 8
  • 9. Basic Idea & Concept of PIRF-Nav  Making use of PIRF, we can detect good landmarks for each individual place  The extracted PIRFs should be sufficiently informative to represent the place, so the system does not need a preliminarily generated visual vocabulary  The number of PIRFs is sufficiently small for use in real-time applications  Because PIRF is robust against dynamical changes of scenes, the PIRF-based visual SLAM (called PIRF-Nav) becomes an efficient online incremental visual SLAM 9
  • 10. Basic Idea of PIRFs (proposed)  Outdoor scenes generally include distant objects whose appearances are robust against changes in camera position  Averaging the “slow-moving” local features which capture such objects gives us fewer but more robust features 10
  • 11. PIRF Extraction Algorithm  An image sequence is processed with a sliding window (w = 3); matching local features across the window produces a sequence of matching index vectors (the slide shows example index vectors as a figure) 11
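The sliding-window extraction above can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis implementation: images are assumed to be represented as arrays of local descriptors (e.g., SIFT), a simple nearest-neighbour rule with a hypothetical distance threshold stands in for the matching-index computation, and a feature that matches through all w frames of the window is averaged into a PIRF.

```python
import numpy as np

def match_index(desc, next_descs, thresh=0.6):
    """Index of the nearest descriptor in the next frame, or -1 if the
    nearest one is farther than the (hypothetical) distance threshold."""
    dists = np.linalg.norm(next_descs - desc, axis=1)
    j = int(np.argmin(dists))
    return j if dists[j] < thresh else -1

def extract_pirfs(frames, w=3):
    """frames: list of (n_i, d) descriptor arrays from consecutive images.
    A descriptor that can be matched through all w frames of the sliding
    window is a 'slow-moving' feature; its matched descriptors are
    averaged into a single PIRF."""
    pirfs = []
    for start in range(len(frames) - w + 1):
        window = frames[start:start + w]
        for desc in window[0]:
            chain = [desc]
            cur = desc
            for nxt in window[1:]:
                j = match_index(cur, nxt)
                if j < 0:
                    break
                cur = nxt[j]
                chain.append(cur)
            if len(chain) == w:  # matched through the whole window
                pirfs.append(np.mean(chain, axis=0))
    return pirfs
```

Features belonging to nearby moving objects drift quickly between frames, fail the matching chain, and are discarded; only stable, distant features survive the window.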
  • 12. Briefly on PIRF’s Performance  Exp. 1: Scenes from Suzukakedai — Training: 580 images (640x428); Testing: 489 images (640x428)  Exp. 2: Scenes from O-okayama — Training: 450 images (640x428); Testing: 493 images (640x428) 12
  • 13. PIRF’s Performance  Recognition rates on Suzukakedai and O-okayama (chart): the top results are 93.46% and 77.48%, while the remaining compared results are 45.75%, 36.71%, 31.08%, 30.22%, 27.59%, 24.54%, 22.29%, and 18.23% 13
  • 14. Even With These Strong Changes, PIRF Still Works Well!!!  Highly dynamic changes in scenes  Illumination changes in scenes 14
  • 15. PIRF (City Centre Dataset)  Original descriptors (SIFT) vs. Position-Invariant Robust Feature (PIRF) (proposed) 15
  • 16. PIRF-Nav Processing Diagram (proposed)  Overall processing diagram  Step 1: Perform simple feature matching. The score is calculated based on the popular term frequency–inverse document frequency (tf-idf) weighting  Steps 2–3: Adapt the score by considering the neighbors and then perform normalization  Step 4: Perform a second integration over the score space for re-localization 16
  • 17. Notation Definition  At time t, a map of the environment is a collection of nt discrete and disjoint locations L = {L1, …, Lnt}  Each of these locations Li, which has been created from a past image Ii, has an associated model Mi  The model Mi is a set of PIRFs 17
  • 18. STEP 1: Simple Feature Matching  The current model Mt is compared with each of the mapped models M = {M0, …, Mnt} using standard feature matching with a distance threshold t2  Each matching outputs a similarity score s  M0 is the model of location L0, a virtual location for the event “no loop closure occurred at time t”  Based on the obtained scores s, the system proceeds to the next step if argmax(s) ≠ 0 18
  • 19. STEP 1: Simple Feature Matching (Continued)  The similarity score s is calculated by considering the term frequency–inverse document frequency (tf-idf) weighting (Sivic & Zisserman, ICCV’03): tf-idf = (niw / ni) · log(N / nw)  niw is the number of occurrences of visual word w in Mi  ni is the total number of visual words in Mi  nw is the number of models containing word w  N is the total number of all existing models 19
  • 20. STEP 1: Simple Feature Matching (Continued)  To be used with PIRF, the function is then converted to si = Σk=1..mi log(N / nk)  nk is the number of models Mj, 0 ≤ j ≤ nt, j ≠ i, containing PIRFs which match the kth PIRF of the input model Mt  mi is the number of all matched PIRFs between the input and the query model  The system proceeds to STEP 2 if and only if the maximum score does not belong to M0 and is greater than t1 20
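The converted score can be written down directly. The sketch below assumes PIRF matching between the input and a candidate model has already produced, for each matched PIRF k, the count n_k of other models containing a matching PIRF; the function name is illustrative.

```python
import math

def step1_score(n_matches_per_pirf, num_models):
    """s_i = sum over the m_i matched PIRFs k of log(N / n_k), where n_k
    is the number of other models containing a PIRF matching the k-th
    matched PIRF and N is the total number of models. Rare PIRFs (small
    n_k) contribute more, exactly as in tf-idf weighting."""
    return sum(math.log(num_models / n_k)
               for n_k in n_matches_per_pirf if n_k > 0)
```

For example, with N = 8 models, a PIRF found in only one other model contributes log 8, while a PIRF found in all eight models contributes nothing.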
  • 21. STEP 2: Considering Neighbors  Accepting or rejecting a loop-closure detection based on the score from only a single image is sensitive to noise  This can be handled by considering the similarity scores of neighboring image models: βi = Σk=i−l..i+l sk · pt(i, k)  The term pt(i, k) is the transition probability generated from a Gaussian on the distance in time between i and k  l stands for the number of neighbors examined 21
  • 22. STEP 3: Normalizing the Score  Done by considering the standard deviation σ and mean value μ over all scores  l indicates the number of neighbours taken into consideration  The β-scores are converted into normalized scores according to the equation zi = (βi − σ) / μ if βi ≥ ν, and zi = 1 otherwise, where ν = μ + σ 22
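The normalization can be sketched as follows. The exact form is reconstructed from the garbled slide (it matches the normalization used in Angeli et al.'s incremental BoW method), so treat it as an assumption rather than the thesis's verbatim equation.

```python
import statistics

def normalize_scores(beta):
    """Convert beta-scores to normalized scores using the mean mu and the
    (population) standard deviation sigma over all scores. Scores below
    nu = mu + sigma are flattened to 1 (no loop-closure evidence); scores
    above nu are scaled to z = (beta - sigma) / mu."""
    mu = statistics.mean(beta)
    sigma = statistics.pstdev(beta)
    nu = mu + sigma
    return [(b - sigma) / mu if b >= nu else 1.0 for b in beta]
```

Only scores that stand clearly above the bulk of the distribution keep a value greater than 1, so the subsequent threshold test operates on genuinely distinctive candidates.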
  • 23. STEP 4: Re-localization  The obtained location Li would be accepted as a loop closure if zi − σ > t2  Ideally, the neighboring model scores of location Lj should decrease symmetrically around the model score. However, scenes in dynamic environments usually contain moving objects that frequently cause occlusion, so the score around the assigned location may not be symmetrical. 23
  • 24. STEP 4: Re-localization (Sample Problems)  The location assigned in Step 3 does not have a symmetrical score  Performing one more summation can shift the assignment to the correct location 24
  • 25. STEP 4: Re-Localization  Therefore, we perform a second summation over the neighbouring score models to achieve a more accurate localization: z′i = Σk=i−l..i+l zk · pt(i, k)  The obtained normalized scores over all possible models determine the most probable loop-closure location Lc, where c = argmaxi z′i 25
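The second summation and the argmax can be sketched together. The Gaussian transition weights mirror STEP 2; the window size and sigma are illustrative parameters, not values from the thesis.

```python
import math

def relocalize(z, l=2, sigma=1.0):
    """Second summation z'_i = sum_{k=i-l}^{i+l} z_k * p_t(i, k) over the
    normalized scores, then c = argmax_i z'_i picks the loop-closure
    location. The Gaussian weights are normalized over the window."""
    n = len(z)
    zp = []
    for i in range(n):
        ks = list(range(max(0, i - l), min(n, i + l + 1)))
        w = [math.exp(-((i - k) ** 2) / (2 * sigma ** 2)) for k in ks]
        tot = sum(w)
        zp.append(sum(z[k] * wk / tot for k, wk in zip(ks, w)))
    c = max(range(n), key=lambda i: zp[i])
    return c, zp
```

When an occlusion produces an isolated spike in the raw scores, the raw maximum and the smoothed maximum can disagree; the second summation shifts the decision toward the location whose whole neighbourhood scores well, as the sample problem on the previous slide illustrates.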
  • 26. Results & Experiments: DATASETS  Three datasets have been used  City Centre (2474 images, 640 x 480): taken to address the problem of dynamical changes of scenes in the city centre.  New College (2146 images, 640 x 480): taken to address the problem of perceptual aliasing; the robot traveled to the same place many times, and many different places look very similar.  Suzukakedai (1079 images, 1920 x 1080): taken by a video camera with an omnidirectional lens attached, to address the problem of highly dynamical changes where a different event was organized (i.e., an open-campus event) 26
  • 27. Results & Experiments: DATASETS  City Centre 27
  • 28. Results & Experiments: DATASETS  New College 28
  • 29. Results & Experiments: DATASETS  Suzukakedai 29
  • 30. Results & Experiments: BASELINE  Among many visual SLAM methods, FAB-MAP (Cummins & Newman, IJRR’08) and the fast incremental BoW method of Angeli et al. (T-RO’08) are considered state-of-the-art.  Both of them are based on the bag-of-words scheme  Each of them offers different advantages  FAB-MAP  High accuracy, with offline dictionary generation  Angeli et al.  Accuracy lower than or equal to FAB-MAP, but with online incremental dictionary generation  PIRF-Nav must offer higher accuracy than FAB-MAP while being an online incremental method like Angeli et al. 30
  • 31. Evaluation on the Appearance-based Loop-closure Detection Problem  For each input image, a binary classification (new place / old place) decides whether a loop closure occurs; if not, the new place is added to the map; if so, an image-retrieval step finds and outputs the loop-closure location  Binary classification: PrecisionA = correct loop closures / all loop closures; RecallA = correct loop closures / all labeled loop closures  Image retrieval: PrecisionB = correctly retrieved images / all retrieved images; RecallB = correctly retrieved images / all labeled images 31
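The retrieval-side (B) metrics defined above can be computed directly once detections and ground truth are available. The sketch below uses hypothetical data structures (dicts mapping query image ids to location ids), not anything from the thesis.

```python
def precision_recall(detections, ground_truth):
    """detections: dict mapping query image id -> retrieved loop-closure
    id (queries with no detection are simply absent).
    ground_truth: dict mapping query image id -> correct loop-closure id
    (only queries that truly close a loop appear here).
    Returns (PrecisionB, RecallB):
    precision = correctly retrieved / all retrieved,
    recall = correctly retrieved / all labeled."""
    correct = sum(1 for q, r in detections.items()
                  if ground_truth.get(q) == r)
    precision = correct / len(detections) if detections else 1.0
    recall = correct / len(ground_truth) if ground_truth else 1.0
    return precision, recall
```

Sweeping the detection threshold and recomputing these two numbers traces out exactly the Precision B – Recall B curves shown in the result slides.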
  • 32. Evaluation on the Appearance-based Loop-closure Detection Problem  Ideally, performance should be evaluated by two graphs:  Precision A – Recall A curve  Precision B – Recall B curve  However, for compact representation, most works in visual SLAM use the Precision B – Recall B curve to show performance, because  The binary classification is currently not particularly problematic  The important challenge lies in the performance of image retrieval 32
  • 33. Evaluation on the Appearance-based Loop-closure Detection Problem (City Centre)  Precision A – Recall A: focusing only on the problem of saying “YES/NO, a loop closure is detected” is currently trivial  Precision B – Recall B: instead, given that the precision of the “YES/NO” loop-closure decision is 100%, it is much more interesting to see how accurately the system can retrieve the corresponding image 33
  • 34. Result 1: City Centre Vehicle Trajectory Loop Closure Detection PIRF-Nav (100% Precision) (proposed) FAB-MAP (100% Precision) 34
  • 35. Result 1 : City Centre (Precision-Recall Curve) 35
  • 36. Result 1: City Centre (Computation Time)  *It is noteworthy that all programs of PIRF-Nav were written in MATLAB while FAB-MAP was written in C. 36
  • 37. Result 2: New College Vehicle Trajectory Loop Closure Detection PIRF-Nav (100% Precision) (proposed) FAB-MAP (100% Precision) 37
  • 38. Result 2: New College (Precision- Recall Curve) 38
  • 39. Result 3: Suzukakedai Vehicle Trajectory Loop Closure Detection 39 PIRF-Nav (100% Precision)
  • 40. Result 3: Suzukakedai (Precision- Recall Curve) 40
  • 41. Result 4: Combined Datasets (Precision-Recall Curve)  Note: we did not test FAB-MAP in this experiment because FAB-MAP completely failed on the Suzukakedai dataset. Also, the results on City Centre and New College clearly imply that FAB-MAP would not achieve better accuracy in this experiment. 41
  • 42. Sample Matched Images (Dynamical Changes in Major Part of Scene) 42
  • 43. Sample Matched Images (Different View-Points) 43
  • 44. Conclusions  PIRF-Nav outperforms FAB-MAP in terms of accuracy, with more than 80% recall rate at 100% precision on all datasets provided by the authors  PIRF-Nav offers an online and incremental ability to run in very different environments  Although the computation time of PIRF-Nav at the same image scale is slower than FAB-MAP’s, PIRF-Nav compensates for this drawback by processing at a smaller image scale, since the accuracy remains considerably higher than FAB-MAP’s 44
  • 45. Thank you for Your Kind Attention “DOUBT IS THE FATHER OF INVENTION” QUOTED BY GALILEO 45
  • 46. Publication  Journal 1. A. Kawewong and O. Hasegawa, "Classifying 3D Real-World Texture Images by Combining Maximum Response 8, 4th Order of Auto Correlation and Colortons," Jour. of Advanced Comp. Intelligence and Intelligent Informatics, vol. 11, no. 5, 2007. 2. A. Kawewong, Y. Honda, M. Tsuboyama, and O. Hasegawa, "Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning," IEICE Trans. Inf. & Sys., vol. E93-D, no. 3, 2009. (impact factor 0.369) 3. Y. Honda, A. Kawewong, M. Tsuboyama, and O. Hasegawa, "Place-Cell Acquisition by a Semi-supervised Neural Network and Autonomous Locomotion Control of a Robot" (in Japanese), IEICE Trans. D, 2009, accepted. 4. A. Kawewong, N. Tongprasit, S. Tangruamsub and O. Hasegawa, "Online and Incremental Appearance-based SLAM in Highly Dynamic Environments," Int’l Jour. Robotics Research (IJRR). (To appear in 2010; impact factor 2.882, rank #1 in robotics) 5. A. Kawewong, S. Tangruamsub and O. Hasegawa, "Position-Invariant Robust Features for Long-term Recognition of Dynamic Outdoor Scenes," IEICE Trans. Inf. & Sys. (conditionally accepted) 46
  • 47. Publication  Conferences 1. A. Kawewong and O. Hasegawa, "3D Texture Classification by Using Pre-testing Stage and Reliability Table, " IEEE Proc. International Conference on Image Processing (ICIP), (2005). 2. A. Kawewong and O. Hasegawa, "Combining Rotationally Variant and Invariant Features Based on Between-Class Error for 3D Texture Classification, " IEEE Int’l Conf. On Computer Vision (ICCV) Workshop, 2005. 3. A. Kawewong, Y. Honda, M. Tsuboyama, O. Hasegawa, "A Common-Neural- Pattern Based Reasoning for Mobile Robot Cognitive Mapping, " In Proc. Int’l Conf. Neural Information Processing (ICONIP), 2008. 4. A. Kawewong, Y. Honda, M. Tsuboyama, O. Hasegawa, "Common-Patterns Based Mapping for Robot Navigation, " in Proc. IEEE Int’l Conf. Robotics and Biomimetics (ROBIO), 2008. 5. S. Tangruamsub, M. Tsuboyama, A. Kawewong and O. Hasegawa, "Mobile Robot Vision-Based Navigation Using Self-Organizing and Incremental Neural Networks," in Proc. Int’l Joint Conf. Neural Networks (IJCNN), 2009. 47
  • 48. Publication  Conferences 6. A. Kawewong, S. Tangruamsub, and O. Hasegawa, "Wide-baseline Visible Features for Highly Dynamic Scene Recognition," in Proc. Int'l Conf. Computer Analysis of Images and Patterns (CAIP), 2009. 7. N. Tongprasit, A. Kawewong and O. Hasegawa, "Data Partitioning Technique for Online and Incremental Visual SLAM," in Proc. Int’l Conf. on Neural Information Processing (ICONIP), 2009. (oral & student travel award) 48