CVPR 2007 Object Category Recognition, Part 1: Bag-of-Words Models

Slide notes
  • For that we will use a method called probabilistic latent semantic analysis. pLSA can be thought of as a matrix decomposition. Here is our term-document matrix (documents, words), and we want to find topic vectors common to all documents and mixture coefficients P(z|d) specific to each document. Note that these are all probabilities which sum to one. So a column here is expressed as a convex combination of the topic vectors. What we would like to see is that the topics correspond to objects, so that an image is expressed as a mixture of different objects and backgrounds.
  • An image is a mixture of learned topic vectors and mixing weights. Fit the model and use the learned mixing coefficients to classify an image; in particular, assign it to the topic with the highest P(z|d).
  • We use a maximum-likelihood approach to fit the parameters of the model: we write down the likelihood of the whole collection and maximize it using EM. M and N range over all visual words and all documents. P(w|d) factorizes as on the previous slide; the entries in those matrices are our parameters, and n(w,d) are the observed counts in our data.
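  The factorization and the likelihood these notes refer to are the standard pLSA equations (Hofmann 1999); written out with K topics, together with the EM updates:

      P(w_i|d_j) = Σ_{k=1..K} P(w_i|z_k) P(z_k|d_j)

      L = ∏_{i=1..M} ∏_{j=1..N} P(w_i|d_j)^{n(w_i,d_j)}

      E-step:  P(z_k|d_j,w_i) = P(w_i|z_k) P(z_k|d_j) / Σ_l P(w_i|z_l) P(z_l|d_j)

      M-step:  P(w_i|z_k) ∝ Σ_j n(w_i,d_j) P(z_k|d_j,w_i)
               P(z_k|d_j) ∝ Σ_i n(w_i,d_j) P(z_k|d_j,w_i)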
  • Transcript

    • 1. Part 1: Bag-of-words models by Li Fei-Fei (Princeton)
    • 2. Related works
      • Early “bag of words” models: mostly texture recognition
        • Cula & Dana, 2001; Leung & Malik 2001; Mori, Belongie & Malik, 2001; Schmid 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003;
      • Hierarchical Bayesian models for documents (pLSA, LDA, etc.)
        • Hofmann 1999; Blei, Ng & Jordan, 2003; Teh, Jordan, Beal & Blei, 2004
      • Object categorization
        • Csurka, Bray, Dance & Fan, 2004; Sivic, Russell, Efros, Freeman & Zisserman, 2005; Sudderth, Torralba, Freeman & Willsky, 2005;
      • Natural scene categorization
        • Vogel & Schiele, 2004; Fei-Fei & Perona, 2005; Bosch, Zisserman & Munoz, 2006
    • 3. Object Bag of ‘words’
    • 4. Analogy to documents
      Example document 1: “Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.” Highlighted words: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel.
      Example document 2: “China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.” Highlighted words: China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, trade, value.
    • 5. A clarification: definition of “BoW”
      • Looser definition
        • Independent features
    • 6. A clarification: definition of “BoW”
      • Looser definition
        • Independent features
      • Stricter definition
        • Independent features
        • histogram representation
    • 7. [Pipeline diagram: feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers → category decision, spanning a learning phase and a recognition phase]
    • 8. Representation: 1. feature detection & representation, 2. codewords dictionary, 3. image representation
    • 9. 1. Feature detection and representation
    • 10. 1. Feature detection and representation
      • Regular grid
        • Vogel & Schiele, 2003
        • Fei-Fei & Perona, 2005
    • 11. 1. Feature detection and representation
      • Regular grid
        • Vogel & Schiele, 2003
        • Fei-Fei & Perona, 2005
      • Interest point detector
        • Csurka, et al. 2004
        • Fei-Fei & Perona, 2005
        • Sivic, et al. 2005
    • 12. 1. Feature detection and representation
      • Regular grid
        • Vogel & Schiele, 2003
        • Fei-Fei & Perona, 2005
      • Interest point detector
        • Csurka, Bray, Dance & Fan, 2004
        • Fei-Fei & Perona, 2005
        • Sivic, Russell, Efros, Freeman & Zisserman, 2005
      • Other methods
        • Random sampling (Vidal-Naquet & Ullman, 2002)
        • Segmentation-based patches (Barnard, Duygulu, Forsyth, de Freitas, Blei & Jordan, 2003)
    • 13. 1. Feature detection and representation: detect patches [Mikolajczyk & Schmid ’02; Matas, Chum, Urban & Pajdla ’02; Sivic & Zisserman ’03] → normalize patch → compute SIFT descriptor [Lowe ’99]. Slide credit: Josef Sivic
    • 14. 1. Feature detection and representation …
    • 15. 2. Codewords dictionary formation …
    • 16. 2. Codewords dictionary formation: vector quantization. Slide credit: Josef Sivic
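      A minimal sketch of this dictionary-formation step, assuming local descriptors (e.g. 128-d SIFT) have already been extracted and stacked row-wise into an array; scikit-learn's KMeans stands in for the vector quantizer, and the codeword count of 300 is an illustrative choice, not a value from the slides:

        import numpy as np
        from sklearn.cluster import KMeans

        def build_codebook(descriptors, n_codewords=300, seed=0):
            """Cluster local descriptors; the cluster centers become the codewords."""
            # descriptors: array of shape (n_patches, descriptor_dim)
            km = KMeans(n_clusters=n_codewords, n_init=10, random_state=seed)
            km.fit(descriptors)
            return km.cluster_centers_    # codebook: (n_codewords, descriptor_dim)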
    • 17. 2. Codewords dictionary formation Fei-Fei et al. 2005
    • 18. Image patch examples of codewords Sivic et al. 2005
    • 19. 3. Image representation: a histogram of codeword frequencies (x-axis: codewords, y-axis: frequency)
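      The image-representation step then assigns each patch descriptor to its nearest codeword and counts the assignments; a sketch under the same assumptions as the dictionary code above:

        import numpy as np

        def bow_histogram(descriptors, codebook):
            """Represent an image as a normalized histogram of codeword counts."""
            # Squared Euclidean distance from every descriptor to every codeword.
            d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            assignments = d2.argmin(axis=1)    # index of nearest codeword per patch
            hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
            return hist / hist.sum()           # normalize to a distribution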
    • 20. Representation: 1. feature detection & representation, 2. codewords dictionary, 3. image representation
    • 21. Learning and Recognition: codewords dictionary → category models (and/or) classifiers → category decision
    • 22. Learning and Recognition: category models (and/or) classifiers
      • Generative method:
        • - graphical models
      • Discriminative method:
        • - SVM
    • 23. 2 generative models
      • Naïve Bayes classifier
        • Csurka, Bray, Dance & Fan, 2004
      • Hierarchical Bayesian text models (pLSA and LDA)
        • Background: Hofmann 2001; Blei, Ng & Jordan, 2003
        • Object categorization: Sivic et al. 2005, Sudderth et al. 2005
        • Natural scene categorization: Fei-Fei et al. 2005
    • 24. First, some notation
      • w_n: each patch in an image
        • w_n = [0,0,…,1,…,0,0]^T
      • w: a collection of all N patches in an image
        • w = [w_1, w_2, …, w_N]
      • d_j: the j-th image in an image collection
      • c: category of the image
      • z: theme or topic of the patch
    • 25. Case #1: the Naïve Bayes model (Csurka et al. 2004). [Plate diagram: c → w, plate N.] Object class decision: c* = argmax_c P(c|w) ∝ P(c) ∏_{n=1..N} P(w_n|c), where P(c) is the prior probability of the object classes and ∏_n P(w_n|c) is the image likelihood given the class. (A code sketch follows after slide 27.)
    • 26. Csurka et al. 2004
    • 27. Csurka et al. 2004
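      A minimal sketch of the Naïve Bayes decision rule from slide 25, operating on codeword count vectors; the Laplace smoothing constant alpha is an added assumption for numerical safety, not a detail taken from Csurka et al.:

        import numpy as np

        def train_nb(counts, labels, n_classes, alpha=1.0):
            """Estimate log P(c) and log P(w|c) from per-image codeword counts."""
            # counts: (n_images, n_codewords); labels: integer class ids in [0, n_classes)
            log_prior = np.log(np.bincount(labels, minlength=n_classes) / len(labels))
            log_pwc = np.empty((n_classes, counts.shape[1]))
            for c in range(n_classes):
                cc = counts[labels == c].sum(axis=0) + alpha   # Laplace smoothing
                log_pwc[c] = np.log(cc / cc.sum())
            return log_prior, log_pwc

        def classify_nb(count_vec, log_prior, log_pwc):
            """Object class decision: argmax_c [ log P(c) + sum_w n(w) log P(w|c) ]."""
            return int(np.argmax(log_prior + log_pwc @ count_vec))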
    • 28. Case #2: Hierarchical Bayesian text models. Probabilistic Latent Semantic Analysis (pLSA): Hofmann, 2001 [plate diagram: d → z → w, plates N, D]. Latent Dirichlet Allocation (LDA): Blei et al., 2001 [plate diagram: c → z → w with a Dirichlet prior, plates N, D].
    • 29. Case #2: Hierarchical Bayesian text models. pLSA for objects (Sivic et al. ICCV 2005) [plate diagram: d → z → w; example topic: “face”]
    • 30. Case #2: Hierarchical Bayesian text models. LDA for natural scenes (Fei-Fei et al. ICCV 2005) [plate diagram: c → z → w with a Dirichlet prior; example category: “beach”]
    • 31. Case #2: the pLSA model [plate diagram: d → z → w, plates N, D]
    • 32. Case #2: the pLSA model. P(w_i|d_j) = Σ_k P(w_i|z_k) P(z_k|d_j): the observed codeword distributions P(w|d) decompose into codeword distributions per theme (topic), P(w|z), mixed by the theme distributions per image, P(z|d). Slide credit: Josef Sivic
    • 33. Case #2: Recognition using pLSA Slide credit: Josef Sivic
    • 34. Case #2: Learning the pLSA parameters. Maximize the likelihood of the data using EM: L = ∏_{i=1..M} ∏_{j=1..N} P(w_i|d_j)^{n(w_i,d_j)}, where n(w_i,d_j) is the observed count of word i in document j, M is the number of codewords, and N is the number of images. Slide credit: Josef Sivic
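      A compact NumPy sketch of this EM procedure, implementing the E- and M-steps written out after the slide notes at the top. Shapes follow the slide's notation (M codewords, N images, K topics); the dense (M, K, N) responsibility array is a simplification suited only to demo-scale data:

        import numpy as np

        def plsa(counts, n_topics, n_iters=100, seed=0):
            """Fit pLSA to an (M codewords x N images) count matrix n(w,d) by EM."""
            rng = np.random.default_rng(seed)
            M, N = counts.shape
            p_w_z = rng.random((M, n_topics))   # P(w|z), columns sum to 1
            p_w_z /= p_w_z.sum(axis=0)
            p_z_d = rng.random((n_topics, N))   # P(z|d), columns sum to 1
            p_z_d /= p_z_d.sum(axis=0)
            for _ in range(n_iters):
                # E-step: responsibilities P(z|d,w) for each (word, topic, image).
                joint = p_w_z[:, :, None] * p_z_d[None, :, :]   # (M, K, N)
                joint /= joint.sum(axis=1, keepdims=True) + 1e-12
                # M-step: reweight responsibilities by the observed counts n(w,d).
                weighted = joint * counts[:, None, :]
                p_w_z = weighted.sum(axis=2)
                p_w_z /= p_w_z.sum(axis=0)
                p_z_d = weighted.sum(axis=0)
                p_z_d /= p_z_d.sum(axis=0)
            return p_w_z, p_z_d

      For recognition (slide 33), the notes' recipe applies: hold P(w|z) fixed, rerun the same updates to estimate P(z|d) for a test image, and assign the image to the topic with the highest P(z|d).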
    • 35. Demo
      • Course website
    • 36. Task: face detection, with no labeling
    • 37. Demo: feature detection
      • Output of crude feature detector
        • Find edges
        • Draw points randomly from edge set
        • Draw from uniform distribution to get scale
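      A sketch of such a crude detector, assuming OpenCV for the edge map; the Canny thresholds and the scale range are illustrative guesses, not values from the demo code:

        import numpy as np
        import cv2

        def random_edge_keypoints(gray, n_points=200, scale_range=(10, 30), seed=0):
            """Sample keypoints at random edge locations with random scales."""
            # gray: uint8 grayscale image
            rng = np.random.default_rng(seed)
            edges = cv2.Canny(gray, 100, 200)          # binary edge map
            ys, xs = np.nonzero(edges)                 # coordinates of edge pixels
            idx = rng.choice(len(xs), size=min(n_points, len(xs)), replace=False)
            scales = rng.uniform(*scale_range, size=len(idx))    # uniform scale draw
            return np.stack([xs[idx], ys[idx], scales], axis=1)  # rows: (x, y, scale)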
    • 38. Demo: learnt parameters [figures: codeword distributions per theme (topic), and theme distributions per image]
      • Learning the model: do_plsa('config_file_1')
      • Evaluate and visualize the model: do_plsa_evaluation('config_file_1')
    • 39. Demo: recognition examples
    • 40. Demo: categorization results
      • Performance of each theme
    • 41. Learning and Recognition: category models (and/or) classifiers
      • Generative method:
        • - graphical models
      • Discriminative method:
        • - SVM
    • 42. Discriminative methods based on ‘bag of words’ representation [figure: zebra vs. non-zebra images separated by a decision boundary in feature space]
    • 43. Discriminative methods based on ‘bag of words’ representation
      • Grauman & Darrell, 2005, 2006:
        • SVM w/ Pyramid Match kernels
      • Others
        • Csurka, Bray, Dance & Fan, 2004
        • Serre & Poggio, 2005
    • 44. Summary: pyramid match kernel, an optimal partial matching between sets of features (Grauman & Darrell, 2005). Slide credit: Kristen Grauman
    • 45. Pyramid match (Grauman & Darrell 2005). Histogram intersection: I(H_X, H_Y) = Σ_b min(H_X(b), H_Y(b)). Slide credit: Kristen Grauman
    • 46. Pyramid match (Grauman & Darrell 2005). The difference in histogram intersections across levels counts the number of new pairs matched: N_i = I_i - I_{i-1}, i.e., matches at this level minus matches at the previous level. Slide credit: Kristen Grauman
    • 47. Pyramid match kernel: K(Ψ(X), Ψ(Y)) = Σ_i (1/2^i) N_i, where Ψ denotes the histogram pyramids, N_i is the number of newly matched pairs at level i, and 1/2^i is a measure of difficulty of a match at level i
      • Weights inversely proportional to bin size
      • Normalize kernel values to avoid favoring large sets
      Slide credit: Kristen Grauman
    • 48. Example pyramid match Level 0 Slide credit: Kristen Grauman
    • 49. Example pyramid match Level 1 Slide credit: Kristen Grauman
    • 50. Example pyramid match Level 2 Slide credit: Kristen Grauman
    • 51. Example pyramid match [figure comparing the pyramid match against the optimal match]. Slide credit: Kristen Grauman
    • 52. Summary: pyramid match kernel, an optimal partial matching between sets of features: K = Σ_i (1/2^i) N_i, with N_i the number of new matches at level i and 1/2^i the difficulty of a match at level i. Slide credit: Kristen Grauman
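      A minimal sketch of a uniform-bin pyramid match implementing the formula above, for feature sets X and Y given as arrays of points in [0, 1)^d; it is illustrative only, not Grauman & Darrell's released implementation (which uses data-dependent bins):

        from collections import Counter
        import numpy as np

        def grid_histogram(points, bins_per_dim):
            """Sparse multidimensional histogram: count of points per occupied cell."""
            idx = np.clip((points * bins_per_dim).astype(int), 0, bins_per_dim - 1)
            return Counter(map(tuple, idx))

        def intersection(hx, hy):
            """Histogram intersection: sum over cells of min(count_x, count_y)."""
            return sum(min(c, hy[cell]) for cell, c in hx.items() if cell in hy)

        def pyramid_match(X, Y, n_levels=4):
            """K(X, Y) = sum_i (1/2^i) * N_i, N_i = newly matched pairs at level i."""
            k, prev = 0.0, 0
            for i in range(n_levels):
                bins = 2 ** (n_levels - 1 - i)    # finest grid first, then coarsen
                m = intersection(grid_histogram(X, bins), grid_histogram(Y, bins))
                k += (m - prev) / 2 ** i          # weight inversely to bin size
                prev = m
            return k

      Normalizing by sqrt(pyramid_match(X, X) * pyramid_match(Y, Y)) implements the slide's “normalize kernel values to avoid favoring large sets”.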
    • 53. Object recognition results
      • ETH-80 database, 8 object classes (Eichhorn and Chapelle 2004)
      • Features:
        • Harris detector
        • PCA-SIFT descriptor, d = 10
      • Recognition rates: pyramid match 84%, Bhattacharyya affinity [Kondor & Jebara] 85%, match kernel [Wallraven et al.] 84%
      Slide credit: Kristen Grauman
    • 54. Object recognition results
      • Caltech objects database, 101 object classes
      • Features:
        • SIFT detector
        • PCA-SIFT descriptor, d = 10
      • 30 training images / class
      • 43% recognition rate
      • (1% chance performance)
      • 0.002 seconds per match
      Slide credit: Kristen Grauman
    • 55. [Pipeline diagram: feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers → category decision, spanning a learning phase and a recognition phase]
    • 56. What about spatial info?
    • 57. What about spatial info?
      • Generative models
        • Sudderth, Torralba, Freeman & Willsky, 2005, 2006
        • Niebles & Fei-Fei, CVPR 2007
    • 58. What about spatial info?
      • Generative models
        • Sudderth, Torralba, Freeman & Willsky, 2005, 2006
        • Niebles & Fei-Fei, CVPR 2007
      [Diagram: image words w generated by parts P1, P2, P3, P4 plus a background Bg]
    • 59. What about spatial info?
      • Generative models
      • Discriminative methods
        • Lazebnik, Schmid & Ponce, 2006
    • 60. Invariance issues
      • Scale and rotation
        • Implicit
        • Detectors and descriptors
      Kadir and Brady, 2003
    • 61. Invariance issues
      • Scale and rotation
      • Occlusion
        • Implicit in the models
        • Codeword distribution: small variations
        • (In theory) Theme (z) distribution: different occlusion patterns
    • 62. Invariance issues
      • Scale and rotation
      • Occlusion
      • Translation
        • Encode (relative) location information
          • Sudderth, Torralba, Freeman & Willsky, 2005, 2006
          • Niebles & Fei-Fei, 2007
    • 63. Invariance issues (Fergus, Fei-Fei, Perona & Zisserman, 2005)
      • Scale and rotation
      • Occlusion
      • Translation
      • View point (in theory)
        • Codewords: detector and descriptor
        • Theme distributions: different view points
    • 64. Model properties
      • Intuitive
        • Analogy to documents
      [Repeats the example document and highlighted words from slide 4]
    • 65. Model properties
      • Intuitive
        • Analogy to documents
        • Analogy to human vision
      Olshausen and Field, 2004; Fei-Fei and Perona, 2005
    • 66. Model properties
      • Intuitive
      • Generative models
        • Convenient for weakly- or un-supervised, incremental training
        • Prior information
        • Flexibility (e.g. HDP)
      Li, Wang & Fei-Fei, CVPR 2007; Sivic, Russell, Efros, Freeman & Zisserman, 2005
      [Figure: dataset → model → classification, with incremental learning]
    • 67. Model properties
      • Intuitive
      • Generative models
      • Discriminative method
        • Computationally efficient
      Grauman et al. CVPR 2005
    • 68. Model properties
      • Intuitive
      • Generative models
      • Discriminative method
      • Learning and recognition relatively fast
        • Compared to other methods
    • 69. Weaknesses of the model
      • No rigorous geometric information of the object components
      • It is intuitive to most of us that objects are made of parts, yet the model carries no such information
      • Not extensively tested yet for
        • View point invariance
        • Scale invariance
      • Segmentation and localization unclear
