Exploiting Hierarchical Context on a Large Database of Object Categories
Exploiting Hierarchical Context on a Large Database of Object Categories -- Paper Presentation

  • 1. Exploiting Hierarchical Context on a Large Database of Object Categories
    Myung Jin Choi, Joseph J. Lim, Antonio Torralba, Alan S. Willsky
    Proceedings of CVPR 2010
  • 2. The SUN 09 Dataset
    • 12,000 annotated images (indoors and outdoors)
    • A large number of scene categories, 200 object categories, and 152,000 annotated object instances (labeled using LabelMe)
    • The average object size is 5% of the image size
    • A typical image contains 7 different object categories
    (Figure: comparison of PASCAL 07 and SUN 09)
  • 6. Tree-structured Context Model
    Context Model = Prior Model + Measurement Model
    • Prior Model: Co-occurrences Prior and Spatial Prior
    • Measurement Model: Global Image Features and Local Detector Outputs
  • 7. Prior Model
    Co-occurrences Prior: Encodes the co-occurrence statistics of object categories using a binary tree model
    Spatial Prior: Captures the relative spatial positions of objects when they appear together
  • 8. Prior on Spatial Locations
    • Let L-x, L-y, and L-z denote an object's location in 3D world coordinates. L-x is ignored (it is uninformative); L-y is modeled as Gaussian and L-z as log-normal.
    • Location variable: L-i = (L-y, log L-z)
    • The L-i's are modeled as jointly Gaussian; when there are multiple instances of the same category, L-i represents the median location of all instances.
    The joint distribution of all binary and Gaussian variables is finally represented as:
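The slide's equation was an image and did not survive extraction. A plausible reconstruction, following the tree-structured factorization described on this and the preceding slides (pa(i) denotes the parent of node i in the learned tree; the exact conditioning set for each L-i may differ slightly in the paper), is:

```latex
p(b, L) \;=\; p(b_{\mathrm{root}})\, p(L_{\mathrm{root}} \mid b_{\mathrm{root}})
\prod_{i} p(b_i \mid b_{\mathrm{pa}(i)})\, p(L_i \mid L_{\mathrm{pa}(i)}, b_i)
```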
  • 11. Measurement Model
    Incorporating Global Image Features: Uses the gist descriptor of the image (scene) to predict the presence of each object
    Integrating Local Detector Outputs: Candidate windows are taken from a baseline object detector, the likelihood of each being a correct detection is learned from the training set, and the expected location of each object is obtained
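One plausible form for the gist-based presence term is a per-category logistic model on the global feature vector. The sketch below is a toy illustration, not the paper's trained model; the feature dimension, weights, and bias are made up.

```python
import math

def presence_prob(gist, weights, bias):
    """p(b_i = 1 | g): a logistic model on the global gist feature vector."""
    activation = sum(w * g for w, g in zip(weights, gist)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Made-up 4-dimensional gist vector and weights for one object category.
p = presence_prob(gist=[0.2, -0.1, 0.5, 0.3],
                  weights=[1.0, 0.5, 2.0, -1.0],
                  bias=-0.5)
```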
  • 12. Alternating Inference
    Given the gist g, the candidate window locations W, and their scores s, the algorithm infers the presence of objects b, the correct detections c, and the expected object locations L by solving an optimization problem, alternating between the discrete and the continuous variables.
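The alternating scheme can be pictured as coordinate ascent on a joint objective over binary presence variables and a continuous location. The objective below is a stand-in for illustration only, not the paper's actual model.

```python
# Toy coordinate ascent: alternate between (1) fixing the location estimate
# and choosing which candidates are "on", and (2) fixing the active set and
# re-estimating the location. Scores and locations are made up.

def infer(scores, mus, iters=10):
    """scores[i]: detector score of candidate i; mus[i]: its 1-D location."""
    L = 0.0              # initial location estimate
    b = [0] * len(scores)
    for _ in range(iters):
        # Step 1: given L, activate each candidate whose score outweighs
        # its squared distance from the current location estimate.
        b = [1 if s - (L - m) ** 2 > 0 else 0 for s, m in zip(scores, mus)]
        # Step 2: given b, set L to the mean location of active candidates
        # (the maximizer of the quadratic terms).
        active = [m for bi, m in zip(b, mus) if bi]
        if active:
            L = sum(active) / len(active)
    return b, L

b, L = infer(scores=[2.0, 1.5, 0.1], mus=[1.0, 1.2, 5.0])
```

Each step can only increase the toy objective, so the alternation converges; here the far-away low-score candidate is switched off and the location settles between the two consistent detections.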
  • 13. Learning the Dependency Structure
    The dependency structure among objects is learnt from a set of fully labeled images using the Chow-Liu algorithm:
    • It computes the empirical mutual information of all pairs of variables (using sample values in the set of labeled images)
    • It then finds the maximum weight spanning tree, with edge weights equal to the mutual information
    • Once a tree structure is learned, a root node is arbitrarily selected
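The two steps above can be sketched directly: estimate pairwise mutual information from binary presence samples, then run a maximum weight spanning tree (Kruskal's algorithm here). Variable layout and sample data are illustrative.

```python
import math
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) of two binary sample lists."""
    n = len(xs)
    p_x = [sum(1 for x in xs if x == a) / n for a in (0, 1)]
    p_y = [sum(1 for y in ys if y == b) / n for b in (0, 1)]
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = sum(1 for x, y in zip(xs, ys) if x == a and y == b) / n
            if p_ab > 0:
                mi += p_ab * math.log(p_ab / (p_x[a] * p_y[b]))
    return mi

def chow_liu_tree(samples):
    """samples[i] is the list of binary presence values for variable i."""
    k = len(samples)
    edges = sorted(
        ((mutual_information(samples[i], samples[j]), i, j)
         for i, j in combinations(range(k), 2)),
        reverse=True)
    # Kruskal's algorithm: greedily add the highest-MI edge that
    # does not create a cycle (union-find tracks components).
    parent = list(range(k))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Three variables: 0 and 1 are strongly correlated, 2 is nearly independent.
samples = [
    [0, 0, 1, 1, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 1, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]
tree = chow_liu_tree(samples)
```

The learned tree connects the correlated pair first, then attaches the remaining variable through whichever edge carries more mutual information.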
  • 16. Results
    Performance on PASCAL 07
    Object Recognition Performance
  • 17. Results
    Performance on SUN 09
    Image Annotation Performance
  • 18. Results
    Performance on SUN 09
  • 19. Detecting Images out of Context
    • Database: 26 images, each with one or more objects out of context
    • All objects have ground-truth object labels, except for the one under test
    • The context model correctly identifies the most unexpected object in the scene
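Once the context model assigns each labeled object a probability of appearing in its scene, picking the most unexpected object reduces to ranking objects by that probability. A toy sketch, with made-up labels and probabilities:

```python
def most_unexpected(context_probs):
    """Return the label the context model finds least likely in this scene."""
    return min(context_probs, key=context_probs.get)

# Made-up context probabilities for the objects present in one scene.
label = most_unexpected({"sky": 0.95, "road": 0.90, "boat": 0.02})
```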
  • Conclusion
    • The new dataset, SUN 09, contains richer contextual information than PASCAL 07, which was originally designed for training object detectors
    • The paper demonstrates that the contextual information learned from SUN 09 significantly improves the accuracy of object recognition tasks, and can even be used to identify out-of-context scenes
    • The tree-based context model enables efficient and coherent modeling of regularities among object categories, and can easily scale to capture dependencies among over 100 object categories