Prior Model<br />Co-occurrence Prior: Encodes the co-occurrence statistics of object categories using a tree model over binary presence variables<br />Spatial Prior: Captures the typical relative spatial positions of objects when they appear together<br />
Prior on Spatial Locations<br /><ul><li> Given L-x, L-y, and L-z as an object’s location in 3D world coordinates, L-x is ignored (it is uninformative), L-y is modeled as Gaussian, and L-z as log-normal.
The L-i’s are modeled as jointly Gaussian, and in the case of multiple instances of the same category, L-i represents the median location of all instances.</li></ul>The joint distribution of all binary and Gaussian variables then factorizes along the learned tree.<br />
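As an illustration, the per-category location prior can be fit as below. This is a minimal sketch assuming independent 1-D fits of L-y (Gaussian) and log L-z (log-normal); the paper's full model is a joint Gaussian over all categories, and the `(Ly, Lz)` sample format here is an assumption.

```python
import math

def fit_location_prior(locations):
    """Toy per-category location prior: L-y as a Gaussian and
    L-z as a log-normal (i.e., log L-z is Gaussian).
    locations: list of (Ly, Lz) median locations, one per training image.
    Simplifying assumption: independent 1-D fits; the paper models
    the locations of all categories jointly."""
    ly = [y for y, _ in locations]
    log_lz = [math.log(z) for _, z in locations]

    def mean_var(xs):
        # maximum-likelihood mean and variance
        m = sum(xs) / len(xs)
        return m, sum((x - m) ** 2 for x in xs) / len(xs)

    return {"Ly": mean_var(ly), "logLz": mean_var(log_lz)}
```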
Measurement Model<br />Incorporating Global Image Features: Uses the gist descriptor to measure the likely presence of an object in an image (scene)<br />Integrating Local Detector Outputs: Candidate windows are taken from a baseline object detector, the likelihood of each being a correct detection is learned from the training set, and the expected location of each object is obtained.<br />
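For instance, the gist-based presence term can be sketched as a simple logistic model. The per-category `weights` and `bias` are hypothetical parameters assumed to be fit on the labeled training set; the paper does not prescribe this exact functional form.

```python
import math

def presence_probability(gist, weights, bias):
    """Hypothetical logistic model for p(b_i = 1 | g), where g is the
    global gist descriptor of the image. weights/bias are assumed
    per-category parameters learned from training images."""
    score = bias + sum(w * x for w, x in zip(weights, gist))
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid of the linear score
```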
Alternating Inference<br />Given the gist g, the candidate window locations W, and their scores s, the algorithm infers the object presences b, the correct detections c, and the expected object locations L by alternately optimizing over one set of latent variables while holding the others fixed.<br />
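A toy sketch of this alternating scheme is shown below. The additive scoring rule, thresholds, and update order are invented for illustration and stand in for the paper's tree-structured posterior; only the coordinate-ascent pattern (update b given c, then c given b, until convergence) mirrors the algorithm.

```python
def alternating_inference(scores, prior, n_iters=10):
    """Toy coordinate ascent (hypothetical scoring, not the paper's model).
    scores: dict window -> (category, detector score)
    prior:  dict category -> prior log-odds of presence
    Alternates between object presences b and correct-detection flags c."""
    cats = {cat for cat, _ in scores.values()}
    b = {cat: True for cat in cats}   # presence of each category
    c = {w: True for w in scores}     # correctness of each window
    for _ in range(n_iters):
        # update b given c: a category is present if its prior plus the
        # evidence from its currently accepted windows is positive
        new_b = {}
        for cat in cats:
            evidence = sum(s for w, (k, s) in scores.items()
                           if k == cat and c[w])
            new_b[cat] = prior[cat] + evidence > 0
        # update c given b: a window is a correct detection only if its
        # category is inferred present and its own score is positive
        new_c = {w: new_b[k] and s > 0 for w, (k, s) in scores.items()}
        if new_b == b and new_c == c:
            break                      # converged
        b, c = new_b, new_c
    return b, c
```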
Learning the Dependency Structure<br />The dependency structure among objects is learned from a set of fully labeled images using the Chow-Liu algorithm.<br /><ul><li> It computes the empirical mutual information of all pairs of variables (using their sample values in the labeled images)
It then finds the maximum weight spanning tree with edge weights equal to the mutual information
A root node is arbitrarily selected once the tree structure is learned.</li></ul>
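The three steps above can be sketched as follows: empirical mutual information is computed from binary presence samples, and the maximum weight spanning tree is found with Kruskal's algorithm. The dict-of-labels sample format is an assumption for illustration.

```python
import math
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information between two binary variables,
    estimated from paired samples."""
    n = len(xs)
    joint, px, py = {}, {}, {}
    for x, y in zip(xs, ys):
        joint[(x, y)] = joint.get((x, y), 0) + 1
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
    # sum_xy p(x,y) log[ p(x,y) / (p(x) p(y)) ]
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

def chow_liu_tree(samples):
    """samples: list of dicts {category: 0/1}, one per labeled image.
    Returns the edges of the maximum weight spanning tree with edge
    weights equal to the pairwise mutual information."""
    cats = sorted(samples[0])
    weights = {(a, b): mutual_information([s[a] for s in samples],
                                          [s[b] for s in samples])
               for a, b in combinations(cats, 2)}
    # Kruskal's algorithm: greedily add the heaviest edge that does
    # not create a cycle (tracked with union-find)
    parent = {c: c for c in cats}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    edges = []
    for (a, b), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            edges.append((a, b))
    return edges
```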
Results<br />Performance on PASCAL 07<br />Object Recognition Performance<br />
Results<br />Performance on SUN 09<br />Image Annotation Performance<br />
Detecting Images out of Context<br /><ul><li> Database: 26 images with one or more objects out of context
All objects have ground-truth object labels, except for the one under test.
The context model correctly identifies the most unexpected object in the scene.</li></ul>Conclusion<br /><ul><li> The new dataset SUN 09 contains richer contextual information than PASCAL 07, which was originally designed for training object detectors.
The paper demonstrates that the contextual information learned from SUN 09 significantly improves the accuracy of object recognition tasks, and can even be used to identify out-of-context scenes.
The tree-based context model enables efficient and coherent modeling of the regularities among object categories, and easily scales to capture dependencies among over 100 object categories.</li></ul>