Visual Saliency: Learning to Detect Salient Objects


    1. Visual Attention: Detecting Saliency on Images
       Vicente Ordonez
       Department of Computer Science
       State University of New York, Stony Brook, NY 11790
    2. I will be working mainly on the following paper:
       Learning to Detect a Salient Object. T. Liu, J. Sun, N. Zheng, X. Tang, H. Shum (Xi'an Jiaotong University and Microsoft Research Asia), CVPR 2007.
       http://research.microsoft.com/en-us/um/people/jiansun/papers/SalientDetection_CVPR07.pdf
    3. What is Saliency? What is Visual Attention?
       "Everyone knows what attention is..." —William James, 1890
    4. This is a problem of…
       • Arbitrary object detection?
       • Background / foreground segmentation?
       • Modeling visual attention?
    5. The Method
       Features:
       • Multiscale contrast (Done!)
       • Center-surround histogram (Done!)
       • Color spatial distribution (Done!)
       Supervised learning using Conditional Random Fields to determine the parameters that combine the features above (Done!). [I will use a labeled dataset of 5,000 images provided by Microsoft Research Asia.]
    6. Multiscale Contrast Function
       Generate the Gaussian pyramid for the input image. For each level in the pyramid:
       • Do Gaussian blurring
       • Do resampling
       I'm using a 6-level Gaussian pyramid for each RGB channel. (A sketch of the pyramid construction follows this slide.)
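A quick sketch of this step (assuming Python with OpenCV; cv2.pyrDown performs the Gaussian blur and the factor-of-2 resampling in a single call, matching the two bullets above):

        # Minimal sketch: build a 6-level Gaussian pyramid with OpenCV.
        import cv2

        def gaussian_pyramid(image, levels=6):
            """Return a list of progressively blurred and downsampled images."""
            pyramid = [image]
            for _ in range(levels - 1):
                # pyrDown = Gaussian blur followed by dropping every other row/column
                pyramid.append(cv2.pyrDown(pyramid[-1]))
            return pyramid

        # Usage (per RGB channel or on the full color image):
        # img = cv2.imread("input.jpg"); levels = gaussian_pyramid(img)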
    7. What a Gaussian pyramid looks like (figure from David Forsyth)
    8. Multiscale Contrast Function
       • Generate contrast maps for each level of the pyramid.
       • Sum all of them to produce the final multiscale contrast map.
       These two steps are captured by the contrast formula in the paper; a hedged sketch of the computation follows this slide.
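For reference, the per-pixel multiscale contrast is (per the paper, roughly) the sum, over pyramid levels, of squared differences between that pixel and the pixels in a small window around it. The sketch below is simplified to grayscale rather than per-RGB-channel, and the 9x9 window size is an assumption:

        # Hedged sketch of a multiscale contrast map (grayscale simplification).
        import cv2
        import numpy as np

        def multiscale_contrast(image, levels=6, window=9):
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
            h, w = gray.shape
            total = np.zeros((h, w), np.float32)
            level_img = gray
            for _ in range(levels):
                # Sum of squared differences to every pixel in the window:
                #   sum_x' (I(x) - I(x'))^2 = n*I(x)^2 - 2*I(x)*sum I(x') + sum I(x')^2
                n = window * window
                s1 = cv2.boxFilter(level_img, -1, (window, window), normalize=False)
                s2 = cv2.boxFilter(level_img ** 2, -1, (window, window), normalize=False)
                contrast = n * level_img ** 2 - 2 * level_img * s1 + s2
                total += cv2.resize(contrast, (w, h))      # back to the original size
                level_img = cv2.pyrDown(level_img)         # next pyramid level
            return total / (total.max() + 1e-8)            # normalized contrast map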
    9. Input image
    10. Contrast maps
    11. Contrast maps (panels: original image; contrast maps at levels 1, 4, and 6)
    12. Multiscale Contrast Map Output
    13. Center Surround Histogram Feature
        • For each pixel in the image
        • For each possible rectangle with a reasonable size and aspect ratio
        • Create a surrounding rectangle and calculate the histograms of the rectangle and of the surrounding area.
        • Pick and record the rectangle that maximizes the Chi-Square distance between the two histograms, and also record that distance.
        (A brute-force sketch of this search appears below.)
    16. Center Surround Histogram Feature
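A hedged, brute-force sketch of this search for a single pixel. The rectangle sizes, aspect ratios, and 8-bin RGB histograms are illustrative assumptions, and the surrounding histogram is simplified to include the inner region; the real implementation needs the integral-histogram trick from the next slide to run this over every pixel:

        import numpy as np

        def chi_square(h1, h2, eps=1e-10):
            """Chi-Square distance between two normalized histograms."""
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

        def color_hist(patch, bins=8):
            """Normalized 8x8x8 RGB histogram of an image patch."""
            hist, _ = np.histogramdd(patch.reshape(-1, 3),
                                     bins=(bins, bins, bins), range=[(0, 256)] * 3)
            h = hist.ravel().astype(np.float64)
            return h / (h.sum() + 1e-10)

        def best_center_surround(image, cx, cy, sizes=(32, 48, 64), ratios=(0.5, 1.0, 2.0)):
            H, W = image.shape[:2]
            best_dist, best_rect = 0.0, None
            for s in sizes:
                for r in ratios:
                    rw, rh = int(s * r), s
                    x0, y0 = max(cx - rw // 2, 0), max(cy - rh // 2, 0)
                    x1, y1 = min(x0 + rw, W), min(y0 + rh, H)
                    # surrounding rectangle: a band of roughly the same area around the inner one
                    sx0, sy0 = max(x0 - rw // 2, 0), max(y0 - rh // 2, 0)
                    sx1, sy1 = min(x1 + rw // 2, W), min(y1 + rh // 2, H)
                    d = chi_square(color_hist(image[y0:y1, x0:x1]),
                                   color_hist(image[sy0:sy1, sx0:sx1]))
                    if d > best_dist:
                        best_dist, best_rect = d, (x0, y0, x1, y1)
            return best_dist, best_rect   # recorded distance and most distinctive rectangle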
    17. Center Surround Histogram Feature
        The algorithm as described before is computationally expensive…
        It requires a technique called the Integral Histogram, which allows fast computation of the histogram of any rectangular region of an image.
        The technique was introduced in "Integral Histogram: A Fast Way to Extract Histograms in Cartesian Spaces" by Fatih Porikli, Mitsubishi Electric Research Labs, CVPR 2005.
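A minimal sketch of the integral-histogram idea for a single quantized channel (the paper works with color histograms, but the principle is the same): one cumulative-sum table per bin lets the histogram of any rectangle be read off with four table lookups.

        import numpy as np

        def build_integral_histogram(gray, bins=16):
            """One 2-D cumulative-sum table per quantized intensity bin."""
            q = np.minimum((gray.astype(np.int64) * bins) // 256, bins - 1)
            onehot = (q[..., None] == np.arange(bins)).astype(np.int64)   # H x W x bins
            ih = onehot.cumsum(axis=0).cumsum(axis=1)
            # pad with a zero row/column so rectangle queries need no edge cases
            return np.pad(ih, ((1, 0), (1, 0), (0, 0)))

        def rect_histogram(ih, x0, y0, x1, y1):
            """Histogram of the pixels in [y0, y1) x [x0, x1), via 4 lookups."""
            return ih[y1, x1] - ih[y0, x1] - ih[y1, x0] + ih[y0, x0]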
    18. Center Surround Histogram Feature
        Use the Chi-Square distance map and the per-pixel map of most salient rectangle regions to generate the center-surround histogram feature, using the formula below.
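From the Liu et al. paper, the center-surround histogram feature at a pixel x is, roughly, a Gaussian-falloff-weighted sum of the recorded Chi-Square distances of all rectangles that contain x (the exact normalization is in the paper):

        f_h(x) ∝ Σ_{x' : x ∈ R*(x')}  w_{xx'} · χ²( R*(x'), R_S*(x') ),    with  w_{xx'} = exp( −‖x − x'‖² / (2σ_{x'}²) )

where R*(x') is the most distinctive rectangle recorded at pixel x', R_S*(x') is its surrounding rectangle, and σ_{x'} is proportional to the size of R*(x').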
    19. Center Surround Histogram
        Results using my implementation (15.2 s, size = 245x384) vs. results reported in the paper
    20. Center Surround Histogram
        Results using my implementation (13.6 s, size = 247x346) vs. results reported in the paper
    21. Center Surround Histogram
        Results using my implementation (10.2 s, size = 248x277)
    22.–32. More Results (eleven image-only slides of example outputs)
    33. Color Spatial Distribution
    34. Color Spatial Distribution
        • Make an initial clustering of the colors in the image using k-means.
        • Further refine the clusters with a Gaussian Mixture Model whose parameters are estimated with the EM algorithm.
        I am using 5 clusters (5 colors) per image, and the results look similar to those presented in the paper, with an execution time of around 17 seconds per image. (A sketch of this clustering step follows this slide.)
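A hedged sketch of the clustering step (assuming scikit-learn; GaussianMixture initializes its EM run with k-means by default, which matches the two-step description above):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def color_clusters(image_rgb, n_clusters=5):
            """Assign every pixel to one of n_clusters color clusters via a GMM."""
            h, w, _ = image_rgb.shape
            pixels = image_rgb.reshape(-1, 3).astype(np.float64)
            gmm = GaussianMixture(n_components=n_clusters, covariance_type="full",
                                  init_params="kmeans", random_state=0)
            labels = gmm.fit_predict(pixels)       # EM fit, then most likely component
            return labels.reshape(h, w), gmm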
    35. Color Spatial Distribution
        • For each cluster, calculate the variance of the horizontal positions of its pixels, and likewise for the vertical positions. Sum the two variances and weight more heavily the clusters with lower spatial variance.
        • Penalize clusters whose pixels lie mostly far from the center of the image.
        (A sketch of this weighting follows this slide.)
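A hedged sketch of the weighting step, operating on the per-pixel cluster labels from the previous sketch; the particular normalizations are my own assumptions, not the paper's:

        import numpy as np

        def spatial_distribution_weights(labels, n_clusters=5):
            """Per-cluster weight: high for compact, centred color clusters."""
            h, w = labels.shape
            ys, xs = np.mgrid[0:h, 0:w]
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            weights = np.zeros(n_clusters)
            for c in range(n_clusters):
                mask = labels == c
                if not mask.any():
                    continue
                # sum of horizontal and vertical position variances of the cluster
                var = xs[mask].var() + ys[mask].var()
                # mean distance of the cluster's pixels from the image centre
                center_dist = np.mean(np.hypot(xs[mask] - cx, ys[mask] - cy))
                # lower spatial variance and lower centre distance -> higher weight
                weights[c] = (1.0 - var / (h * h + w * w)) * (1.0 - center_dist / np.hypot(cx, cy))
            return weights

        # Per-pixel feature map: saliency = weights[labels]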
    36.–43. Color Spatial Distribution (eight slides of example feature maps)
    44. Combine Features Together
    45. Conditional Random Field Training and Inference
        Accelerated Training of Conditional Random Fields with Stochastic Meta-Descent. S. Vishwanathan, N. Schraudolph, M. Schmidt, K. Murphy. ICML 2006 (International Conference on Machine Learning).
        I did the training using the toolbox from the above paper:
        http://people.cs.ubc.ca/~murphyk/Software/CRF/crf.html
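For intuition only: the CRF learns how to weight the three feature maps (plus a pairwise smoothness term between neighbouring pixels). The stand-in below is just a fixed weighted sum followed by a threshold, not the CRF inference actually used in this project, and the weights are placeholders rather than learned parameters:

        import numpy as np

        def combine_features(contrast, center_surround, color_spatial,
                             weights=(0.33, 0.33, 0.34), threshold=0.5):
            """Crude weighted-sum stand-in for the learned CRF combination."""
            maps = [contrast, center_surround, color_spatial]
            norm = [m / (m.max() + 1e-8) for m in maps]           # scale each map to [0, 1]
            saliency = sum(w * m for w, m in zip(weights, norm))  # linear combination
            return saliency, saliency > threshold                 # saliency map and binary mask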
    46.–49. Mask outputs using CRF inference
        (Panels per example: input, multiscale contrast map, center-surround histogram, color spatial variance; input, combined features, ground truth)
    50. Precision / Recall obtained
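For reference, precision and recall here compare the predicted binary saliency mask against the labeled ground truth; a minimal sketch, assuming both are binary numpy masks:

        import numpy as np

        def precision_recall(pred_mask, gt_mask):
            """Precision and recall of a binary saliency mask vs. ground truth."""
            pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
            tp = np.logical_and(pred, gt).sum()
            precision = tp / max(pred.sum(), 1)   # fraction of predicted pixels that are correct
            recall = tp / max(gt.sum(), 1)        # fraction of ground-truth pixels recovered
            return precision, recall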
    51. Some Conclusions
        • The visual-feature computations from the original research paper have been replicated to a considerable extent.
        • The Conditional Random Field framework used in this project performed well for this task.
        • The center-surround histogram map turned out to be the feature that gave the highest precision.
        • Computing the individual features takes on the order of several seconds per image.
