
CMSC828B Multidimensional Spectral Hashing


  1. Multidimensional Spectral Hashing. Yair Weiss, Rob Fergus, and Antonio Torralba. Presenter: Ruofei Du, 10/25/2014
  2. Motivation: Data Deluge, Sublinear-Time Search, Hashing, Object Retrieval / Recognition, Large-Scale Search
  3. Outline: Multidimensional Spectral Hashing ● Introduction o Semantic Hashing o Spectral Hashing o Motivation of MDSH ● Formulation o Hamming affinity o Curse of dimensionality o Kernel trick ● Results o Toy data, CIFAR, 1M Tiny Images
  4. Introduction: Semantic Hashing [Salakhutdinov & Hinton, 2007]
  5. Introduction: Semantic Hashing
  6. Introduction: Advantages ● Fast query speed o A hashing scheme achieves constant or sublinear search time. o Even exhaustive search becomes acceptable, because comparing binary codes is cheap (as sketched below). ● Reduced storage cost
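A minimal sketch (mine, not from the slides) of why comparing binary codes is cheap: the Hamming distance between two codes packed into machine words is one XOR plus a population count.

```python
def hamming_distance(code_a: int, code_b: int) -> int:
    """Hamming distance between two binary codes packed into integers:
    XOR marks the differing bit positions, popcount counts them."""
    return (code_a ^ code_b).bit_count()  # int.bit_count() needs Python 3.10+

# The 8-bit codes 10110100 and 10011100 differ in exactly two positions.
assert hamming_distance(0b10110100, 0b10011100) == 2
```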
  7. Introduction: What makes a good hashing code? o It is easily computed for a novel input. o It requires a small number of bits to code the full dataset. o It maps similar items to similar binary codewords.
  8. Introduction: Locality-Sensitive Hashing [Gionis, Indyk & Motwani, 1999]. Every bit in the code is computed by a random linear projection followed by a random threshold (see the sketch below). As the number of bits increases, precision improves (and approaches one with many bits), but the rate of convergence can be very slow.
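A minimal Python sketch of the scheme just described, assuming Gaussian random projections and thresholds; function and variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_lsh_hasher(dim: int, n_bits: int):
    """Build an LSH function: each bit is a random linear projection
    of the input compared against a random threshold."""
    projections = rng.normal(size=(n_bits, dim))  # one random direction per bit
    thresholds = rng.normal(size=n_bits)          # one random threshold per bit
    def hash_fn(x: np.ndarray) -> np.ndarray:
        return (projections @ x > thresholds).astype(np.uint8)
    return hash_fn

hasher = make_lsh_hasher(dim=128, n_bits=32)
code = hasher(rng.normal(size=128))  # 32-bit binary code, e.g. [1, 0, 1, ...]
```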
  9. Introduction: Try Hamming distance ≈ semantic distance ● Random Projections “LSH” (Andoni and Indyk 06) ● Deep Neural Network “DNN” (Salakhutdinov and Hinton 07) ● SH: “Spectral Hashing” (Weiss et al. 08) ● USPLH (Wang, Kumar, Chang 2010) ● BRE (Kulis and Darrell 2009) ● ITQ (Gong and Lazebnik 2011) ● MLH (Norouzi and Fleet 2011) ● MDSH (Weiss et al. 2012)
  10. Introduction: Spectral Hashing [Weiss, Y., NIPS 2008]
  11. Introduction: Spectral Hashing [Weiss, Y., NIPS 2008] ● Assumption o Points are embedded in Euclidean space ● Research question o How to binarize so that Hamming distance approximates Euclidean distance? (The slide annotates the pieces of the objective: Hamming distance, binary code, balance property, independence property; the objective is reconstructed below.)
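The formulas on this slide did not survive extraction; the following is the SH objective as given in Weiss et al. (NIPS 2008), matching the four annotations above, with affinity W_ij = exp(-||x_i - x_j||^2 / ε^2):

```latex
\begin{aligned}
\min_{\{y_i\}}\ & \sum_{ij} W_{ij}\, \lVert y_i - y_j \rVert^2
    && \text{(sum of Hamming distances between codewords)} \\
\text{s.t.}\ & y_i \in \{-1, 1\}^k
    && \text{(binary code)} \\
& \textstyle\sum_i y_i = 0
    && \text{(balance: each bit is on half the time)} \\
& \tfrac{1}{n} \textstyle\sum_i y_i y_i^{\top} = I
    && \text{(independence: bits are uncorrelated)}
\end{aligned}
```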
  12. Introduction: Graph Partitioning
  13. Introduction: Spectral Relaxation. Solution: take the k eigenfunctions of the weighted Laplace-Beltrami operator with the smallest eigenvalues, then threshold the eigenfunctions to get the hashing codes (a closed form for the uniform case is given below).
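For data uniform on an interval [a, b], the one-dimensional eigenfunctions and eigenvalues have a closed form; this is a reconstruction of the analytic solution from Weiss et al. (2008), not text from the slide:

```latex
% One-dimensional case, data uniform on [a, b]:
\Phi_m(x) = \sin\!\Big( \frac{\pi}{2} + \frac{m\pi}{b - a}\, x \Big),
\qquad
\lambda_m = 1 - e^{-\frac{\varepsilon^2}{2} \left| \frac{m\pi}{b - a} \right|^2 }
% The m-th hash bit thresholds the eigenfunction at zero:
% y_m(x) = \operatorname{sign}\big( \Phi_m(x) \big)
```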
  14. Introduction: Eigenfunctions ● The eigenfunctions of the continuous weighted Laplacian have an outer-product form (see below). ● The outer-product eigenfunctions are discarded, regardless of their eigenvalues, to keep the bits uncorrelated.
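Concretely, for a separable distribution the two-dimensional eigenfunctions factor, and thresholding a product eigenfunction yields a bit that is fully determined by existing bits:

```latex
\Phi_{m\ell}(x_1, x_2) = \Phi_m(x_1)\, \Phi_\ell(x_2)
% Since sign(ab) = sign(a) sign(b):
\operatorname{sign}\big(\Phi_{m\ell}(x_1, x_2)\big)
  = \operatorname{sign}\big(\Phi_m(x_1)\big)\,
    \operatorname{sign}\big(\Phi_\ell(x_2)\big)
% The "mixed" bit is the product of two single-dimension bits, so it
% carries no independent information and SH drops it.
```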
  15. Introduction: Spectral Hashing [Weiss, Y., NIPS 2008]
  16. Introduction: Spectral Hashing [Weiss, Y., NIPS 2008] ● Pros o Efficient ● Cons o Strong assumption: data generated from a multidimensional uniform distribution. o Cannot be extended to larger samples, since it requires computing the covariance matrix. o The outer-product eigenfunctions are discarded to keep the bits uncorrelated. o Poor performance when approximating far-away neighbors.
  17. Evaluation: Hashing Function
  18. Introduction: Motivation: which algorithm is better? ● Same algorithm, same data ● Different evaluation: when the “good neighbor” threshold T is 4 times larger… ● For the smaller neighborhood, SH greatly outperforms the other techniques, while for the larger neighborhood it is the worst algorithm.
  19. Introduction: Motivation: which algorithm is better? ● Spectral Hashing does a very bad job of approximating the distance to far-away points and collapses them all to a similar distance. ● The other methods do much better at capturing far distances, but worse at small distances. ● Which is preferable is application dependent!
  20. Formulation: Hamming Affinity ● Why Hamming affinity? o For fixed-length ±1 codes it is an affine function of Hamming distance, hence equivalent to it (see the reconstruction below). ● SH o Performance can degrade with more bits. o It discards the “mixed bits” that are deterministic functions of other bits. ● MDSH o Code affinity should approximate image similarity. o The approximation improves with more bits. o Affinity is a nonlinear function of distance.
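A reconstruction of the MDSH formulation this slide sketches (Weiss et al., ECCV 2012): choose binary codes and per-bit weights so that the weighted Hamming affinity reproduces the input affinity W. The final identity shows why affinity and Hamming distance are interchangeable for unweighted codes.

```latex
% Codes Y in {-1,1}^{n x k}, per-bit weights Lambda = diag(lambda_1, ..., lambda_k):
\min_{Y,\,\Lambda}\ \sum_{ij} \Big( W_{ij}
    - \sum_{m=1}^{k} \lambda_m\, y_i(m)\, y_j(m) \Big)^2
\;=\; \min_{Y,\,\Lambda}\ \lVert W - Y \Lambda Y^{\top} \rVert_F^2
% For unweighted +/-1 codes, affinity is affine in Hamming distance:
% y_i^T y_j = k - 2\, d_H(y_i, y_j)
```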
  21. Formulation: Inefficient Multidimensional Spectral Hashing. The original SH used only single-dimension eigenfunctions during retrieval; here the code is expanded to include the outer-product eigenfunctions as well.
  22. Formulation: Efficient Multidimensional Spectral Hashing
  23. Formulation: Efficient Multidimensional Spectral Hashing
  24. Results: Toy Data
  25. Results: CIFAR
  26. Results: 1M Tiny Images
  27. Questions o What is the best “good neighbor” threshold?  Could the hashing code be expanded for different “good neighbor” thresholds, so that a single training run could serve different applications? E.g., 101010100 | 0001100 | 0010101 for farther neighbors. o Complexity  The time complexity is still that of spectral hashing, with extra computation for the Hamming-affinity weight matrix.  Is there any comparison of the time consumed by the two? o How can a wrong hash code be corrected?
  28. Kernel Trick: Efficient Multidimensional Spectral Hashing. Example sign patterns for a 4-bit code: (1, 1, 1, 1), (-1, 1, 1, 1), (1, 1, -1, -1), (-1, 1, -1, -1). During learning we try all codes that are Hamming distance one or two from the all-ones codeword, then choose the K + 1 directions that have the highest affinity to the all-ones codeword. These directions are subsequently used for all queries, so retrieval still takes a total of K + 1 lookups (d is the number of bits in the code; see the sketch below).
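A hypothetical sketch of the direction-selection step just described, under my own assumptions about the scoring (the `weights` array and the ranking by weighted affinity to the all-ones codeword are illustrative, not the paper's exact procedure):

```python
from itertools import combinations
import numpy as np

def select_directions(weights: np.ndarray, K: int):
    """Enumerate all +/-1 codewords within Hamming distance 2 of the all-ones
    codeword, score each by weighted affinity to all-ones, keep the top K+1."""
    d = len(weights)                        # number of bits in the code
    all_ones = np.ones(d)
    candidates = [all_ones.copy()]
    for i in range(d):                      # Hamming distance 1: flip one sign
        c = all_ones.copy(); c[i] = -1
        candidates.append(c)
    for i, j in combinations(range(d), 2):  # Hamming distance 2: flip two signs
        c = all_ones.copy(); c[i] = c[j] = -1
        candidates.append(c)
    # Weighted affinity of each candidate to the all-ones codeword.
    scores = [float(weights @ c) for c in candidates]
    order = np.argsort(scores)[::-1]        # highest affinity first
    return [candidates[i] for i in order[:K + 1]]  # the K+1 lookup directions

directions = select_directions(weights=np.array([0.9, 0.7, 0.5, 0.3]), K=3)
```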
  29. Introduction: Spectral Relaxation [Weiss, Y., NIPS 2008]. Take the k eigenvectors of D − W with the smallest eigenvalues, then threshold the eigenvectors to get the hashing codes. (Spectral relaxation of graph partitioning.)
  30. Formulation: Hamming Affinity
