ICCV2009: Max-Margin Additive Classifiers for Detection
Speaker notes
  • Thank you. Good afternoon, everybody. I am going to present ways to train additive classifiers efficiently. This work is part of an ongoing collaboration with Alex Berg.
  • For any classification task the two main things we care about are accuracy and evaluation time. Especially for object detection, where one evaluates a classifier on thousands of windows per image, the evaluation time becomes very important. In the past, linear SVMs, though relatively less accurate, were preferred over kernel SVMs for real-time applications.
  • In our CVPR 08 paper…
  • We identified a subset of non-linear kernels, called additive kernels, that are used in many current object recognition tasks. These kernels have the special form that they decompose as a sum of kernels over the individual dimensions.
  • And showed that they can be evaluated efficiently. This makes it possible to use more accurate classifiers with essentially no loss in speed. In fact, more than half of this year's submissions to the PASCAL VOC object detection challenge use variants of additive kernels.
  • In this talk we are going to discuss additive models in general, where the classifier decomposes over dimensions. This may seem restrictive, but it is a useful class of classifiers that is strictly more general than linear classifiers. In fact, if the underlying kernel for the SVM is additive then the classifier is also additive.
  • The picture looks similar to the one for evaluation time… it is important to note that this was not the case even somewhat recently…
  • As mentioned before, our previous work identified a subset of non-linear classifiers with an additive structure and showed they could be evaluated efficiently, but unfortunately did not address improving efficiency for training…
  • This paper addresses efficient training for additive classifiers, developing training methods that are about as efficient as the best methods for training linear classifiers. We also demonstrate the accuracy advantages on some popular datasets.
  • The idea of support vector machines is to find a separating hyperplane after mapping the data into a high-dimensional space using a kernel. The final classifier is of course a hyperplane in a very high-dimensional space, but it can be expressed using only the kernel function via the so-called kernel trick. If the embedded space is low dimensional, one can take advantage of very fast linear SVM training algorithms, which scale linearly with training data as opposed to the quadratic growth for kernel SVMs.
  • Unfortunately these embeddings are often high dimensional. Our approach can be seen as finding embeddings that are both sparse and accurate, so that we can use the very best of the linear SVM training algorithms for training the classifier. In fact, we would ideally like the number of non-zero entries in the embedded features to be a small multiple of the non-zero entries in the input features.
  • A key idea of the paper is to realize that additive kernels are easy to embed, since the final embedding is just a concatenation of the individual per-dimension embeddings. As an example, take the min kernel (the histogram intersection kernel), which sums min(x_i, y_i) over the dimensions. A well-known embedding of the min kernel for integers is the unary encoding, where each number is represented in unary. For non-integers one may approximate this by quantization.
  • Transcript of "ICCV2009: Max-Margin Additive Classifiers for Detection"

    1. Max-Margin Additive Classifiers for Detection
       Subhransu Maji & Alexander Berg
       University of California at Berkeley / Columbia University
       ICCV 2009, Kyoto, Japan
    2–6. Accuracy vs. Evaluation Time for SVM Classifiers
       [Build sequence of one plot: accuracy vs. evaluation time, with points for Linear Kernel (fast), Non-linear Kernel (slow), and Additive Kernel; "Our CVPR 08" brings additive-kernel SVMs close to linear-kernel evaluation time.]
       Made it possible to use SVMs with additive kernels for detection.
    7. Additive Classifiers
       Much work already uses them!
       SVMs with additive kernels are additive classifiers.
       Histogram-based kernels: histogram intersection, chi-squared kernel
       Pyramid Match Kernel (Grauman & Darrell, ICCV’05)
       Spatial Pyramid Match Kernel (Lazebnik et al., CVPR’06)
       …
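To make the additive structure concrete, here is a small Python sketch (mine, not from the talk) of the histogram intersection kernel: the kernel value is a sum of independent one-dimensional min kernels, which is exactly the additive form described above.

```python
import numpy as np

def hist_intersection(x, y):
    # K(x, y) = sum_i min(x_i, y_i): a sum of 1-D kernels,
    # hence an additive kernel.
    return np.minimum(x, y).sum()

x = np.array([0.2, 0.5, 0.3])   # e.g., L1-normalized histograms
y = np.array([0.1, 0.6, 0.3])
print(hist_intersection(x, y))  # 0.1 + 0.5 + 0.3 = 0.9
```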
    8–14. Accuracy vs. Training Time for SVM Classifiers
       [Build sequence of one plot: accuracy vs. training time. Through the 1990s even linear training was relatively slow; today fast linear solvers exist (e.g., cutting plane, stochastic gradient descent, dual coordinate descent), while non-linear training remains slow. Our CVPR 08 work did not speed up training of additive classifiers (✗); this paper does.]
       Makes it possible to train additive classifiers very fast.
    15. Summary
       Additive classifiers are widely used and can provide better accuracy than linear.
       Our CVPR 08: SVMs with additive kernels are additive classifiers and can be evaluated in O(#dim), the same as linear.
       This work: additive classifiers can be trained directly, as efficiently (up to a small constant) as the best approaches for training linear classifiers.
       An example…
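The O(#dim) evaluation claim can be made concrete: once each one-dimensional function f_i of the additive classifier is tabulated, evaluation is one lookup per dimension, independent of the number of support vectors. A minimal sketch with piecewise-constant tables (my simplification; the CVPR 08 work uses piecewise approximations of each f_i):

```python
import numpy as np

def additive_eval(x, tables, n_bins):
    # f(x) = sum_i f_i(x_i): one table lookup per input dimension,
    # so evaluation is O(#dim), independent of the number of support vectors.
    idx = np.minimum((x * n_bins).astype(int), n_bins - 1)
    return sum(tables[i][idx[i]] for i in range(len(x)))

# Hypothetical usage: 5 dimensions, each f_i tabulated at 100 points.
tables = [np.random.randn(100) for _ in range(5)]
print(additive_eval(np.random.rand(5), tables, 100))
```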
    16–17. Support Vector Machines
       [Diagram: Input Space mapped to Embedded Space by a kernel function.]
       • Inner product in the embedded space
       • Can learn non-linear boundaries in input space
       Classification function via the kernel trick
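For reference, the standard kernel-SVM classification function behind the "kernel trick" bullet, with dual coefficients α_i, labels y_i, and support vectors x_i (the usual textbook form, not copied from the slide):

```latex
f(x) \;=\; \operatorname{sign}\!\Big(\sum_{i=1}^{n} \alpha_i\, y_i\, K(x_i, x) \;+\; b\Big)
```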
    18. Embeddings…
       These embeddings can be high dimensional (even infinite).
       Our approach is based on embeddings that approximate kernels.
       We’d like the approximation to be as accurate as possible.
       We are going to use fast linear classifier training algorithms on the embedded features, so sparseness is important.
    19. Key Idea: Embedding an Additive Kernel
       Additive kernels are easy to embed: just embed each dimension independently.
       Linear embedding for the min kernel for integers (unary encoding).
       For non-integers, can approximate by quantizing.
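A minimal sketch of the unary embedding mentioned in the speaker notes: an integer x maps to x ones followed by zeros, so the inner product of two such encodings equals min(x, y) exactly; non-integers are handled by quantizing first.

```python
import numpy as np

def unary_encode(x, n_max):
    # x ones followed by (n_max - x) zeros;
    # dot(unary_encode(x), unary_encode(y)) == min(x, y).
    e = np.zeros(n_max)
    e[:x] = 1.0
    return e

x, y, n_max = 3, 5, 8
assert unary_encode(x, n_max) @ unary_encode(y, n_max) == min(x, y)  # == 3
# For non-integer features, scale to [0, n_max] and round (quantize);
# the next slides address the resulting quantization error and sparsity.
```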
    20. Issues: Embedding Error
       Quantization leads to large errors; a better encoding is needed.
       [Plot comparing encodings of two values x and y.]
    21. Issues: Sparsity
       Represent with sparse values.
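One way to get both a better approximation and sparsity, sketched here under my own assumptions (the paper's exact encodings differ in detail): spread each value over the two nearest bins by linear interpolation, so every input dimension contributes at most two non-zero entries.

```python
import numpy as np

def sparse_encode(x, n_bins):
    # Encode x in [0, 1] with at most two non-zeros:
    # linear interpolation between the two nearest bin positions.
    e = np.zeros(n_bins)
    pos = x * (n_bins - 1)      # fractional bin position
    lo = int(pos)
    frac = pos - lo
    e[lo] = 1.0 - frac
    if lo + 1 < n_bins:
        e[lo + 1] = frac
    return e

print(sparse_encode(0.37, 10))  # two adjacent non-zeros that sum to 1
```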
    22. Linear vs. Encoded SVMs
       Linear SVM objective (solve with LIBLINEAR): the standard hinge-loss objective, min_w λ/2‖w‖² + Σᵢ max(0, 1 − yᵢ wᵀxᵢ)
       Encoded SVM objective (not practical): [objective shown on slide]
    23. Linear vs. Encoded SVMs
       Linear SVM objective (solve with LIBLINEAR): [objective shown on slide]
       Encoded SVM modified (custom solver): [objective shown on slide]
       Encourages smooth functions; closely approximates the min kernel SVM.
       Custom solver: PWLSGD (see paper)
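As a hedged illustration of what a custom solver for such a modified objective could look like (not the paper's PWLSGD; the smoothness term below, squared differences of adjacent bin weights within each dimension, is my stand-in for the regularizer described on the slide):

```python
import numpy as np

def sgd_step(w, x_enc, y, lr, lam, n_bins):
    """One stochastic subgradient step: hinge loss on an encoded example,
    plus a smoothness penalty on adjacent bin weights in each dimension.
    Illustrative stand-in, not the paper's PWLSGD."""
    if y * (w @ x_enc) < 1.0:          # hinge-loss subgradient
        w = w + lr * y * x_enc
    W = w.reshape(-1, n_bins)          # one row of bin weights per dimension
    d = np.diff(W, axis=1)             # adjacent-weight differences
    g = np.zeros_like(W)               # gradient of 0.5 * sum(d ** 2)
    g[:, :-1] -= d
    g[:, 1:] += d
    return (W - lr * lam * g).ravel()
```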
    24. Linear vs. Encoded SVMs
       Linear SVM objective (solve with LIBLINEAR): [objective shown on slide]
       Encoded SVM objective (solve with LIBLINEAR): [objective shown on slide]
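The appeal of this last variant is that it really is "a few lines of code plus a standard solver": encode every feature, concatenate the per-dimension encodings, and hand the result to a linear solver. A sketch using scikit-learn's LinearSVC (which is built on LIBLINEAR), reusing the hypothetical sparse_encode from the sparsity sketch above:

```python
import numpy as np
from sklearn.svm import LinearSVC

def encode_features(X, n_bins=100):
    # Concatenate the per-dimension encodings of each row into one vector
    # (sparse_encode is the illustrative encoder sketched earlier).
    return np.stack(
        [np.concatenate([sparse_encode(v, n_bins) for v in row]) for row in X]
    )

# X: (n_samples, n_dims) features scaled to [0, 1]; y: labels in {-1, +1}.
X = np.random.rand(200, 16)
y = np.where(np.random.rand(200) > 0.5, 1, -1)
clf = LinearSVC(C=1.0).fit(encode_features(X), y)   # linear SVM on encodings
```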
    25–30. Additive Classifier Choices
       [Build sequence of one diagram with two axes, Encoding and Regularization; accuracy increases along both while evaluation times stay similar.
       Simple encoding + standard linear solver (e.g., LIBLINEAR): a few lines of code.
       Smoothness regularization: custom solver.
       Exact kernel: standard solver (e.g., LIBSVM).
       The final build defines the classifier notations used in the experiments.]
    31. Experiments
       “Small” scale: Caltech 101 (Fei-Fei et al.)
       “Medium” scale: DC Pedestrians (Munder & Gavrila)
       “Large” scale: INRIA Pedestrians (Dalal & Triggs)
    32. Experiment: DC Pedestrians
       [Plot points (training time, accuracy): (3.18s, 89.25%), (1.86s, 88.80%), (363s, 89.05%), (2.98s, 85.71%), (1.89s, 72.98%)]
       100x faster: training time ~ linear SVM, accuracy ~ kernel SVM
       20,000 features, 656 dimensional; 100 bins for encoding; 6-fold cross-validation
    33. Experiment: Caltech 101
       [Plot points (training time, accuracy): (291s, 55.35%), (2687s, 56.49%), (102s, 54.8%), (90s, 51.64%), (41s, 46.15%)]
       10x faster, with a small loss in accuracy
       30 training examples per category; 100 bins for encoding; Pyramid HOG + Spatial Pyramid Match Kernel
    34–35. Experiment: INRIA Pedestrians
       [Plot points (training time, score): (140 mins, 0.95), (76s, 0.94), (27s, 0.88), (122s, 0.85), (20s, 0.82)]
       300x faster: training time ~ linear SVM, accuracy ~ kernel SVM; trains the detector in under 2 minutes
       SPHOG: 39,000 features, 2268 dimensional; 100 bins for encoding; cross-validation plots
    36. Take Home Messages
       Additive models are practical for large-scale data.
       Can be trained discriminatively:
         Poor man’s version: encode + linear SVM solver
         Middle man’s version: encode + custom solver
         Rich man’s version: min kernel SVM
       Embedding only approximates kernels: a small loss in accuracy, but up to 100x speedup in training time.
       Everyone should use it: see code on our websites (fast IKSVM from CVPR’08, encoded SVMs, etc.)
    37. Thank You