
- 1. Minimal Support Vector Machine (MSVM). Glenn Fung and Olvi L. Mangasarian, August 2000. Presented 2008-10-21 by Kuan-Chi-I.
- 2. Outline
  - Introduction
  - SVM
  - MSVM
  - Comparisons
  - Conclusion
- 3. Introduction
  - A method for selecting a small set of support vectors that determines a separating-plane classifier.
  - Useful for applications containing millions of data points.
- 4. SVM
  - A method for classification.
- 5. SVM (Linearly Separable Case)
- 6. SVM
  - Finding the maximum margin is equivalent to minimizing ½||w||².
  - The above problem can be transformed into a quadratic program with parameter ν > 0.
  - A: a real m×n matrix.
  - e: a column vector of ones of arbitrary dimension.
  - e′: the transpose of e.
  - y: nonnegative slack variables.
  - D: an m×m diagonal matrix with diagonal entries +1 or −1 (the class labels).
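The equation on this slide was an image in the original and is lost; the following is a hedged LaTeX reconstruction of the standard soft-margin quadratic program, using the slide's own symbols (A, D, e, y, w, γ, ν):

```latex
\min_{w,\gamma,y}\;\; \nu\, e'y + \tfrac{1}{2}\|w\|_2^2
\quad \text{s.t.} \quad D(Aw - e\gamma) + y \ge e, \;\; y \ge 0
```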
- 7. SVM
  - Written in individual component notation.
  - A_i: the i-th row vector of the matrix A.
- 8. SVM
  - x′w = γ + 1 bounds the class A＋ points.
  - x′w = γ − 1 bounds the class A － points.
  - γ: determines the planes' location relative to the origin.
  - w: the normal to the bounding planes.
  - The linear separating surface is the plane x′w = γ, midway between the two bounding planes.
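As a concrete illustration of how the separating plane classifies a point (not the paper's code; the values of w and γ below are made up for the example), a point x is assigned to class A＋ or A － by the sign of x′w − γ:

```python
# Classify a point against a linear separating plane x'w = gamma.
# The plane parameters here are illustrative, not learned from data.

def classify(x, w, gamma):
    """Return +1 (class A+) if x'w > gamma, else -1 (class A-)."""
    score = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if score > gamma else -1

w = [1.0, -1.0]   # normal to the bounding planes (assumed values)
gamma = 0.5       # offset of the separating plane from the origin

print(classify([2.0, 0.0], w, gamma))   # point on the A+ side
print(classify([0.0, 2.0], w, gamma))   # point on the A- side
```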
- 9. SVM (Linearly Inseparable Case)
- 10. SVM (Inseparable)
  - If the classes are inseparable, the two planes bound the two classes with a "soft margin".
- 11. MSVM (1-Norm SVM)
  - A minimal support vector machine (MSVM).
  - To make use of a faster linear-programming-based approach, we reformulate (1) by replacing the 2-norm with a 1-norm as follows:
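The reformulated problem (7) was an image on the slide; a hedged LaTeX reconstruction, obtained from (1) by replacing ½‖w‖₂² with ‖w‖₁:

```latex
\min_{w,\gamma,y}\;\; \nu\, e'y + \|w\|_1
\quad \text{s.t.} \quad D(Aw - e\gamma) + y \ge e, \;\; y \ge 0
```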
- 12. MSVM
  - The mathematical program (7) is easily converted to a linear program as follows:
  - υ: bounds the absolute value |w| of w componentwise, υ_i ≥ |w_i|.
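The linear program (8) was also an image; a hedged reconstruction using the bounding vector υ the slide defines (replacing ‖w‖₁ by e′υ with −υ ≤ w ≤ υ):

```latex
\min_{w,\gamma,y,\upsilon}\;\; \nu\, e'y + e'\upsilon
\quad \text{s.t.} \quad D(Aw - e\gamma) + y \ge e, \;\;
-\upsilon \le w \le \upsilon, \;\; y \ge 0
```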
- 13. MSVM
  - If we define nonnegative multipliers u ∈ R^m associated with the first set of constraints of the linear program (8), and multipliers (r, s) ∈ R^(n+n) for the second set of constraints of (8), then the dual linear program associated with the linear SVM formulation (8) is the following:
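The dual program itself is lost with the slide image. Applying standard LP duality to the reconstruction of (8) above, with u for the first constraint block and (r, s) for the two-sided bounds on w, a sketch of its likely form is:

```latex
\max_{u,r,s}\;\; e'u
\quad \text{s.t.} \quad
A'Du = r - s, \;\; e'Du = 0, \;\; r + s = e, \;\;
0 \le u \le \nu e, \;\; r, s \ge 0
```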
- 14. MSVM
  - We modify the linear program to generate an SVM with as few support vectors as possible by adding an error term e′y*.
  - The term e′y* suppresses misclassified points and results in our minimal support vector machine (MSVM):
  - y*: the vector in R^m with components (y*)_i = 1 if y_i > 0 and 0 otherwise.
  - μ: a positive parameter, chosen by a tuning set.
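The MSVM program (9) was an image; a hedged reconstruction, i.e. the linear program (8) with the step-vector penalty μ e′y* added to the objective:

```latex
\min_{w,\gamma,y,\upsilon}\;\; \nu\, e'y + e'\upsilon + \mu\, e'y_*
\quad \text{s.t.} \quad D(Aw - e\gamma) + y \ge e, \;\;
-\upsilon \le w \le \upsilon, \;\; y \ge 0
```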
- 15. MSVM
  - We approximate e′y* by a smooth concave exponential on the nonnegative real line, as was done in an earlier feature-selection approach. For y ≥ 0, the step vector y* of (9) is approximated by the concave exponential (y*)_i ≈ 1 − ε^(−αy_i), i = 1, ..., m, for a positive parameter α, that is:
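The approximation can be checked numerically (the choice α = 5 below is an arbitrary illustrative value, not from the slides): for y ≥ 0, the concave exponential 1 − exp(−αy) equals the step function at y = 0 and approaches its value 1 as y grows.

```python
import math

def step(y):
    """Exact step function y*: 1 if y > 0, else 0."""
    return 1.0 if y > 0 else 0.0

def smooth_step(y, alpha=5.0):
    """Concave exponential approximation 1 - exp(-alpha * y) for y >= 0."""
    return 1.0 - math.exp(-alpha * y)

# Compare the step function and its smooth concave approximation.
for y in [0.0, 0.1, 0.5, 2.0]:
    print(y, step(y), round(smooth_step(y), 4))
```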
- 16. MSVM
  - The smooth MSVM (the program (9) with e′y* replaced by its concave exponential approximation):
- 17. MSVM (Successive Linearization Algorithm, SLA)
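The algorithm slide was an image; a hedged sketch of the successive linearization idea for the smooth MSVM above: since the concave term μ e′(e − ε^(−αy)) is differentiable, each iteration linearizes it at the current iterate y^k (gradient α ε^(−αy^k) componentwise) and solves the resulting linear program until the objective stops decreasing:

```latex
(w^{k+1},\gamma^{k+1},y^{k+1},\upsilon^{k+1}) \in
\arg\min_{w,\gamma,y,\upsilon}\;\;
\nu\, e'y + e'\upsilon + \mu\,\alpha\,(\varepsilon^{-\alpha y^k})'\,y
\quad \text{s.t.} \quad D(Aw - e\gamma) + y \ge e, \;\;
-\upsilon \le w \le \upsilon, \;\; y \ge 0
```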
- 18. Comparison
- 19. Observations of Comparisons
  - 1. For all test problems, MSVM had the fewest support vectors.
  - 2. For the Ionosphere problem, the reduction in the number of support vectors of MSVM over SVM| · |₁ is 81%, and the average reduction in the number of support vectors of MSVM over SVM| · | is 65.8%.
  - 3. Tenfold testing-set correctness of MSVM was good.
  - 4. Computing times were higher for MSVM than for the other classifiers.
- 20. Conclusion
  - We proposed a minimal support vector machine (MSVM).
  - Useful for classifying very large datasets using only a fraction of the data.
  - Improves generalization over other classifiers that use a higher number of data points.
  - MSVM requires the solution of a few linear programs to determine a separating surface.
