
- 1. SUPPORT VECTOR MACHINE
  By Parin Shah
- 2. SVM FOR LINEARLY SEPARABLE DATA
  Plot the points.
  Find the support vectors and the margin.
  Find the hyperplane with the maximum margin.
  Using the computed hyperplane and margin, classify new input data into the two categories.
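The classification step above can be sketched in a few lines. This is a minimal illustration, not a full SVM: it assumes the maximum-margin hyperplane (w, b) has already been found, and the hyperplane values here are hypothetical toy numbers.

```python
# A minimal sketch of the classification step, assuming the maximum-margin
# hyperplane (w, b) has already been learned for 2-D toy data.
def classify(w, b, x):
    """Assign a label by the sign of the decision function (w . x) + b."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return +1 if score >= 0 else -1

# Hypothetical hyperplane separating the two classes.
w, b = (1.0, 1.0), -3.0

print(classify(w, b, (3, 3)))   # point on the positive side -> +1
print(classify(w, b, (0, 0)))   # point on the negative side -> -1
```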
- 3. FIGURE REPRESENTING LINEARLY SEPARABLE DATA
  Figure representing the support vectors and the maximum-margin hyperplane.
  (w · x) + b = +1 (positive labels)
  (w · x) + b = -1 (negative labels)
  (w · x) + b = 0 (hyperplane)
  Margin: 2 / ||w||
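The margin is the distance between the two labeled planes (w · x) + b = ±1, which works out to 2 / ||w||. A quick numeric sketch with an illustrative weight vector:

```python
import math

# Margin of the hyperplane (w . x) + b = 0: the distance between the
# planes (w . x) + b = +1 and (w . x) + b = -1 is 2 / ||w||.
def margin(w):
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

print(margin([3.0, 4.0]))  # ||w|| = 5, so the margin is 0.4
```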
- 4. SVM FOR NON-LINEARLY SEPARABLE DATA
- 5. STEPS FOR NON-LINEARLY SEPARABLE DATA
  1.) Map the points into a feature space.
  2.) Use a feature map such as Φ(x) = (x, x^2) (the mapping underlying a polynomial kernel) to map the points.
  3.) Compute the positive, negative, and zero hyperplanes.
  4.) Obtain the support vectors and the margin value from them.
  5.) Classify new input values using the resulting margin and hyperplane.
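Step 2 can be shown on a tiny 1-D example. The data below are illustrative: the points {-2, 2} versus {0} cannot be split by a single threshold on the line, but after mapping with Φ(x) = (x, x²) the second coordinate separates them.

```python
# Sketch of step 2: lift 1-D points into 2-D feature space with
# phi(x) = (x, x^2). The classes {-2, 2} and {0} are not separable on
# the line, but the parabola height x^2 separates them after mapping.
def phi(x):
    return (x, x * x)

positives = [phi(x) for x in (-2.0, 2.0)]   # second coordinate is 4
negatives = [phi(x) for x in (0.0,)]        # second coordinate is 0

# The horizontal line x2 = 2 in feature space now separates the classes.
print(all(p[1] > 2 for p in positives) and all(n[1] < 2 for n in negatives))
```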
- 6. KERNEL AND ITS TYPES
  Computing coordinates in the feature space can be very costly, because the feature space can be infinite-dimensional.
  The kernel function reduces this cost: the data points appear only inside dot products, and the kernel function computes those inner products directly.
  With a kernel function we can compute the inner products of data points without explicitly mapping them into the feature space.
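The "kernel trick" described above can be verified numerically. As a sketch, for the quadratic feature map Φ(x) = (x₁², √2·x₁x₂, x₂²) the kernel K(x, y) = (x · y)² gives exactly the inner product of the mapped points, without ever computing the mapping:

```python
import math

# Kernel trick sketch: K(x, y) = (x . y)^2 equals the inner product of
# the explicitly mapped points phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2).
def phi(x):
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def kernel(x, y):
    return (x[0] * y[0] + x[1] * y[1]) ** 2

x, y = (1.0, 2.0), (3.0, 1.0)
explicit = sum(a * b for a, b in zip(phi(x), phi(y)))
print(abs(explicit - kernel(x, y)) < 1e-9)   # same value both ways
```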
- 7. KERNEL AND ITS TYPES
  1.) Polynomial kernel with degree d: K(x, y) = (x · y + 1)^d
  2.) Radial basis function kernel with width s: K(x, y) = exp(-||x - y||^2 / (2s^2))
  3.) Sigmoid with parameters k and q: K(x, y) = tanh(k (x · y) + q)
  4.) Linear kernel: K(x, y) = x' * y
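The four kernels listed above can be sketched directly on plain Python lists. The parameter names (d, s, k, q) follow the slide; the default values are illustrative, not prescribed by the source.

```python
import math

# Minimal sketches of the four kernel types above.
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def poly_kernel(x, y, d=2):
    return (dot(x, y) + 1) ** d          # polynomial with degree d

def rbf_kernel(x, y, s=1.0):
    sq = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-sq / (2 * s ** 2))  # radial basis function, width s

def sigmoid_kernel(x, y, k=1.0, q=0.0):
    return math.tanh(k * dot(x, y) + q)  # sigmoid with parameters k, q

def linear_kernel(x, y):
    return dot(x, y)                     # K(x, y) = x' * y

print(poly_kernel([1, 2], [3, 4]))       # (11 + 1)^2 = 144
print(linear_kernel([1, 2], [3, 4]))     # 11
```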
- 8. SPARSE MATRIX AND SPARSE DATA
  A simple data structure over a 2-dimensional array that stores only the non-zero values.
  Sparse data structures iterate over non-zero values only.
  They store the value, row number, and column number of each non-zero entry of the matrix.
  Inner products can skip zero entries entirely, since those terms contribute nothing.
  Using sparse data increases the speed of SVM algorithms.
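A minimal sketch of why this speeds things up: a dot product over two sparse vectors, stored as `{index: value}` dictionaries, only touches indices where both vectors are non-zero.

```python
# Sparse dot product: iterate over the shorter vector's non-zero entries
# and multiply only where the other vector is also non-zero.
def sparse_dot(u, v):
    if len(u) > len(v):
        u, v = v, u
    return sum(value * v[i] for i, value in u.items() if i in v)

u = {0: 2.0, 5: 1.0}     # mostly-zero vectors of arbitrary length
v = {5: 3.0, 9: 4.0}
print(sparse_dot(u, v))  # only index 5 overlaps: 1.0 * 3.0 = 3.0
```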
- 9. STORING SPARSE DATA
  Dictionary of keys (DOK)
  DOK represents the non-zero values as a dictionary mapping (row, column) tuples to values.
  List of lists (LIL)
  LIL stores one list per row, where each entry holds a column index and a value. Typically these entries are kept sorted by column index for faster lookup.
  Coordinate list (COO)
  COO stores a list of (row, column, value) tuples. The entries are sorted by row index, then column index, to improve random-access times.
  Yale format
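The DOK and COO layouts described above can be sketched in a few lines of Python, using the small matrix from the Yale example below as illustrative data:

```python
# DOK and COO sketches for a small example matrix.
matrix = [[1, 2, 0, 0],
          [0, 3, 9, 0],
          [0, 1, 4, 0]]

# Dictionary of keys: (row, column) -> value, non-zeros only.
dok = {(r, c): v
       for r, row in enumerate(matrix)
       for c, v in enumerate(row) if v != 0}

# Coordinate list: (row, column, value) tuples, sorted row-major.
coo = sorted((r, c, v) for (r, c), v in dok.items())

print(len(dok))   # 6 non-zero entries
print(coo[0])     # (0, 0, 1)
```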
- 10. STORING SPARSE DATA
  The Yale sparse matrix format stores an initial sparse m×n matrix M in three one-dimensional arrays.
  NNZ = number of non-zero entries of M.
  Array A: length NNZ; holds all non-zero entries in order, left to right within each row, top row to bottom row.
  Array IA: length m + 1. IA(i) contains the index in A of the first non-zero element of row i. Row i of the original matrix extends from A(IA(i)) to A(IA(i+1) - 1), i.e. from the start of one row to the last index before the start of the next.
  Array JA: length NNZ; the column index of each element of A.
  EXAMPLE:
  [ 1 2 0 0 ]
  [ 0 3 9 0 ]
  [ 0 1 4 0 ]
  Computing the arrays, we get:
  A = [ 1 2 3 9 1 4 ], IA = [ 0 2 4 6 ], JA = [ 0 1 1 2 1 2 ].
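A short sketch that builds the three Yale arrays for the example matrix above (the same row-by-row scan that CSR-style formats use) and reproduces A, IA, and JA:

```python
# Build the Yale (CSR-style) arrays A, IA, JA from a dense matrix.
def to_yale(matrix):
    A, IA, JA = [], [0], []
    for row in matrix:
        for col, value in enumerate(row):
            if value != 0:
                A.append(value)   # non-zeros, left-to-right, top-to-bottom
                JA.append(col)    # column index of each stored value
        IA.append(len(A))         # index in A just past the end of this row
    return A, IA, JA

M = [[1, 2, 0, 0],
     [0, 3, 9, 0],
     [0, 1, 4, 0]]

A, IA, JA = to_yale(M)
print(A)    # [1, 2, 3, 9, 1, 4]
print(IA)   # [0, 2, 4, 6]
print(JA)   # [0, 1, 1, 2, 1, 2]
```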
- 11. ADVANTAGES OF SVM
  Support Vector Machines are very effective in high-dimensional spaces.
  They remain effective even when the number of dimensions is greater than the number of samples.
  Memory-efficient, because only a subset of the training points (the support vectors) serves as the decisive factor for classification.
  Versatile: different kernels can be defined for different decision functions, as long as they give correct results, and we can define our own kernel to suit a given requirement.
- 12. DISADVANTAGES OF SVM
  If the number of features is much greater than the number of samples, the method is likely to give poor performance.
  SVMs do not directly provide probability estimates; these must be calculated using indirect techniques.
  Non-traditional data such as strings and trees can be used as input to SVM instead of feature vectors, but this requires designing a suitable kernel.
  An appropriate kernel must be selected for each project according to its requirements.
