- 1. Presented by Danushka Bollegala
- 2. Spectrum = the set of eigenvalues. By looking at the spectrum we can learn about the graph itself! Spectral clustering is a way of normalizing the data into a canonical form and then performing clustering (e.g., via k-means) on this normalized/reduced space. Input: a similarity matrix. Output: a set of non-overlapping (hard) clusters.
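A minimal sketch of the pipeline this slide describes, assuming NumPy and scikit-learn are available. The symmetric Laplacian with row normalization used here is the Ng-Jordan-Weiss variant, not necessarily what the deck later recommends; all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """Hard-cluster the n nodes of a symmetric, non-negative
    affinity matrix W (n x n) into k non-overlapping clusters."""
    d = W.sum(axis=1)                                  # degrees (assumed > 0)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt
    # The "normalized/reduced space": eigenvectors of the k smallest
    # eigenvalues (np.linalg.eigh returns them in ascending order).
    _, eigvecs = np.linalg.eigh(L_sym)
    U = eigvecs[:, :k]
    U /= np.linalg.norm(U, axis=1, keepdims=True)      # row-normalize
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```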
- 3. Undirected graph G(V, E). V: set of vertices (nodes in the network). E: set of edges (links in the network). ▪ The weight w_ij of the edge connecting vertices i and j is represented by the affinity matrix W. Degree: the degree d_i of a vertex is the sum of the weights of the edges incident on it, d_i = Σ_j w_ij. Two ways of measuring the size of a subset A of V: |A|, the number of its vertices, and vol(A) = Σ_{i∈A} d_i, the sum of their degrees.
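A toy example of these definitions; the matrix and the subset are made up for illustration.

```python
import numpy as np

# Affinity matrix of a small undirected graph (symmetric, w_ij = w_ji).
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 2.],
              [0., 0., 2., 0.]])

d = W.sum(axis=1)        # degree of each vertex: d_i = sum_j w_ij
A = [0, 1]               # a subset A of V, as vertex indices
size_A = len(A)          # |A|: the number of vertices in A
vol_A = d[A].sum()       # vol(A): the sum of degrees over A
print(d, size_A, vol_A)  # [2. 2. 4. 2.] 2 4.0
```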
- 4. How to create the affinity matrix W from the similarity matrix S? ε-neighborhood graph ▪ Connect all pairs of vertices whose similarity exceeds ε. k-nearest neighbor graph ▪ Connect the k-nearest neighbors of each vertex. ▪ Use mutual k-nearest neighbor graphs for an asymmetric S. Fully connected graph ▪ Use the Gaussian similarity function (kernel).
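Sketches of the three constructions, assuming a data matrix X (one point per row) or a precomputed similarity matrix S; the parameter values and helper names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_affinity(X, sigma):
    """Fully connected graph via the Gaussian similarity function:
    w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = cdist(X, X, "sqeuclidean")
    return np.exp(-sq_dists / (2.0 * sigma**2))

def epsilon_graph(S, eps):
    """epsilon-neighborhood graph: keep only edges with similarity > eps."""
    return np.where(S > eps, S, 0.0)

def mutual_knn_graph(S, k):
    """Mutual k-NN graph: connect i and j only if each is among the
    other's k most similar vertices (symmetrizes an asymmetric S).
    Assumes the diagonal of S holds the maximal self-similarity."""
    n = len(S)
    order = np.argsort(-S, axis=1)[:, 1:k + 1]   # k nearest, skipping self
    mask = np.zeros_like(S, dtype=bool)
    rows = np.arange(n)[:, None]
    mask[rows, order] = True
    mask &= mask.T                               # keep only mutual edges
    return np.where(mask, S, 0.0)
```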
- 5. Unnormalized graph Laplacian: L = D - W. D: the degree matrix, a diagonal matrix diag(d_1,...,d_n). Properties: for every vector f ∈ R^n, f^T L f = (1/2) Σ_{i,j} w_ij (f_i - f_j)^2. L is symmetric and positive semi-definite. The smallest eigenvalue of L is zero and the corresponding eigenvector is the constant vector 1 = (1,...,1)^T. L has n non-negative, real-valued eigenvalues.
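A numerical sanity check of these properties (a sketch; any symmetric, non-negative W with positive degrees will do, e.g. the toy matrix above):

```python
import numpy as np

def laplacian_checks(W):
    """Verify the listed Laplacian properties numerically."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                        # L = D - W
    f = np.random.randn(len(d))
    # f^T L f = (1/2) sum_ij w_ij (f_i - f_j)^2 >= 0, hence L is PSD.
    quad = f @ L @ f
    pairwise = 0.5 * np.sum(W * (f[:, None] - f[None, :]) ** 2)
    assert np.isclose(quad, pairwise)
    eigvals = np.linalg.eigvalsh(L)           # n real eigenvalues, ascending
    assert np.isclose(eigvals[0], 0.0)        # smallest eigenvalue is 0
    assert np.all(eigvals > -1e-10)           # all non-negative
    assert np.allclose(L @ np.ones(len(d)), 0.0)   # L 1 = 0
    return eigvals
```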
- 6. Normalized graph Laplacians: two versions exist. The symmetric version L_sym = D^{-1/2} L D^{-1/2} = I - D^{-1/2} W D^{-1/2}, and the random-walk version L_rw = D^{-1} L = I - D^{-1} W.
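Both versions computed directly from the definitions above (a sketch, assuming all degrees are positive):

```python
import numpy as np

def normalized_laplacians(W):
    """The symmetric and random-walk normalized Laplacians of W."""
    d = W.sum(axis=1)
    L = np.diag(d) - W
    D_inv_sqrt = np.diag(d ** -0.5)
    L_sym = D_inv_sqrt @ L @ D_inv_sqrt   # = I - D^{-1/2} W D^{-1/2}
    L_rw = np.diag(1.0 / d) @ L           # = I - D^{-1} W
    return L_sym, L_rw
```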
- 7. The partition (A_1,...,A_k) induces a cut on the graph. Two types of normalized graph cuts exist: RatioCut and Ncut. Spectral clustering solves a relaxed version of the mincut problem (therefore it is an approximation).
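The slide does not write the cuts out; in the standard formulation this material follows (von Luxburg's spectral clustering tutorial), RatioCut normalizes each term by the number of vertices |A_i| and Ncut by the volume vol(A_i):

```latex
W(A,B) = \sum_{i \in A,\ j \in B} w_{ij}, \qquad
\mathrm{cut}(A_1,\dots,A_k) = \tfrac{1}{2} \sum_{i=1}^{k} W(A_i, \overline{A_i})

\mathrm{RatioCut}(A_1,\dots,A_k) = \tfrac{1}{2} \sum_{i=1}^{k} \frac{W(A_i, \overline{A_i})}{|A_i|},
\qquad
\mathrm{Ncut}(A_1,\dots,A_k) = \tfrac{1}{2} \sum_{i=1}^{k} \frac{W(A_i, \overline{A_i})}{\mathrm{vol}(A_i)}
```

Relaxing RatioCut leads to the unnormalized Laplacian L; relaxing Ncut leads to the normalized Laplacians of the previous slide.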
- 8. By the Rayleigh-Ritz theorem it follows that the minimum of the relaxed problem is the second-smallest eigenvalue, attained by the corresponding eigenvector.
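For the two-cluster RatioCut relaxation this is the standard variational characterization, stated in the notation above:

```latex
\min_{f \perp \mathbf{1},\ f \neq 0} \frac{f^{T} L f}{f^{T} f} = \lambda_2
```

The minimizer is the eigenvector of L for its second-smallest eigenvalue λ_2 (the Fiedler vector); the constant vector 1, which achieves eigenvalue 0, is excluded by the orthogonality constraint.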
- 9. The transition probability matrix of the random walk on the graph and the Laplacian are related! P = D^{-1} W and L_rw = I - P.
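A one-assertion check of the identity (a sketch; W as in the earlier examples):

```python
import numpy as np

def check_random_walk_relation(W):
    d = W.sum(axis=1)
    D_inv = np.diag(1.0 / d)
    P = D_inv @ W                        # transition matrix: rows sum to 1
    L_rw = D_inv @ (np.diag(d) - W)      # L_rw = D^{-1} L
    assert np.allclose(P.sum(axis=1), 1.0)
    assert np.allclose(L_rw, np.eye(len(d)) - P)   # L_rw = I - P
```

One consequence: (λ, v) is an eigenpair of L_rw exactly when (1 - λ, v) is an eigenpair of P, so the smallest eigenvalues of L_rw correspond to the slowest-mixing modes of the random walk.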
- 10. Practical recommendations: L_rw-based spectral clustering (Shi & Malik, 2000) works better (especially when the degree distribution is uneven). Use k-nearest neighbor graphs, with k = log(n) as a rule of thumb. How to set the number of clusters: use the eigengap heuristic. If using the Gaussian kernel, how to set σ: use the mean distance of a point to its log(n)+1 nearest neighbors.
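Sketches of the two heuristics; the function names are made up, and the σ rule is one reading of "mean distance of a point to its log(n)+1 nearest neighbors" (averaging over those neighbors rather than taking only the (log(n)+1)-th one).

```python
import numpy as np
from scipy.spatial.distance import cdist

def eigengap_k(L, k_max=10):
    """Eigengap heuristic: pick the k where the gap between consecutive
    (ascending) eigenvalues lambda_{k+1} - lambda_k is largest."""
    eigvals = np.linalg.eigvalsh(L)[:k_max + 1]
    gaps = np.diff(eigvals)
    return int(np.argmax(gaps)) + 1     # +1: gaps[0] is lambda_2 - lambda_1

def gaussian_sigma(X):
    """sigma = mean distance of a point to its log(n)+1 nearest
    neighbors, averaged over all points."""
    n = len(X)
    m = int(np.log(n)) + 1
    D = cdist(X, X)
    D.sort(axis=1)                      # row-wise ascending; column 0 is self
    return D[:, 1:m + 1].mean()
```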
- 11. Eckart-Young Theorem: the best rank-r approximation B of a matrix A, with rank(B) = r < rank(A), is given by B = U S V*, where A = U Z V* is the singular value decomposition of A and S equals Z except that the (r+1)-th and higher singular values are set to zero. The approximation minimizes the Frobenius norm: min_B ||A - B||_F subject to rank(B) = r.
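A direct transcription of the theorem using NumPy's SVD (a sketch):

```python
import numpy as np

def best_rank_r(A, r):
    """Best rank-r approximation of A in Frobenius norm (Eckart-Young)."""
    U, z, Vt = np.linalg.svd(A, full_matrices=False)   # A = U Z V*
    s = z.copy()
    s[r:] = 0.0                # zero the (r+1)-th and higher singular values
    return (U * s) @ Vt        # B = U S V*
```

The optimal error is the root-sum-square of the discarded singular values, ||A - B||_F = sqrt(σ_{r+1}^2 + ... + σ_n^2).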
