Spectral graph theory

  1. Presented by Danushka Bollegala
  2.  Spectrum = the set of eigenvalues
 By looking at the spectrum we can learn about the graph itself!
 A way of normalizing data (canonical form) and then performing clustering (e.g. via k-means) on this normalized/reduced space.
 Input: a similarity matrix
 Output: a set of (non-overlapping/hard) clusters
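A minimal sketch of this input/output contract, assuming scikit-learn is available; the toy similarity matrix S and all variable names are illustrative, not from the slides:

```python
# A minimal sketch: spectral clustering on a precomputed similarity matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy similarity matrix for 6 points forming two obvious groups (illustrative).
S = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.0, 0.1],
    [0.9, 1.0, 0.9, 0.0, 0.1, 0.0],
    [0.8, 0.9, 1.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 1.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 1.0, 0.9],
    [0.1, 0.0, 0.1, 0.8, 0.9, 1.0],
])

# affinity='precomputed' tells scikit-learn to treat S as the affinity matrix.
labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(S)
print(labels)  # e.g. [0 0 0 1 1 1] (hard, non-overlapping cluster labels)
```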
  3.  Undirected graph G(V, E)
 V: set of vertices (nodes in the network)
 E: set of edges (links in the network)
▪ Weight wij is the weight of the edge connecting vertices i and j (collected in the affinity matrix)
 Degree: the sum of the weights of the edges incident on a vertex, di = Σj wij
 Measuring the size of a subset A of V:
▪ |A| = the number of vertices in A
▪ vol(A) = Σi∈A di, the sum of the degrees of the vertices in A
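These quantities are simple to compute; a short numpy sketch with an illustrative 3-vertex affinity matrix:

```python
import numpy as np

# Symmetric affinity matrix W of an undirected weighted graph (w_ij >= 0).
W = np.array([
    [0.0, 0.5, 0.2],
    [0.5, 0.0, 0.7],
    [0.2, 0.7, 0.0],
])

d = W.sum(axis=1)          # degrees: d_i = sum_j w_ij
D = np.diag(d)             # degree matrix diag(d_1, ..., d_n)

A = [0, 1]                 # a vertex subset (by index)
size_A = len(A)            # |A|: number of vertices in A
vol_A = d[A].sum()         # vol(A): sum of the degrees of the vertices in A
print(size_A, vol_A)
```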
  4.  How to create the affinity matrix W from the similarity matrix S?
 ε-neighborhood graph
▪ Connect all vertices that have similarity greater than ε
 k-nearest neighbor graph
▪ Connect the k nearest neighbors of each vertex.
▪ Use mutual k-nearest neighbor graphs for an asymmetric S.
 Fully connected graph
▪ Use the Gaussian similarity function (kernel), wij = exp(-||xi - xj||^2 / (2σ^2))
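Illustrative helper functions (not from the slides) for two of these constructions, assuming raw data points X for the Gaussian-kernel graph and a similarity matrix S for the k-NN graph:

```python
import numpy as np

def gaussian_affinity(X, sigma=1.0):
    """Fully connected graph: w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)   # no self-loops
    return W

def knn_affinity(S, k):
    """k-nearest-neighbor graph built from a similarity matrix S."""
    n = S.shape[0]
    W = np.zeros_like(S, dtype=float)
    for i in range(n):
        # indices of the k most similar vertices to i (excluding i itself)
        nbrs = np.argsort(S[i])[::-1]
        nbrs = nbrs[nbrs != i][:k]
        W[i, nbrs] = S[i, nbrs]
    # "either" symmetrization; use np.minimum(W, W.T) for a mutual k-NN graph
    return np.maximum(W, W.T)
```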
  5.  L = D - W
 D: degree matrix, the diagonal matrix diag(d1,...,dn)
 Properties
 For every vector f ∈ R^n: f^T L f = (1/2) Σi,j wij (fi - fj)^2
 L is symmetric and positive semi-definite
 The smallest eigenvalue of L is zero, and the corresponding eigenvector is 1 = (1,...,1)^T
 L has n non-negative, real-valued eigenvalues 0 = λ1 ≤ λ2 ≤ ... ≤ λn
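A numerical check of these properties on a small illustrative graph:

```python
import numpy as np

W = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.7],
              [0.2, 0.7, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W                                  # unnormalized graph Laplacian

eigvals = np.linalg.eigvalsh(L)            # real eigenvalues, ascending order
print(np.all(eigvals >= -1e-12))           # non-negative (PSD), up to round-off
print(np.allclose(L @ np.ones(3), 0.0))    # 1 = (1,...,1)^T has eigenvalue 0

# Quadratic-form identity: f^T L f = (1/2) * sum_ij w_ij (f_i - f_j)^2
f = np.array([1.0, -2.0, 0.5])
rhs = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                for i in range(3) for j in range(3))
print(np.allclose(f @ L @ f, rhs))
```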
  6.  Two versions exist
 Lsym = D^(-1/2) L D^(-1/2) = I - D^(-1/2) W D^(-1/2)
 Lrw = D^(-1) L = I - D^(-1) W
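Both normalized Laplacians in numpy, assuming every degree di is strictly positive (otherwise D^(-1/2) is undefined):

```python
import numpy as np

W = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.7],
              [0.2, 0.7, 0.0]])
d = W.sum(axis=1)
D = np.diag(d)
L = D - W

D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^(-1/2); requires all d_i > 0
L_sym = D_inv_sqrt @ L @ D_inv_sqrt      # symmetric normalized Laplacian
L_rw = np.diag(1.0 / d) @ L              # random-walk normalized Laplacian

# Verify the two equivalent forms on the slide.
print(np.allclose(L_sym, np.eye(3) - D_inv_sqrt @ W @ D_inv_sqrt))
print(np.allclose(L_rw, np.eye(3) - np.diag(1.0 / d) @ W))
```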
  7.  The partition (A1,...,Ak) induces a cut on the graph
 Two types of graph cuts exist: RatioCut, which normalizes each cut by the cluster size |Ai|, and Ncut, which normalizes by the cluster volume vol(Ai)
 Spectral clustering solves a relaxed version of the mincut problem (therefore it is an approximation)
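A sketch of the two cut objectives, following one common convention that halves the sum to avoid double-counting crossing edges; the function names are illustrative:

```python
import numpy as np

def cut_value(W, A):
    """W(A, A-bar): total weight of edges leaving the vertex subset A."""
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[list(A)] = True
    return W[mask][:, ~mask].sum()

def ratio_cut(W, partition):
    """RatioCut: each term normalized by the cluster size |A_i|."""
    return 0.5 * sum(cut_value(W, A) / len(A) for A in partition)

def ncut(W, partition):
    """Ncut: each term normalized by the cluster volume vol(A_i)."""
    d = W.sum(axis=1)
    return 0.5 * sum(cut_value(W, A) / d[list(A)].sum() for A in partition)

W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
print(ratio_cut(W, [[0, 1], [2]]), ncut(W, [[0, 1], [2]]))
```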
  8. By the Rayleigh-Ritz theorem it follows that the relaxed cut objective is minimized by the second-smallest eigenvalue of the Laplacian, i.e. its eigenvector is the solution of the relaxed problem.
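A small demonstration of this fact: for two triangles joined by one weak edge (an illustrative graph), the eigenvector of the second-smallest eigenvalue (the Fiedler vector) recovers the natural 2-way cut by its sign pattern:

```python
import numpy as np

# Two triangles joined by one weak edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                  # the weak bridge

L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)     # eigenvalues in ascending order

fiedler = eigvecs[:, 1]                  # eigenvector of the 2nd-smallest eigenvalue
labels = (fiedler > 0).astype(int)       # sign pattern gives the 2-way cut
print(labels)                            # e.g. [0 0 0 1 1 1] (up to relabeling)
```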
  9.  Transition probability matrix and Laplacian are related!
 P = D^(-1) W
 Lrw = I - P
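A quick numerical confirmation of the relation, including the eigenvalue correspondence it implies (if P u = μ u, then Lrw u = (1 - μ) u):

```python
import numpy as np

W = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.7],
              [0.2, 0.7, 0.0]])
d = W.sum(axis=1)

P = np.diag(1.0 / d) @ W                 # random-walk transition matrix
L_rw = np.eye(3) - P                     # L_rw = I - P
print(np.allclose(P.sum(axis=1), 1.0))   # each row of P sums to 1

# Eigen-correspondence: P @ U = U @ diag(mu) implies L_rw @ U = U @ diag(1 - mu)
mu, U = np.linalg.eig(P)
print(np.allclose(L_rw @ U, U @ np.diag(1.0 - mu)))
```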
  10.  Lrw-based spectral clustering (Shi & Malik, 2000) is better (especially when the degree distribution is uneven).
 Use k-nearest neighbor graphs
 How to set the number of clusters k:
▪ k = log(n)
▪ Use the eigengap heuristic
 If using the Gaussian kernel, how to set σ:
▪ Use the mean distance of a point to its log(n)+1 nearest neighbors.
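Sketches of the two heuristics; the σ rule is read here as averaging the per-point mean neighbor distance over all points, which is one possible interpretation since the slide does not specify:

```python
import numpy as np

def eigengap_k(L, k_max=10):
    """Eigengap heuristic: choose k where the gap between consecutive
    (ascending) eigenvalues of the Laplacian L is largest."""
    eigvals = np.linalg.eigvalsh(L)[:k_max]
    gaps = np.diff(eigvals)
    return int(np.argmax(gaps)) + 1      # the gap sits after the k-th eigenvalue

def mean_knn_sigma(X):
    """sigma as the mean distance of a point to its log(n)+1 nearest
    neighbors, averaged over all points (one reading of the heuristic)."""
    n = X.shape[0]
    k = int(np.log(n)) + 1
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    dists.sort(axis=1)                   # column 0 is the zero self-distance
    return dists[:, 1:k + 1].mean()
```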
  11.  Eckart-Young theorem
 The best rank-r approximation B of a matrix A, with rank(B) = r < rank(A), is given by
▪ B = U S V*, where A = U Z V* is the singular value decomposition of A and S equals Z except that the (r+1)-th and subsequent singular values are set to zero.
 The approximation minimizes the Frobenius norm
▪ min_B ||A - B||_F, subject to rank(B) = r
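A direct numpy rendering of the theorem's construction, truncating the singular values after the r-th; the test matrix is illustrative:

```python
import numpy as np

def best_rank_r(A, r):
    """Best rank-r approximation in Frobenius norm (Eckart-Young):
    keep the r largest singular values, zero out the rest."""
    U, z, Vt = np.linalg.svd(A, full_matrices=False)  # A = U diag(z) V*
    z_trunc = z.copy()
    z_trunc[r:] = 0.0                    # drop singular values r+1, ..., n
    return (U * z_trunc) @ Vt            # same as U @ np.diag(z_trunc) @ Vt

A = np.random.RandomState(0).randn(5, 4)
B = best_rank_r(A, 2)
print(np.linalg.matrix_rank(B))          # 2
print(np.linalg.norm(A - B, 'fro'))      # minimal over all rank-2 matrices
```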
