    Spectral Graph Theory: Presentation Transcript

    • An Introduction to Spectral Graph Theory
      Presented by Danushka Bollegala
    • Spectral Graph Theory
      Spectrum = the set of eigenvalues
      By looking at the spectrum we can know about the graph itself!
      A way of normalizing data into a canonical form so that clustering (e.g. via k-means) can then be performed in the normalized/reduced space.
      Input: A similarity matrix
      Output: A set of (non-overlapping/hard) clusters.
    • Definitions
      Undirected Graph G(V, E)
      V: set of vertices (nodes in the network)
      E: set of edges (links in the network)
      Weight wij is the weight of the edge connecting vertices i and j (represented by the affinity matrix W).
      Degree di: the sum of the weights of the edges incident to vertex i.
      Measuring the size of a subset A of V: either |A| (the number of vertices in A) or vol(A), the sum of the degrees of the vertices in A.
    • How to create W?
      How to create the affinity matrix W from the similarity matrix S?
      ε-neighborhood graph
      Connect all pairs of vertices whose similarity exceeds ε
      k-nearest neighbor graph
      Connect the k-nearest neighbors of each vertex.
      Mutual k-nearest neighbor graphs for asymmetric S.
      Fully connected graph
      Use the Gaussian similarity function (kernel)
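As a concrete sketch of the fully connected construction, the Gaussian similarity function can be applied to all pairs of points (the sample points X and the bandwidth sigma below are hypothetical, chosen only for illustration):

```python
import numpy as np

def gaussian_affinity(X, sigma=1.0):
    """W[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
    # Pairwise squared Euclidean distances via broadcasting.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)   # no self-loops
    return W

# Two nearby points and one far-away point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
W = gaussian_affinity(X, sigma=1.0)
# W is symmetric; the nearby pair gets weight close to 1,
# pairs involving the distant point get weight close to 0.
```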
    • Unnormalized Graph Laplacian
      L = D – W
      D: degree matrix. A diagonal matrix diag(d1,...,dn)
      Properties
      For every vector f in R^n: f^T L f = (1/2) Σ_{i,j} w_ij (f_i - f_j)^2
      L is symmetric and positive semi-definite
      The smallest eigenvalue of L is zero and the corresponding eigenvector is 1 = (1,...,1)^T
      L has n non-negative, real-valued eigenvalues
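The listed properties can be checked numerically on a toy affinity matrix (the matrix W below is an illustrative example, not taken from the slides):

```python
import numpy as np

# Unnormalized graph Laplacian L = D - W on a small weighted graph.
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])
D = np.diag(W.sum(axis=1))       # degree matrix diag(d1, ..., dn)
L = D - W

eigvals = np.linalg.eigvalsh(L)  # real eigenvalues of the symmetric L
# Smallest eigenvalue is 0, with eigenvector 1 = (1, ..., 1)^T:
assert np.isclose(eigvals[0], 0.0)
assert np.allclose(L @ np.ones(3), 0.0)
# All eigenvalues are non-negative (L is positive semi-definite):
assert np.all(eigvals >= -1e-12)

# Quadratic form identity: f^T L f = (1/2) * sum_ij w_ij (f_i - f_j)^2
f = np.array([1.0, -2.0, 0.5])
quad = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                 for i in range(3) for j in range(3))
assert np.isclose(f @ L @ f, quad)
```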
    • Normalized Graph Laplacians
      Two versions exist
      Lsym = D^(-1/2) L D^(-1/2) = I - D^(-1/2) W D^(-1/2)
      Lrw = D^(-1) L = I - D^(-1) W
    • Spectral Clustering (L)
    • Spectral Clustering (Lrw)
    • Spectral Clustering (Lsym)
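The three slides above only name the algorithm variants. As an illustrative sketch (not the slides' own code), here is normalized spectral clustering in the style of Shi & Malik (2000) for k = 2, where the two-way partition is read off the sign of the second eigenvector of Lrw; the example graph is hypothetical, with two tightly connected groups joined by one weak edge:

```python
import numpy as np

# Example affinity matrix: vertices {0,1,2} and {3,4,5} form two
# near-disconnected triangles joined by a weak 0.01 edge.
W = np.array([[0.00, 1.00, 1.00, 0.01, 0.00, 0.00],
              [1.00, 0.00, 1.00, 0.00, 0.00, 0.00],
              [1.00, 1.00, 0.00, 0.00, 0.00, 0.00],
              [0.01, 0.00, 0.00, 0.00, 1.00, 1.00],
              [0.00, 0.00, 0.00, 1.00, 0.00, 1.00],
              [0.00, 0.00, 0.00, 1.00, 1.00, 0.00]])
d = W.sum(axis=1)
L = np.diag(d) - W

# Solve the generalized problem L u = lambda D u through the symmetric
# matrix Lsym = D^(-1/2) L D^(-1/2); eigenvectors map back via
# u = D^(-1/2) v, giving the eigenvectors of Lrw.
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = D_inv_sqrt @ L @ D_inv_sqrt
vals, vecs = np.linalg.eigh(L_sym)
u2 = D_inv_sqrt @ vecs[:, 1]     # eigenvector of the 2nd-smallest eigenvalue
labels = (u2 > 0).astype(int)    # its sign gives the 2-way partition
# Vertices {0,1,2} and {3,4,5} land in different clusters.
```

For k > 2 one would instead stack the first k eigenvectors as columns and run k-means on the rows, as in the general algorithm.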
    • Graph cut point of view
      The partition (A1,...,Ak) induces a cut on the graph
      Two types of graph cuts exist
      Spectral clustering solves a relaxed version of the mincut problem (therefore it is an approximation)
    • Example: RatioCut, k=2
      By the Rayleigh-Ritz theorem, the minimum of the relaxed problem is attained by the eigenvector corresponding to the second-smallest eigenvalue of L.
    • Relaxation can be problematic!
    • Random walks point of view
      Transition probability matrix and Laplacian are related!
      P = D^(-1) W
      Lrw = I - P
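The relation between the transition matrix and the Laplacian can be verified numerically; the small weighted graph below is a hypothetical example:

```python
import numpy as np

# Random-walk transition matrix P = D^(-1) W and the identity
# Lrw = I - P, checked on a small weighted graph.
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
D_inv = np.diag(1.0 / W.sum(axis=1))
P = D_inv @ W                       # each row of P sums to 1
L_rw = np.eye(3) - P

# The spectra transfer directly: lambda is an eigenvalue of Lrw
# exactly when 1 - lambda is an eigenvalue of P.
p_vals = np.sort(np.linalg.eigvals(P).real)
l_vals = np.sort(np.linalg.eigvals(L_rw).real)
assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(np.sort(1.0 - p_vals), l_vals)
```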
    • Recommendations
      Spectral clustering based on Lrw (Shi & Malik, 2000) works better, especially when the degree distribution is uneven.
      Use k-nearest neighbor graphs
      How to set the number of clusters:
      k=log(n)
      Use the eigengap heuristic
      If using the Gaussian kernel, how to set σ:
      Use the mean distance of a point to its log(n)+1 nearest neighbors.
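The eigengap heuristic mentioned above can be sketched as follows: choose k so that the gap between the k-th and (k+1)-th smallest Laplacian eigenvalues is largest (the helper name and the toy graph are illustrative, not from the slides):

```python
import numpy as np

def eigengap_k(L, k_max=10):
    """Pick k at the largest gap among the smallest Laplacian eigenvalues."""
    vals = np.sort(np.linalg.eigvalsh(L))[:k_max]
    gaps = np.diff(vals)             # gaps between consecutive eigenvalues
    return int(np.argmax(gaps)) + 1  # k = position of the largest gap

# Two disconnected triangles: eigenvalues are 0, 0, 3, 3, 3, 3, so the
# largest gap sits after the two zero eigenvalues, suggesting k = 2.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
assert eigengap_k(L) == 2
```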
    • Matrix Approximation using SVD
      Eckart-Young Theorem
      The best rank-r approximation B of a matrix A, with rank(B) = r < rank(A), is given by
      B = U Σr V*, where A = U Σ V* is the SVD of A and Σr equals Σ with every singular value after the r-th set to zero.
      Approximation is done by minimizing the Frobenius norm
      min_B ||A - B||_F, subject to rank(B) = r
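The truncated-SVD construction can be sketched directly in numpy (the helper name low_rank_approx and the test matrix are illustrative):

```python
import numpy as np

def low_rank_approx(A, r):
    """Best rank-r approximation of A in Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_trunc = s.copy()
    s_trunc[r:] = 0.0                # keep only the r largest singular values
    return (U * s_trunc) @ Vt        # reassemble U @ diag(s_trunc) @ Vt

A = np.random.default_rng(0).standard_normal((10, 8))
B = low_rank_approx(A, 3)
assert np.linalg.matrix_rank(B) == 3
# The minimum Frobenius error equals the root of the sum of the
# squared discarded singular values.
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A - B, 'fro'), np.sqrt(np.sum(s[3:] ** 2)))
```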
    • Approximation Error
      1. Create a 100 x 1000 random matrix A.
      2. B = approx(A, r)
      3. RMSE = ||A - B||_F
      4. Plot RMSE against r.
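The experiment above can be sketched as follows; plotting is omitted here, but the computed errors could be passed to matplotlib (the seed and the chosen values of r are illustrative):

```python
import numpy as np

# Rank-r approximation error for a random 100 x 1000 matrix,
# measured in the Frobenius norm, for a few values of r.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 1000))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

errors = []
for r in [1, 10, 50, 100]:
    B = (U[:, :r] * s[:r]) @ Vt[:r, :]    # rank-r truncation
    errors.append(np.linalg.norm(A - B, 'fro'))

# The error decreases monotonically in r and reaches 0 at full rank.
assert all(e1 >= e2 for e1, e2 in zip(errors, errors[1:]))
assert np.isclose(errors[-1], 0.0, atol=1e-8)
```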