Part 2. Spectral Clustering from a Matrix Perspective


A brief tutorial emphasizing recent developments
(A more detailed tutorial was given at ICML'04)



From PCA to spectral clustering using generalized eigenvectors
Consider the kernel matrix:   $W_{ij} = \langle \phi(x_i), \phi(x_j) \rangle$

In kernel PCA we compute the eigenvectors:   $W v = \lambda v$

Generalized eigenvector:   $W q = \lambda D q$,
where $D = \mathrm{diag}(d_1, \ldots, d_n)$ and $d_i = \sum_j w_{ij}$.

This leads to Spectral Clustering!
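A minimal numerical sketch of this step, assuming an RBF kernel for W and using scipy's generalized symmetric eigensolver (the data X and the bandwidth sigma are placeholders, not part of the original slides):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def generalized_kernel_eigenvectors(X, sigma=1.0):
    """Solve W q = lambda D q for an RBF kernel matrix W (a sketch)."""
    # Kernel (similarity) matrix W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    W = np.exp(-cdist(X, X, 'sqeuclidean') / (2.0 * sigma ** 2))
    # Degree matrix D = diag(d_1, ..., d_n), d_i = sum_j w_ij
    D = np.diag(W.sum(axis=1))
    # Generalized symmetric eigenproblem W q = lambda D q
    # eigh returns eigenvalues in ascending order; reverse so the largest come first.
    evals, evecs = eigh(W, D)
    return evals[::-1], evecs[:, ::-1]
```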
Indicator Matrix Quadratic Clustering Framework

Unsigned cluster indicator matrix: $H = (h_1, \ldots, h_K)$

Kernel K-means clustering:
    $\max_H \; \mathrm{Tr}(H^T W H), \quad \text{s.t. } H^T H = I,\; H \ge 0$

K-means: $W = X^T X$;   Kernel K-means: $W = (\langle \phi(x_i), \phi(x_j) \rangle)$

Spectral clustering (normalized cut):
    $\max_H \; \mathrm{Tr}(H^T W H), \quad \text{s.t. } H^T D H = I,\; H \ge 0$
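For a hard clustering the trace objective above can be evaluated directly from a label vector; a small illustration (a sketch, with columns of H scaled by $1/\sqrt{n_k}$ so that $H^T H = I$):

```python
import numpy as np

def trace_objective(W, labels):
    """Tr(H^T W H) for a hard partition, with H^T H = I (columns scaled by 1/sqrt(n_k))."""
    n = len(labels)
    clusters = np.unique(labels)
    H = np.zeros((n, len(clusters)))
    for col, c in enumerate(clusters):
        idx = (labels == c)
        H[idx, col] = 1.0 / np.sqrt(idx.sum())   # unsigned, normalized indicator
    return np.trace(H.T @ W @ H)
```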

Brief Introduction to Spectral Clustering (Laplacian-matrix-based clustering)




Some historical notes
• Fiedler, 1973, 1975: graph Laplacian matrix
• Donath & Hoffman, 1973: bounds
• Hall, 1970: quadratic placement (embedding)
• Pothen, Simon & Liou, 1990: spectral graph partitioning (many related papers thereafter)
• Hagen & Kahng, 1992: Ratio-cut
• Chan, Schlag & Zien: multi-way Ratio-cut
• Chung, 1997: spectral graph theory book
• Shi & Malik, 2000: Normalized Cut
Spectral Gold-Rush of 2001
                                  9 papers on spectral clustering

• Meila & Shi, AI-STAT 2001. Random-walk interpretation of Normalized Cut
• Ding, He & Zha, KDD 2001. Perturbation analysis of the Laplacian matrix on sparsely connected graphs
• Ng, Jordan & Weiss, NIPS 2001. K-means algorithm on the embedded eigenspace
• Belkin & Niyogi, NIPS 2001. Spectral embedding
• Dhillon, KDD 2001. Bipartite graph clustering
• Zha et al., CIKM 2001. Bipartite graph clustering
• Zha et al., NIPS 2001. Spectral relaxation of K-means
• Ding et al., ICDM 2001. MinMaxCut, uniqueness of the relaxation
• Gu et al. K-way relaxation of NormCut and MinMaxCut
Spectral Clustering
Minimize the cut size, without explicit size constraints.
But where to cut?




            Need to balance sizes
Graph Clustering
min between-cluster similarities (weights):   $\mathrm{sim}(A,B) = \sum_{i \in A} \sum_{j \in B} w_{ij}$

max within-cluster similarities (weights):    $\mathrm{sim}(A,A) = \sum_{i \in A} \sum_{j \in A} w_{ij}$

Balance weight
Balance size
Balance volume
Clustering Objective Functions
Between-cluster similarity:   $s(A,B) = \sum_{i \in A} \sum_{j \in B} w_{ij}$

• Ratio Cut
    $J_{Rcut}(A,B) = \frac{s(A,B)}{|A|} + \frac{s(A,B)}{|B|}$

• Normalized Cut   ($d_A = \sum_{i \in A} d_i$)
    $J_{Ncut}(A,B) = \frac{s(A,B)}{d_A} + \frac{s(A,B)}{d_B}
                   = \frac{s(A,B)}{s(A,A) + s(A,B)} + \frac{s(A,B)}{s(B,B) + s(A,B)}$

• Min-Max-Cut
    $J_{MMC}(A,B) = \frac{s(A,B)}{s(A,A)} + \frac{s(A,B)}{s(B,B)}$
Normalized Cut (Shi & Malik, 2000)

Minimize similarity between A and B:   $s(A,B) = \sum_{i \in A} \sum_{j \in B} w_{ij}$

Balance weights:   $J_{Ncut}(A,B) = \frac{s(A,B)}{d_A} + \frac{s(A,B)}{d_B}$,   $d_A = \sum_{i \in A} d_i$

Cluster indicator:
    $q(i) = \begin{cases} \;\;\sqrt{d_B / (d_A d)} & \text{if } i \in A \\ -\sqrt{d_A / (d_B d)} & \text{if } i \in B \end{cases}$,
    $\qquad d = \sum_{i \in G} d_i$

Normalization:   $q^T D q = 1$,   $q^T D e = 0$

Substituting q gives   $J_{Ncut}(q) = q^T (D - W) q$

    $\min_q \; q^T (D - W) q + \lambda (q^T D q - 1)$

The solution is an eigenvector of   $(D - W) q = \lambda D q$
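A sketch of the whole 2-way procedure on synthetic data (two dense blocks with sparse connections between them, like the example on the next slide); the block sizes and densities below are made up for illustration:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Two dense clusters (blocks) with sparse connections between them
n1, n2 = 20, 20
W = np.zeros((n1 + n2, n1 + n2))
W[:n1, :n1] = (rng.random((n1, n1)) < 0.8)       # dense block 1
W[n1:, n1:] = (rng.random((n2, n2)) < 0.8)       # dense block 2
W[:n1, n1:] = (rng.random((n1, n2)) < 0.05)      # sparse between-cluster links
W = np.triu(W, 1)
W = W + W.T                                      # symmetric, zero diagonal

D = np.diag(W.sum(axis=1))

# Solve (D - W) q = lambda D q; the second-smallest eigenvector is q2
evals, evecs = eigh(D - W, D)
q2 = evecs[:, 1]

# Cluster assignment from the sign pattern of q2
labels = (q2 > 0).astype(int)
print("relaxed Ncut value (lambda_2):", evals[1])
print("cluster sizes:", np.bincount(labels))
```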
A simple example
Two dense clusters, with sparse connections between them.

[Figure: adjacency matrix (left) and the second eigenvector q2 (right)]




K-way Spectral Clustering
K ≥ 2




K-way Clustering Objectives

• Ratio Cut
    $J_{Rcut}(C_1, \ldots, C_K) = \sum_{k<l} \left( \frac{s(C_k, C_l)}{|C_k|} + \frac{s(C_k, C_l)}{|C_l|} \right) = \sum_k \frac{s(C_k, G - C_k)}{|C_k|}$

• Normalized Cut
    $J_{Ncut}(C_1, \ldots, C_K) = \sum_{k<l} \left( \frac{s(C_k, C_l)}{d_k} + \frac{s(C_k, C_l)}{d_l} \right) = \sum_k \frac{s(C_k, G - C_k)}{d_k}$

• Min-Max-Cut
    $J_{MMC}(C_1, \ldots, C_K) = \sum_{k<l} \left( \frac{s(C_k, C_l)}{s(C_k, C_k)} + \frac{s(C_k, C_l)}{s(C_l, C_l)} \right) = \sum_k \frac{s(C_k, G - C_k)}{s(C_k, C_k)}$

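All three objectives can be computed directly from W and a label vector; a small helper (a sketch, assuming every cluster has nonzero within-cluster weight):

```python
import numpy as np

def kway_objectives(W, labels):
    """Return (J_Rcut, J_Ncut, J_MMC) for a hard K-way partition of W."""
    d = W.sum(axis=1)
    J_rcut = J_ncut = J_mmc = 0.0
    for c in np.unique(labels):
        inside = (labels == c)
        cut = W[inside][:, ~inside].sum()            # s(C_k, G - C_k)
        J_rcut += cut / inside.sum()                 # divide by |C_k|
        J_ncut += cut / d[inside].sum()              # divide by d_k
        J_mmc  += cut / W[inside][:, inside].sum()   # divide by s(C_k, C_k)
    return J_rcut, J_ncut, J_mmc
```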
K-way Spectral Relaxation
Unsigned cluster indicators:
    $h_1 = (1 \cdots 1, 0 \cdots 0, 0 \cdots 0)^T$
    $h_2 = (0 \cdots 0, 1 \cdots 1, 0 \cdots 0)^T$
    $\cdots$
    $h_k = (0 \cdots 0, 0 \cdots 0, 1 \cdots 1)^T$

Re-write:
    $J_{Rcut}(h_1, \ldots, h_k) = \frac{h_1^T (D - W) h_1}{h_1^T h_1} + \cdots + \frac{h_k^T (D - W) h_k}{h_k^T h_k}$

    $J_{Ncut}(h_1, \ldots, h_k) = \frac{h_1^T (D - W) h_1}{h_1^T D h_1} + \cdots + \frac{h_k^T (D - W) h_k}{h_k^T D h_k}$

    $J_{MMC}(h_1, \ldots, h_k) = \frac{h_1^T (D - W) h_1}{h_1^T W h_1} + \cdots + \frac{h_k^T (D - W) h_k}{h_k^T W h_k}$


K-way Normalized Cut Spectral Relaxation
Unsigned cluster indicators:
    $y_k = D^{1/2} (0 \cdots 0, \overbrace{1 \cdots 1}^{n_k}, 0 \cdots 0)^T \,/\, \| D^{1/2} h_k \|$

Re-write:
    $J_{Ncut}(y_1, \ldots, y_k) = y_1^T (I - \tilde{W}) y_1 + \cdots + y_k^T (I - \tilde{W}) y_k
                                = \mathrm{Tr}(Y^T (I - \tilde{W}) Y), \qquad \tilde{W} = D^{-1/2} W D^{-1/2}$

Optimize:   $\min_Y \; \mathrm{Tr}(Y^T (I - \tilde{W}) Y), \quad \text{subject to } Y^T Y = I$

By Ky Fan's theorem, the optimal solution is given by eigenvectors: $Y = (v_1, v_2, \ldots, v_k)$, where
    $(I - \tilde{W}) v_k = \lambda_k v_k, \qquad (D - W) u_k = \lambda_k D u_k, \qquad u_k = D^{-1/2} v_k$

    $\lambda_1 + \cdots + \lambda_k \le \min J_{Ncut}(y_1, \ldots, y_k)$   (Gu et al., 2001)

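A sketch of the relaxed K-way solution via the normalized Laplacian; the sum of the K smallest eigenvalues gives the lower bound stated above (assumes W has no isolated nodes):

```python
import numpy as np
from scipy.linalg import eigh

def kway_ncut_relaxation(W, K):
    """Bottom-K eigenvectors of I - D^{-1/2} W D^{-1/2} (relaxed Ncut solution)."""
    d = W.sum(axis=1)                                  # assumes all d_i > 0
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    W_tilde = D_inv_sqrt @ W @ D_inv_sqrt
    evals, evecs = eigh(np.eye(len(d)) - W_tilde)      # ascending eigenvalues
    Y = evecs[:, :K]                                   # relaxed indicators v_1..v_K
    U = D_inv_sqrt @ Y                                 # u_k = D^{-1/2} v_k
    lower_bound = evals[:K].sum()                      # lambda_1 + ... + lambda_K <= min J_Ncut
    return Y, U, lower_bound
```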
K-way Spectral Clustering is difficult
• Spectral clustering is best applied to 2-way clustering:
    – positive entries of the eigenvector indicate one cluster
    – negative entries indicate the other cluster
• For K-way (K > 2) clustering, mixed positive and negative signs make cluster assignment difficult. Common strategies:
    – Recursive 2-way clustering
    – Low-dimensional embedding: project the data onto the eigenvector subspace, then use another clustering method such as K-means to cluster the data (Ng et al.; Zha et al.; Bach & Jordan, etc.); a sketch follows below
    – Linearized cluster assignment using spectral ordering and cluster crossing
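One common recipe from the list above (embed in the eigenvector subspace, then run K-means) can be sketched as follows; the row-normalization step follows the Ng-Jordan-Weiss variant, and scipy's kmeans2 stands in for any K-means implementation:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

def spectral_embed_kmeans(W, K):
    """Embed in the top-K eigenvector subspace of D^{-1/2} W D^{-1/2}, then K-means."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    W_tilde = D_inv_sqrt @ W @ D_inv_sqrt
    evals, evecs = eigh(W_tilde)
    Y = evecs[:, -K:]                                   # K largest eigenvectors
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)    # row-normalize (NJW step)
    _, labels = kmeans2(Y, K, minit='++')               # K-means in the eigenspace
    return labels
```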
Scaled PCA: a Unified Framework for Clustering and Ordering

• Scaled PCA has two optimality properties
    – distance-sensitive ordering
    – min-max principle clustering
• SPCA on a contingency table ⇒ Correspondence Analysis
    – simultaneous ordering of rows and columns
    – simultaneous clustering of rows and columns




Scaled PCA
Similarity matrix $S = (s_{ij})$ (generated from $X X^T$), with $D = \mathrm{diag}(d_1, \ldots, d_n)$, $d_i = s_{i.}$

Nonlinear re-scaling:   $\tilde{S} = D^{-1/2} S D^{-1/2}, \qquad \tilde{s}_{ij} = s_{ij} / (s_{i.} s_{j.})^{1/2}$

Apply SVD on $\tilde{S}$:

    $S = D^{1/2} \tilde{S} D^{1/2} = D^{1/2} \sum_k z_k \lambda_k z_k^T D^{1/2} = D \Big[ \sum_k q_k \lambda_k q_k^T \Big] D$

$q_k = D^{-1/2} z_k$ is the scaled principal component.

Subtracting the trivial component $\lambda_0 = 1$, $z_0 = d^{1/2} / s_{..}^{1/2}$, $q_0 \propto \mathbf{1}$ gives

    $S - d d^T / s_{..} = D \sum_{k \ge 1} q_k \lambda_k q_k^T D$   (Ding et al., 2002)
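A sketch of the scaled PCA decomposition above for a symmetric similarity matrix S; since $\tilde{S}$ is symmetric, an eigendecomposition stands in for the SVD, and the trivial component (largest eigenvalue 1) is dropped:

```python
import numpy as np
from scipy.linalg import eigh

def scaled_pca(S):
    """Scaled PCA of a symmetric nonnegative similarity matrix S (sketch)."""
    d = S.sum(axis=1)                          # d_i = s_i.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S_tilde = D_inv_sqrt @ S @ D_inv_sqrt      # nonlinear re-scaling
    evals, Z = eigh(S_tilde)                   # ascending eigenvalues
    evals, Z = evals[::-1], Z[:, ::-1]         # descending: trivial lambda_0 = 1 first
    Q = D_inv_sqrt @ Z                         # scaled principal components q_k = D^{-1/2} z_k
    # Dropping the trivial component gives S - d d^T / s_.. = D sum_{k>=1} q_k lambda_k q_k^T D
    return evals[1:], Q[:, 1:]
```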
Scaled PCA on a Rectangular Matrix ⇒ Correspondence Analysis

Nonlinear re-scaling:   $\tilde{P} = D_r^{-1/2} P D_c^{-1/2}, \qquad \tilde{p}_{ij} = p_{ij} / (p_{i.} p_{.j})^{1/2}$

Apply SVD on $\tilde{P}$ and subtract the trivial component:

    $P - r c^T / p_{..} = D_r \sum_{k \ge 1} f_k \lambda_k g_k^T D_c$

    $r = (p_{1.}, \ldots, p_{n.})^T, \qquad c = (p_{.1}, \ldots, p_{.n})^T$

$f_k = D_r^{-1/2} u_k$ and $g_k = D_c^{-1/2} v_k$ are the scaled row and column principal
components (standard coordinates in CA).
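A sketch of the correspondence-analysis computation described above, assuming a nonnegative contingency table of counts normalized so that $p_{..} = 1$; $u_k$ and $v_k$ are the singular vectors of $\tilde{P}$:

```python
import numpy as np

def correspondence_analysis(counts):
    """Row and column standard coordinates of a contingency table (sketch)."""
    P = counts / counts.sum()                   # normalize so p_.. = 1
    r = P.sum(axis=1)                           # row masses p_i.
    c = P.sum(axis=0)                           # column masses p_.j
    Dr_inv_sqrt = np.diag(1.0 / np.sqrt(r))
    Dc_inv_sqrt = np.diag(1.0 / np.sqrt(c))
    P_tilde = Dr_inv_sqrt @ P @ Dc_inv_sqrt     # nonlinear re-scaling
    U, svals, Vt = np.linalg.svd(P_tilde)
    # The first singular triplet is the trivial one (singular value 1); drop it.
    F = Dr_inv_sqrt @ U[:, 1:]                  # scaled row components f_k
    G = Dc_inv_sqrt @ Vt.T[:, 1:]               # scaled column components g_k
    return svals[1:], F, G
```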
Correspondence Analysis (CA)
• Mainly used for graphical display of data
• Popular in France (Benzécri, 1969)
• Long history
    – Simultaneous row and column regression (Hirschfeld, 1935)
    – Reciprocal averaging (Richardson & Kuder, 1933; Horst, 1935; Fisher, 1940; Hill, 1974)
    – Canonical correlations, dual scaling, etc.
• Formulation is a bit complicated ("convoluted", Jolliffe, 2002, p. 342)
• "A neglected method" (Hill, 1974)

Clustering of Bipartite Graphs (rectangular matrices)

Simultaneous clustering of the rows and columns of a contingency table (adjacency matrix B).

Examples of bipartite graphs:
• Information retrieval: word-by-document matrix
• Market-basket data: transaction-by-item matrix
• DNA gene-expression profiles
• Protein vs. protein-complex
Bipartite Graph Clustering
Clustering indicators for rows and columns:

    $f(i) = \begin{cases} \;\;\, 1 & \text{if } r_i \in R_1 \\ -1 & \text{if } r_i \in R_2 \end{cases}
    \qquad
    g(i) = \begin{cases} \;\;\, 1 & \text{if } c_i \in C_1 \\ -1 & \text{if } c_i \in C_2 \end{cases}$

    $B = \begin{pmatrix} B_{R_1,C_1} & B_{R_1,C_2} \\ B_{R_2,C_1} & B_{R_2,C_2} \end{pmatrix},
    \qquad
    W = \begin{pmatrix} 0 & B \\ B^T & 0 \end{pmatrix},
    \qquad
    q = \begin{pmatrix} f \\ g \end{pmatrix}$

Substituting gives

    $J_{MMC}(C_1, C_2; R_1, R_2) = \frac{s(W_{12})}{s(W_{11})} + \frac{s(W_{12})}{s(W_{22})}$

f and g are determined by

    $\left[ \begin{pmatrix} D_r & \\ & D_c \end{pmatrix} - \begin{pmatrix} 0 & B \\ B^T & 0 \end{pmatrix} \right]
    \begin{pmatrix} f \\ g \end{pmatrix}
    = \lambda \begin{pmatrix} D_r & \\ & D_c \end{pmatrix} \begin{pmatrix} f \\ g \end{pmatrix}$
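A sketch of the generalized eigenproblem above for a rectangular B; the second-smallest generalized eigenvector splits rows and columns simultaneously by sign (assumes every row and column of B has nonzero sum):

```python
import numpy as np
from scipy.linalg import eigh

def bipartite_spectral_partition(B):
    """2-way co-clustering of rows and columns of B via (D - W) q = lambda D q."""
    n_r, n_c = B.shape
    W = np.block([[np.zeros((n_r, n_r)), B],
                  [B.T, np.zeros((n_c, n_c))]])
    D = np.diag(W.sum(axis=1))
    evals, evecs = eigh(D - W, D)
    q2 = evecs[:, 1]                  # second-smallest generalized eigenvector
    f, g = q2[:n_r], q2[n_r:]         # row indicator f, column indicator g
    return (f > 0).astype(int), (g > 0).astype(int)
```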
Spectral Clustering of Bipartite Graphs
Simultaneous clustering of the rows and columns of an adjacency matrix B:

    $s(B_{R_1,C_2}) = \sum_{r_i \in R_1} \sum_{c_j \in C_2} b_{ij}$

min between-cluster sums of weights: $s(R_1, C_2)$, $s(R_2, C_1)$
max within-cluster sums of weights: $s(R_1, C_1)$, $s(R_2, C_2)$

    $J_{MMC}(C_1, C_2; R_1, R_2) = \frac{s(B_{R_1,C_2}) + s(B_{R_2,C_1})}{2\, s(B_{R_1,C_1})} + \frac{s(B_{R_1,C_2}) + s(B_{R_2,C_1})}{2\, s(B_{R_2,C_2})}$

(Ding, AI-STAT 2003)
Internet Newsgroups




   Simultaneous clustering
   of documents and words




Embedding in Principal Subspace

Cluster self-aggregation (proved via perturbation analysis)

(Hall, 1970: "quadratic placement" (embedding) of a graph)


Spectral Embedding: Self-aggregation

• Compute the K eigenvectors of the Laplacian.
• Embed the objects in the K-dimensional eigenspace.




                                                                           (Ding, 2004)
Spectral embedding is not topology-preserving

700 3-D data points form two interlocking rings.

In the eigenspace, the rings shrink and separate.




Spectral Embedding


Simplex Embedding Theorem.
Objects self-aggregate to K centroids.
The centroids are located at the K corners of a simplex.
    • The simplex consists of K basis vectors plus the coordinate origin.
    • The simplex is rotated by an orthogonal transformation T.
    • T is determined by perturbation analysis.




                                                                           (Ding, 2004)
Perturbation Analysis
    $W q = \lambda D q \;\Longleftrightarrow\; \hat{W} z = (D^{-1/2} W D^{-1/2}) z = \lambda z, \qquad q = D^{-1/2} z$

Assume the data form 3 dense clusters C1, C2, C3 that are sparsely connected:

    $W = \begin{pmatrix} W_{11} & W_{12} & W_{13} \\ W_{21} & W_{22} & W_{23} \\ W_{31} & W_{32} & W_{33} \end{pmatrix}$

The off-diagonal blocks are between-cluster connections, assumed small and treated as a perturbation.

(Ding et al., KDD'01)
Spectral Perturbation Theorem

Orthogonal transform matrix:   $T = (t_1, \ldots, t_K)$

T is determined by:   $\hat{\Gamma} t_k = \lambda_k t_k$

Spectral perturbation matrix:   $\hat{\Gamma} = \Omega^{-1/2} \Gamma \Omega^{-1/2}$

    $\Gamma = \begin{pmatrix}
        h_{11} & -s_{12} & \cdots & -s_{1K} \\
        -s_{21} & h_{22} & \cdots & -s_{2K} \\
        \vdots & \vdots & & \vdots \\
        -s_{K1} & -s_{K2} & \cdots & h_{KK}
    \end{pmatrix}$

    $s_{pq} = s(C_p, C_q), \qquad h_{kk} = \sum_{p \ne k} s_{kp}, \qquad \Omega = \mathrm{diag}[\rho(C_1), \ldots, \rho(C_K)]$



Connectivity Network

    $C_{ij} = \begin{cases} 1 & \text{if } i, j \text{ belong to the same cluster} \\ 0 & \text{otherwise} \end{cases}$

Scaled PCA provides:      $C \cong D \sum_{k=1}^{K} q_k \lambda_k q_k^T D$

Green's function:         $C \approx G = \sum_{k=2}^{K} q_k \frac{1}{1 - \lambda_k} q_k^T$

Projection matrix:        $C \approx P \equiv \sum_{k=1}^{K} q_k q_k^T$

                                                                                    (Ding et al, 2002)
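A sketch of the three approximations above, given the generalized eigenvectors $q_k$ of $W q = \lambda D q$ (scipy normalizes them so that $q_k^T D q_k = 1$, which is the normalization assumed here):

```python
import numpy as np
from scipy.linalg import eigh

def connectivity_approximations(W, K):
    """Scaled-PCA, Green's-function and projection approximations of C (sketch)."""
    D = np.diag(W.sum(axis=1))
    evals, Q = eigh(W, D)                      # W q = lambda D q, with q^T D q = 1
    evals, Q = evals[::-1], Q[:, ::-1]         # descending: lambda_1 = 1 first
    Qk, lk = Q[:, :K], evals[:K]
    C_spca  = D @ (Qk * lk) @ Qk.T @ D                         # D sum_k q_k lambda_k q_k^T D
    C_green = (Q[:, 1:K] / (1.0 - evals[1:K])) @ Q[:, 1:K].T   # sum_{k>=2} q_k q_k^T / (1 - lambda_k)
    C_proj  = Qk @ Qk.T                                        # sum_{k<=K} q_k q_k^T
    return C_spca, C_green, C_proj
```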
1st-order Perturbation: Example 1

[Figure: similarity matrix W, its connectivity matrix, and the 1st-order solution; λ2 = 0.300, λ2 = 0.268]

Between-cluster connections are suppressed, within-cluster connections are enhanced:
the effect of self-aggregation.
Optimality Properties of Scaled PCA
Scaled principal components have optimality properties:

Ordering
    – Adjacent objects along the order are similar.
    – Far-apart objects along the order are dissimilar.
    – The optimal solution for the permutation indexes is given by scaled PCA.

Clustering
    – Maximize within-cluster similarity.
    – Minimize between-cluster similarity.
    – The optimal solution for the cluster membership indicators is given by scaled PCA.

Spectral Graph Ordering
(Barnard, Pothen & Simon, 1993) Envelope reduction of a sparse matrix: find an ordering such that the envelope is minimized.

    $\min \sum_i \max_j |i - j| \, w_{ij} \;\Rightarrow\; \min \sum_{ij} (x_i - x_j)^2 w_{ij}$

(Hall, 1970) "Quadratic placement of a graph": find coordinates x that minimize

    $J = \sum_{ij} (x_i - x_j)^2 w_{ij} = x^T (D - W) x$

The solutions are eigenvectors of the Laplacian.
Distance Sensitive Ordering
Given a graph, find an optimal ordering of its nodes.

π denotes the permutation indexes: $\pi(1, \ldots, n) = (\pi_1, \ldots, \pi_n)$

    $J_d(\pi) = \sum_{i=1}^{n-d} w_{\pi_i, \pi_{i+d}}$

$J_{d=2}(\pi)$, for example, sums the weights of node pairs two positions apart along the order.

    $\min_\pi \; J(\pi) = \sum_{d=1}^{n-1} d^2 J_d(\pi)$

The larger the distance, the larger the weight (penalty).
Distance Sensitive Ordering
    $J(\pi) = \sum_{ij} (i - j)^2 w_{\pi_i, \pi_j} = \sum_{ij} (\pi_i^{-1} - \pi_j^{-1})^2 w_{ij}
            = \frac{n^2}{8} \sum_{ij} \left( \frac{\pi_i^{-1} - (n+1)/2}{n/2} - \frac{\pi_j^{-1} - (n+1)/2}{n/2} \right)^2 w_{ij}$

Define the shifted and rescaled inverse permutation indexes

    $q_i = \frac{\pi_i^{-1} - (n+1)/2}{n/2} \in \left\{ \frac{1-n}{n}, \frac{3-n}{n}, \ldots, \frac{n-1}{n} \right\}$

    $J(\pi) = \frac{n^2}{8} \sum_{ij} (q_i - q_j)^2 w_{ij} = \frac{n^2}{4} \, q^T (D - W) q$
Distance Sensitive Ordering
Once $q_2$ is computed, since

    $q_2(i) < q_2(j) \;\Rightarrow\; \pi_i^{-1} < \pi_j^{-1}$,

$\pi_i^{-1}$ can be uniquely recovered from $q_2$.

Implementation: sorting $q_2$ induces $\pi$.
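In code, the whole distance-sensitive ordering reduces to one argsort of the second generalized eigenvector (a sketch, assuming a connected graph with positive degrees):

```python
import numpy as np
from scipy.linalg import eigh

def spectral_ordering(W):
    """Order the nodes of W by sorting the second eigenvector of (D - W) q = lambda D q."""
    D = np.diag(W.sum(axis=1))
    _, evecs = eigh(D - W, D)
    q2 = evecs[:, 1]
    order = np.argsort(q2)     # entry k of 'order' is the node placed k-th
    return order
```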




Re-ordering of Genes and Tissues

[Figure: gene-expression matrix before and after re-ordering]

    $r = \frac{J(\pi)}{J(\mathrm{random})} = 0.18$

    $r_{d=1} = \frac{J_{d=1}(\pi)}{J_{d=1}(\mathrm{random})} = 3.39$


Spectral clustering vs Spectral ordering
• The continuous approximations of both integer programming problems are given by the same eigenvector.
• Different problems can have the same continuous approximate solution.
• Quality of the approximation:
    – Ordering: better quality; the solution relaxes from a set of evenly spaced discrete values.
    – Clustering: lower quality; the solution relaxes from only 2 discrete values.
Linearized Cluster Assignment

Turn spectral clustering into a 1-D clustering problem:

• Spectral ordering on the connectivity network
• Cluster crossing
    – sum of similarities along the anti-diagonals
    – gives a 1-D curve with valleys and peaks
    – divide the valleys and peaks into clusters



Cluster overlap and crossing
Given a similarity matrix W and clusters A, B:

• Cluster overlap:   $s(A,B) = \sum_{i \in A} \sum_{j \in B} w_{ij}$

• Cluster crossing computes a smaller fraction of the cluster overlap.

• Cluster crossing depends on an ordering o. It sums the weights crossing site i along the order:

    $\rho(i) = \sum_{j=1}^{m} w_{o(i-j),\, o(i+j)}$

• This is a sum along the anti-diagonals of W.
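A sketch of the cluster-crossing curve $\rho(i)$ for a given ordering o; the window half-width m is a free parameter, and terms that fall outside the matrix are simply skipped:

```python
import numpy as np

def cluster_crossing(W, order, m=5):
    """rho(i) = sum_{j=1..m} w_{o(i-j), o(i+j)}: anti-diagonal sums of the re-ordered W."""
    n = len(order)
    Wo = W[np.ix_(order, order)]           # W re-ordered by o
    rho = np.zeros(n)
    for i in range(n):
        for j in range(1, m + 1):
            if i - j >= 0 and i + j < n:
                rho[i] += Wo[i - j, i + j]
    return rho                             # valleys of rho suggest cluster boundaries
```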
Cluster crossing




K-way Clustering Experiments

Accuracy of clustering results:

    Method    Linearized Assignment    Recursive 2-way clustering    Embedding + K-means
    Data A    89.0%                    82.8%                         75.1%
    Data B    75.7%                    67.2%                         56.4%



Some Additional Advanced / Related Topics

• Random walks and normalized cut
• Semi-definite programming
• Sub-sampling in spectral clustering
• Extension to semi-supervised classification
• Green's function approach
• Out-of-sample embedding




Victor Rentea
Β 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
Β 

Recently uploaded (20)

Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
Β 
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Β 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
Β 
Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)
Β 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
Β 
Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..
Β 
Platformless Horizons for Digital Adaptability
Platformless Horizons for Digital AdaptabilityPlatformless Horizons for Digital Adaptability
Platformless Horizons for Digital Adaptability
Β 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
Β 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Β 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
Β 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Β 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
Β 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
Β 
Mcleodganj Call Girls πŸ₯° 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls πŸ₯° 8617370543 Service Offer VIP Hot ModelMcleodganj Call Girls πŸ₯° 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls πŸ₯° 8617370543 Service Offer VIP Hot Model
Β 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
Β 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Β 
Vector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptxVector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptx
Β 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
Β 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
Β 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
Β 

  • 9. Clustering Objective Functions
    Between-cluster similarity: s(A,B) = Ξ£_{i∈A} Ξ£_{j∈B} w_ij
    β€’ Ratio Cut:      J_Rcut(A,B) = s(A,B)/|A| + s(A,B)/|B|
    β€’ Normalized Cut: with d_A = Ξ£_{i∈A} d_i,
          J_Ncut(A,B) = s(A,B)/d_A + s(A,B)/d_B
                      = s(A,B)/(s(A,A) + s(A,B)) + s(A,B)/(s(B,B) + s(A,B))
    β€’ Min-Max-Cut:    J_MMC(A,B) = s(A,B)/s(A,A) + s(A,B)/s(B,B)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 64
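To make the three objectives concrete, here is a minimal numpy sketch (my own illustration, not from the slides) that evaluates J_Rcut, J_Ncut and J_MMC for a fixed 2-way partition; the toy matrix W, the function name two_way_objectives and the index sets A, B are hypothetical.

# Illustrative sketch: evaluating the three 2-way objectives for a given
# partition (A, B) of a symmetric similarity matrix W.
import numpy as np

def two_way_objectives(W, A, B):
    """Return J_Rcut, J_Ncut, J_MMC for index sets A and B."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)                      # degrees d_i = sum_j w_ij
    s_AB = W[np.ix_(A, B)].sum()           # between-cluster similarity s(A,B)
    s_AA = W[np.ix_(A, A)].sum()           # within-cluster similarity s(A,A)
    s_BB = W[np.ix_(B, B)].sum()
    J_rcut = s_AB / len(A) + s_AB / len(B)
    J_ncut = s_AB / d[A].sum() + s_AB / d[B].sum()
    J_mmc = s_AB / s_AA + s_AB / s_BB
    return J_rcut, J_ncut, J_mmc

# Two dense blocks with one weak link between them.
W = np.array([[0.0, 1.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0]])
print(two_way_objectives(W, [0, 1, 2], [3, 4]))
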
  • 10. Normalized Cut (Shi & Malik, 2000)
    Min similarity between A and B: s(A,B) = Ξ£_{i∈A} Ξ£_{j∈B} w_ij
    Balance weights: J_Ncut(A,B) = s(A,B)/d_A + s(A,B)/d_B, with d_A = Ξ£_{i∈A} d_i, d = Ξ£_{i∈G} d_i
    Cluster indicator:
        q(i) =  sqrt(d_B / (d_A d))   if i ∈ A
        q(i) = -sqrt(d_A / (d_B d))   if i ∈ B
    Normalization: qα΅€Dq = 1, qα΅€De = 0
    Substituting q leads to J_Ncut(q) = qα΅€(D - W)q, i.e. min_q qα΅€(D - W)q + Ξ»(qα΅€Dq - 1)
    The solution is the generalized eigenvector of (D - W)q = Ξ»Dq
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 65
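A minimal sketch of this relaxation, assuming a symmetric similarity matrix with strictly positive degrees so that D is positive definite; the function name ncut_bipartition is mine, and scipy.linalg.eigh is used because it solves the symmetric generalized eigenproblem (D - W)q = Ξ»Dq directly.

# Solve the relaxed normalized cut (D - W) q = lambda D q and split the
# graph by the sign of the second generalized eigenvector.
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W                              # graph Laplacian
    vals, vecs = eigh(L, D)                # generalized symmetric eigenproblem
    q2 = vecs[:, 1]                        # second smallest eigenvector
    A = np.where(q2 >= 0)[0]
    B = np.where(q2 < 0)[0]
    return A, B, vals[1]

Splitting by the sign of q2 mirrors the two-valued indicator on the slide; in practice one can also search for the best threshold along q2.
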
  • 11. A simple example
    2 dense clusters, with sparse connections between them.
    (Figures: the adjacency matrix and the eigenvector q2.)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 66
  • 12. K-way Spectral Clustering, K β‰₯ 2
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 67
  • 13. K-way Clustering Objectives
    β€’ Ratio Cut:
        J_Rcut(C1,…,CK) = Ξ£_{k<l} [ s(Ck,Cl)/|Ck| + s(Ck,Cl)/|Cl| ] = Ξ£_k s(Ck, G-Ck)/|Ck|
    β€’ Normalized Cut:
        J_Ncut(C1,…,CK) = Ξ£_{k<l} [ s(Ck,Cl)/d_k + s(Ck,Cl)/d_l ] = Ξ£_k s(Ck, G-Ck)/d_k
    β€’ Min-Max-Cut:
        J_MMC(C1,…,CK) = Ξ£_{k<l} [ s(Ck,Cl)/s(Ck,Ck) + s(Ck,Cl)/s(Cl,Cl) ] = Ξ£_k s(Ck, G-Ck)/s(Ck,Ck)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 68
  • 14. K-way Spectral Relaxation
    Unsigned cluster indicators:
        h1 = (1…1, 0…0, 0…0)α΅€
        h2 = (0…0, 1…1, 0…0)α΅€
        …
        hK = (0…0, 0…0, 1…1)α΅€
    Re-write the objectives:
        J_Rcut(h1,…,hK) = h1α΅€(D-W)h1 / h1α΅€h1  + … + hKα΅€(D-W)hK / hKα΅€hK
        J_Ncut(h1,…,hK) = h1α΅€(D-W)h1 / h1α΅€Dh1 + … + hKα΅€(D-W)hK / hKα΅€DhK
        J_MMC(h1,…,hK)  = h1α΅€(D-W)h1 / h1α΅€Wh1 + … + hKα΅€(D-W)hK / hKα΅€WhK
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 69
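The equivalence between the indicator form above and the combinatorial K-way objectives (slide 13) is easy to check numerically. The following sketch (mine, with a hypothetical 3-way partition on a random symmetric W) verifies it for the normalized cut.

# Check that sum_k h_k^T (D - W) h_k / h_k^T D h_k reproduces the
# combinatorial K-way normalized cut sum_k s(C_k, G - C_k) / d_{C_k}.
import numpy as np

rng = np.random.default_rng(0)
n = 9
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])    # hypothetical 3-way partition
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
D = np.diag(W.sum(axis=1))
L = D - W

rayleigh = 0.0
combinatorial = 0.0
for k in range(3):
    h = (labels == k).astype(float)               # unsigned indicator h_k
    rayleigh += h @ L @ h / (h @ D @ h)
    inside = np.where(labels == k)[0]
    outside = np.where(labels != k)[0]
    combinatorial += W[np.ix_(inside, outside)].sum() / W.sum(axis=1)[inside].sum()

print(np.isclose(rayleigh, combinatorial))        # expected: True
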
  • 15. K-way Normalized Cut Spectral Relaxation
    Scaled cluster indicators (n_k ones in the k-th block): y_k = D^{1/2}(0…0, 1…1, 0…0)α΅€ / ||D^{1/2} h_k||
    Re-write:
        J_Ncut(y1,…,yK) = y1α΅€(I - WΜƒ)y1 + … + yKα΅€(I - WΜƒ)yK = Tr(Yα΅€(I - WΜƒ)Y),   WΜƒ = D^{-1/2} W D^{-1/2}
    Optimize: min_Y Tr(Yα΅€(I - WΜƒ)Y), subject to Yα΅€Y = I
    By Ky Fan's theorem, the optimal solution is given by eigenvectors Y = (v1, v2, …, vK):
        (I - WΜƒ)v_k = Ξ»_k v_k,  equivalently (D - W)u_k = Ξ»_k D u_k with u_k = D^{-1/2} v_k
    Ξ»_1 + … + Ξ»_K ≀ min J_Ncut(y1,…,yK)   (Gu et al., 2001)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 70
  • 16. K-way Spectral Clustering is difficult
    β€’ Spectral clustering is best applied to 2-way clustering
        – positive entries for one cluster
        – negative entries for the other cluster
    β€’ For K-way (K > 2) clustering
        – Positive and negative signs make cluster assignment difficult
        – Recursive 2-way clustering
        – Low-dimensional embedding: project the data onto the eigenvector subspace, then use another
          clustering method such as K-means to cluster the data (Ng et al.; Zha et al.; Bach & Jordan, etc.);
          a sketch of this route follows below
        – Linearized cluster assignment using spectral ordering and cluster crossing
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 71
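As a concrete instance of the low-dimensional embedding route, here is a short sketch in the spirit of Ng et al. (my own simplification, not the slide's prescription), assuming strictly positive degrees: take the K leading eigenvectors of WΜƒ = D^{-1/2} W D^{-1/2}, row-normalize, and run K-means. spectral_kway is a hypothetical name, and scipy.cluster.vq.kmeans2 stands in for any K-means implementation.

# Embed with the K leading eigenvectors of the normalized similarity,
# then cluster the embedded points with K-means.
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

def spectral_kway(W, K):
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)                                     # assumes d_i > 0
    d_inv_sqrt = 1.0 / np.sqrt(d)
    W_norm = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]   # D^{-1/2} W D^{-1/2}
    vals, vecs = eigh(W_norm)                             # ascending eigenvalues
    Y = vecs[:, -K:]                                      # K leading eigenvectors
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)      # row-normalize (as in Ng et al.)
    _, labels = kmeans2(Y, K, minit='++')
    return labels
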
  • 17. Scaled PCA: a Unified Framework for Clustering and Ordering
    β€’ Scaled PCA has two optimality properties
        – Distance-sensitive ordering
        – Min-max principle clustering
    β€’ SPCA on a contingency table β‡’ Correspondence Analysis
        – Simultaneous ordering of rows and columns
        – Simultaneous clustering of rows and columns
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 72
  • 18. Scaled PCA
    Similarity matrix S = (s_ij) (e.g., generated from XXα΅€);  D = diag(d1,…,dn),  d_i = s_i.
    Nonlinear re-scaling: SΜƒ = D^{-1/2} S D^{-1/2},  sΜƒ_ij = s_ij / (s_i. s_j.)^{1/2}
    Apply the spectral decomposition to SΜƒ β‡’
        S = D^{1/2} SΜƒ D^{1/2} = D^{1/2} [Ξ£_k z_k Ξ»_k z_kα΅€] D^{1/2} = D [Ξ£_k q_k Ξ»_k q_kα΅€] D
    q_k = D^{-1/2} z_k is the scaled principal component.
    Subtract the trivial component Ξ»_0 = 1, z_0 = d^{1/2}/s_..^{1/2}, q_0 = 1:
        S - d dα΅€/s_.. = D [Ξ£_{kβ‰₯1} q_k Ξ»_k q_kα΅€] D     (Ding et al., 2002)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 73
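A rough numpy sketch of this decomposition (mine, assuming a symmetric nonnegative S with positive row sums so that the leading eigenvalue of SΜƒ is the trivial Ξ»_0 = 1); it also checks the reconstruction identity S - d dα΅€/s_.. = D [Ξ£_{kβ‰₯1} q_k Ξ»_k q_kα΅€] D on a small toy matrix.

# Scaled PCA: rescale S, take eigenvectors z_k of S~, map to scaled
# principal components q_k = D^{-1/2} z_k, drop the trivial component.
import numpy as np

def scaled_pca(S):
    S = np.asarray(S, dtype=float)
    d = S.sum(axis=1)                       # d_i = s_i.
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S_tilde = d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]
    lam, Z = np.linalg.eigh(S_tilde)        # ascending eigenvalues
    lam, Z = lam[::-1], Z[:, ::-1]          # leading component first (lambda_0 = 1)
    Q = d_inv_sqrt[:, None] * Z             # q_k = D^{-1/2} z_k
    return lam[1:], Q[:, 1:]                # drop the trivial component

# Reconstruction check: S - d d^T / s_.. ~= D (sum_k q_k lambda_k q_k^T) D
S = np.array([[2.0, 1.0, 0.2],
              [1.0, 2.0, 0.3],
              [0.2, 0.3, 2.0]])
lam, Q = scaled_pca(S)
d = S.sum(axis=1)
recon = np.diag(d) @ (Q * lam) @ Q.T @ np.diag(d)
print(np.allclose(S - np.outer(d, d) / S.sum(), recon))   # expected: True
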
  • 19. Scaled PCA on a Rectangular Matrix β‡’ Correspondence Analysis
    Nonlinear re-scaling: PΜƒ = D_r^{-1/2} P D_c^{-1/2},  pΜƒ_ij = p_ij / (p_i. p_.j)^{1/2}
    Apply the SVD to PΜƒ and subtract the trivial component:
        P - r cα΅€/p_.. = D_r [Ξ£_{kβ‰₯1} f_k Ξ»_k g_kα΅€] D_c
        r = (p_1., …, p_n.)α΅€,  c = (p_.1, …, p_.n)α΅€
    f_k = D_r^{-1/2} u_k and g_k = D_c^{-1/2} v_k are the scaled row and column principal
    components (standard coordinates in CA).
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 74
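A corresponding sketch for the rectangular case (mine; the function name correspondence_analysis and the toy contingency table are hypothetical): rescale P, take the SVD, drop the trivial singular triplet, and return the scaled row and column coordinates.

# Correspondence-analysis step on a contingency table of counts.
import numpy as np

def correspondence_analysis(counts):
    P = counts / counts.sum()               # joint probabilities p_ij
    r, c = P.sum(axis=1), P.sum(axis=0)     # row / column marginals
    P_tilde = P / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(P_tilde)
    # the first singular triplet (singular value 1) is the trivial component; drop it
    F = U[:, 1:] / np.sqrt(r)[:, None]      # scaled row coordinates f_k
    G = Vt[1:, :].T / np.sqrt(c)[:, None]   # scaled column coordinates g_k
    return sv[1:], F, G

counts = np.array([[30.0, 5.0, 2.0],
                   [4.0, 25.0, 3.0],
                   [1.0, 2.0, 28.0]])
sv, F, G = correspondence_analysis(counts)
print(sv)   # singular values of the non-trivial components
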
  • 20. Correspondence Analysis (CA)
    β€’ Mainly used for graphical display of data
    β€’ Popular in France (BenzΓ©cri, 1969)
    β€’ Long history
        – Simultaneous row and column regression (Hirschfeld, 1935)
        – Reciprocal averaging (Richardson & Kuder, 1933; Horst, 1935; Fisher, 1940; Hill, 1974)
        – Canonical correlations, dual scaling, etc.
    β€’ Formulation is a bit complicated ("convoluted", Jolliffe, 2002, p. 342)
    β€’ "A neglected method" (Hill, 1974)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 75
  • 21. Clustering of Bipartite Graphs (rectangular matrix)
    Simultaneous clustering of the rows and columns of a contingency table (adjacency matrix B).
    Examples of bipartite graphs:
    β€’ Information retrieval: word-by-document matrix
    β€’ Market basket data: transaction-by-item matrix
    β€’ DNA gene expression profiles
    β€’ Protein vs. protein-complex
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 76
  • 22. Bipartite Graph Clustering
    Clustering indicators for rows and columns:
        f(i) =  1 if r_i ∈ R1,   f(i) = -1 if r_i ∈ R2
        g(j) =  1 if c_j ∈ C1,   g(j) = -1 if c_j ∈ C2
    B = [ B_{R1,C1}  B_{R1,C2} ;  B_{R2,C1}  B_{R2,C2} ],   W = [ 0  B ;  Bα΅€  0 ],   q = (f ; g)
    Substituting gives
        J_MMC(C1, C2; R1, R2) = s(W12)/s(W11) + s(W12)/s(W22)
    f, g are determined by the generalized eigenproblem
        [ diag(D_r, D_c) - W ] (f ; g) = Ξ» diag(D_r, D_c) (f ; g)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 77
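A minimal sketch of this 2-way co-clustering step (mine, assuming every row and column of B has positive weight so that the block degree matrix is positive definite): form W = [0 B; Bα΅€ 0], solve the generalized eigenproblem above, and split rows and columns by the sign of the second eigenvector.

# 2-way bipartite (co-)clustering via the block generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def bipartite_bipartition(B):
    B = np.asarray(B, dtype=float)
    m, n = B.shape
    W = np.block([[np.zeros((m, m)), B],
                  [B.T, np.zeros((n, n))]])
    D = np.diag(W.sum(axis=1))              # diag(D_r, D_c)
    vals, vecs = eigh(D - W, D)             # (D - W) q = lambda D q
    q = vecs[:, 1]                          # second smallest eigenvector
    f, g = q[:m], q[m:]                     # row and column indicators
    return (np.where(f >= 0)[0], np.where(f < 0)[0],
            np.where(g >= 0)[0], np.where(g < 0)[0])
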
  • 23. Spectral Clustering of Bipartite Graphs
    Simultaneous clustering of rows and columns (adjacency matrix B):
        s(B_{R1,C2}) = Ξ£_{r_i∈R1} Ξ£_{c_j∈C2} b_ij
    min the between-cluster sums of weights: s(R1,C2), s(R2,C1)
    max the within-cluster sums of weights: s(R1,C1), s(R2,C2)
        J_MMC(C1, C2; R1, R2) = [s(B_{R1,C2}) + s(B_{R2,C1})] / (2 s(B_{R1,C1}))
                              + [s(B_{R1,C2}) + s(B_{R2,C1})] / (2 s(B_{R2,C2}))
    (Ding, AI-STAT 2003)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 78
  • 24. Internet Newsgroups
    Simultaneous clustering of documents and words (figure).
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 79
  • 25. Embedding in Principal Subspace
    Cluster self-aggregation (proved by perturbation analysis).
    (Hall, 1970, "quadratic placement" (embedding) of a graph)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 80
  • 26. Spectral Embedding: Self-aggregation β€’ Compute K eigenvectors of the Laplacian. β€’ Embed objects in the K-dim eigenspace (Ding, 2004) PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 81
  • 27. Spectral embedding is not topology preserving
    700 3-D data points form 2 interlocking rings; in eigenspace, they shrink and separate.
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 82
  • 28. Spectral Embedding
    Simplex Embedding Theorem. Objects self-aggregate to K centroids; the centroids are located
    on the K corners of a simplex.
    β€’ The simplex consists of K basis vectors plus the coordinate origin
    β€’ The simplex is rotated by an orthogonal transformation T
    β€’ T is determined by perturbation analysis (Ding, 2004)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 83
  • 29. Perturbation Analysis
    Wq = Ξ»Dq   ⇔   WΜ‚z = (D^{-1/2} W D^{-1/2}) z = Ξ»z,   q = D^{-1/2} z
    Assume the data has 3 dense clusters C1, C2, C3, sparsely connected:
        W = [ W11  W12  W13 ;  W21  W22  W23 ;  W31  W32  W33 ]
    The off-diagonal blocks are between-cluster connections, assumed small and treated as a
    perturbation.   (Ding et al., KDD'01)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 84
  • 30. Spectral Perturbation Theorem
    The orthogonal transform matrix T = (t1, …, tK) is determined by Ξ“Μƒ t_k = Ξ»_k t_k, where the
    spectral perturbation matrix is Ξ“Μƒ = Ξ©^{-1/2} Ξ“ Ξ©^{-1/2} with
        Ξ“ = [  h11  -s12  …  -s1K ;
              -s21   h22  …  -s2K ;
                …     …    …    … ;
              -sK1  -sK2  …   hKK ]
        s_pq = s(C_p, C_q),   h_kk = Ξ£_{pβ‰ k} s_kp,   Ξ© = diag[ρ(C1), …, ρ(CK)]
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 85
  • 31. Connectivity Network
    C_ij = 1 if i, j belong to the same cluster, 0 otherwise.
    Scaled PCA provides:     C β‰… D [Ξ£_{k=1}^K q_k Ξ»_k q_kα΅€] D
    Green's function:        C β‰ˆ G = Ξ£_{k=2}^K q_k q_kα΅€ / (1 - Ξ»_k)
    Projection matrix:       C β‰ˆ P ≑ Ξ£_{k=1}^K q_k q_kα΅€
    (Ding et al., 2002)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 86
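The projection-matrix approximation is straightforward to form from the scaled principal components. The sketch below (mine, with the hypothetical name connectivity_projection, assuming positive row sums) builds P = Ξ£_{k≀K} q_k q_kα΅€ from the K leading eigenvectors of SΜƒ; the Green's-function variant would simply reweight the k β‰₯ 2 terms by 1/(1 - Ξ»_k).

# Projection-matrix approximation to the connectivity network.
import numpy as np

def connectivity_projection(S, K):
    S = np.asarray(S, dtype=float)
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S_tilde = d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]
    lam, Z = np.linalg.eigh(S_tilde)
    Z = Z[:, ::-1][:, :K]                   # K leading eigenvectors
    Q = d_inv_sqrt[:, None] * Z             # q_k = D^{-1/2} z_k
    return Q @ Q.T                          # P = sum_k q_k q_k^T
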
  • 32. 1st-order Perturbation: Example 1
    (Figures: similarity matrix W, 1st-order solution, connectivity matrix; Ξ»2 = 0.300, Ξ»2 = 0.268.)
    Between-cluster connections are suppressed and within-cluster connections are enhanced:
    the effect of self-aggregation.
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 87
  • 33. Optimality Properties of Scaled PCA
    Scaled principal components have optimality properties:
    Ordering
        – Adjacent objects along the order are similar
        – Far-away objects along the order are dissimilar
        – The optimal solution for the permutation index is given by scaled PCA
    Clustering
        – Maximize within-cluster similarity
        – Minimize between-cluster similarity
        – The optimal solution for the cluster membership indicators is given by scaled PCA
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 88
  • 34. Spectral Graph Ordering
    (Barnard, Pothen, Simon, 1993) Envelope reduction of a sparse matrix: find an ordering such
    that the envelope is minimized:
        min Ξ£_i max_j |i - j| w_ij   β‡’   min Ξ£_ij (x_i - x_j)Β² w_ij
    (Hall, 1970) "Quadratic placement of a graph": find coordinates x to minimize
        J = Ξ£_ij (x_i - x_j)Β² w_ij = xα΅€(D - W)x
    The solutions are eigenvectors of the Laplacian.
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 89
  • 35. Distance-Sensitive Ordering
    Given a graph, find an optimal ordering of the nodes. Ο€ denotes the permutation:
    Ο€(1, …, n) = (Ο€_1, …, Ο€_n).
        J_d(Ο€) = Ξ£_{i=1}^{n-d} w_{Ο€_i, Ο€_{i+d}}      (e.g. J_{d=2}(Ο€) sums terms such as w_{Ο€_1, Ο€_3})
        min_Ο€ J(Ο€) = Ξ£_{d=1}^{n-1} dΒ² J_d(Ο€)
    The larger the distance, the larger the weight (penalty).
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 90
  • 36. Distance-Sensitive Ordering
        J(Ο€) = Ξ£_ij (i - j)Β² w_{Ο€_i, Ο€_j}
             = Ξ£_ij (Ο€_i^{-1} - Ο€_j^{-1})Β² w_ij
             = (nΒ²/8) Ξ£_ij [ (Ο€_i^{-1} - (n+1)/2)/(n/2) - (Ο€_j^{-1} - (n+1)/2)/(n/2) ]Β² w_ij
    Define the shifted and rescaled inverse permutation indexes
        q_i = (Ο€_i^{-1} - (n+1)/2) / (n/2)  ∈  { (1-n)/n, (3-n)/n, …, (n-1)/n }
    Then
        J(Ο€) = (nΒ²/8) Ξ£_ij (q_i - q_j)Β² w_ij = (nΒ²/4) qα΅€(D - W)q
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 91
  • 37. Distance-Sensitive Ordering
    Once q2 is computed, since q2(i) < q2(j) β‡’ Ο€_i^{-1} < Ο€_j^{-1}, the inverse permutation Ο€^{-1}
    can be uniquely recovered from q2.
    Implementation: sorting q2 induces Ο€. (A sketch follows below.)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 92
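A sketch of that implementation (mine, using the plain Laplacian eigenvector q2 as on slide 34; spectral_ordering is a hypothetical name):

# Compute q2 from the relaxed quadratic objective q^T (D - W) q and sort it
# to recover the ordering pi.
import numpy as np
from scipy.linalg import eigh

def spectral_ordering(W):
    W = np.asarray(W, dtype=float)
    D = np.diag(W.sum(axis=1))
    vals, vecs = eigh(D - W)                # eigenvectors of the Laplacian
    q2 = vecs[:, 1]                         # Fiedler vector
    order = np.argsort(q2)                  # sorting q2 induces the permutation
    return order                            # order[k] = index of the k-th node in the ordering
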
  • 38. Re-ordering of Genes and Tissues
    (Figure.) Quality of the ordering relative to a random ordering:
        r = J(Ο€) / J(random) = 0.18
        r_{d=1} = J_{d=1}(Ο€) / J_{d=1}(random) = 3.39
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 93
  • 39. Spectral clustering vs. spectral ordering
    β€’ The continuous approximations of both integer programming problems are given by the same
      eigenvector.
    β€’ Different problems can therefore have the same continuous approximate solution.
    β€’ Quality of the approximation:
        Ordering: better quality, since the solution is relaxed from a set of evenly spaced discrete values.
        Clustering: lower quality, since the solution is relaxed from only 2 discrete values.
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 94
  • 40. Linearized Cluster Assignment
    Turn spectral clustering into a 1-D clustering problem:
    β€’ Spectral ordering on the connectivity network
    β€’ Cluster crossing
        – Sum of similarities along the anti-diagonal
        – Gives a 1-D curve with valleys and peaks
        – Divide the valleys and peaks into clusters
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 95
  • 41. Cluster overlap and crossing
    Given a similarity W and clusters A, B:
    β€’ Cluster overlap: s(A,B) = Ξ£_{i∈A} Ξ£_{j∈B} w_ij
    β€’ Cluster crossing computes a smaller fraction of the cluster overlap.
    β€’ Cluster crossing depends on an ordering o; it sums the weights crossing site i along the order:
        ρ(i) = Ξ£_{j=1}^m w_{o(i-j), o(i+j)}
    β€’ This is a sum along the anti-diagonals of W.
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 96
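A direct sketch of the crossing curve (mine; the window size m and the name cluster_crossing are hypothetical, and the ordering would come from the spectral ordering sketched above):

# Cluster-crossing curve: after re-ordering W by o, rho(i) sums the weights
# on the anti-diagonal through site i, up to a window of m steps.
import numpy as np

def cluster_crossing(W, order, m):
    Wo = np.asarray(W, dtype=float)[np.ix_(order, order)]   # re-ordered similarity
    n = Wo.shape[0]
    rho = np.zeros(n)
    for i in range(n):
        for j in range(1, m + 1):
            if 0 <= i - j and i + j < n:
                rho[i] += Wo[i - j, i + j]   # w_{o(i-j), o(i+j)}
    return rho                               # valleys suggest cluster boundaries
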
  • 42. Cluster crossing (figure)
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 97
  • 43. K-way Clustering Experiments
    Accuracy of clustering results:
        Method  | Linearized Assignment | Recursive 2-way clustering | Embedding + K-means
        Data A  | 89.0%                 | 82.8%                      | 75.1%
        Data B  | 75.7%                 | 67.2%                      | 56.4%
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 98
  • 44. Some Additional Advanced/Related Topics
    β€’ Random walks and normalized cut
    β€’ Semi-definite programming
    β€’ Sub-sampling in spectral clustering
    β€’ Extension to semi-supervised classification
    β€’ Green's function approach
    β€’ Out-of-sample embedding
    PCA & Matrix Factorizations for Learning, ICML 2005 Tutorial, Chris Ding 99