Abstract (excerpt): …the parameters of these prediction functions can be learned by reducing the problem to a one-dimensional regression problem whose runtime depends only on the method's reduced rank and that can be inspected visually. We derive variants that apply to undirected, weighted, unweighted, unipartite and bipartite graphs. We evaluate our method experimentally using examples from social networks, collaborative filtering, trust networks, citation networks, authorship graphs and hyperlink networks.


- 1. Learning Spectral Graph Transformations for Link Prediction. Jérôme Kunegis & Andreas Lommatzsch, DAI-Labor, Technische Universität Berlin, Germany. 26th Int. Conf. on Machine Learning, Montréal 2009.
- 2. Outline.
  The Problem: link prediction; known solutions.
  Learning: spectral transformations; finding the best spectral transformation.
  Variants: weighted and signed graphs; bipartite graphs and the SVD; graph Laplacian and normalization.
  Some applications.
- 3. The Problem: Link Prediction. Known links (A); links to be predicted (B). Motivation: recommend connections in a social network; predict links in an undirected, unweighted network. Using the adjacency matrices A and B, find a function F(A) giving prediction values corresponding to B, i.e. F(A) ≈ B.
- 4. Path Counting. Follow paths: the number of paths of length k is given by Aᵏ. Favor nodes connected by many paths and by short paths. Weight the powers of A: αA² + βA³ + γA⁴ + … with α > β > γ > … > 0. Examples:
  exponential graph kernel: exp(αA) = Σᵢ (αⁱ/i!) Aⁱ
  von Neumann kernel: (I − αA)⁻¹ = Σᵢ αⁱAⁱ (with 0 < α < 1)
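The weighted path counts and kernels on this slide can be sketched in a few lines of numpy/scipy; the 4-node path graph and the weights α, β, γ are illustrative assumptions, not from the slides.

```python
import numpy as np
from scipy.linalg import expm

# Toy undirected, unweighted graph: a 4-node path 0-1-2-3 (an assumption).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Weighted path counting: score paths of length 2, 3, 4 with
# decreasing weights alpha > beta > gamma > 0.
alpha, beta, gamma = 0.5, 0.25, 0.125
A2 = A @ A
scores_poly = alpha * A2 + beta * (A2 @ A) + gamma * (A2 @ A2)

# Exponential graph kernel exp(alpha*A) = sum_i (alpha^i / i!) A^i.
scores_exp = expm(alpha * A)

# Von Neumann kernel (I - alpha*A)^(-1); the series converges for
# alpha below the reciprocal of the spectral radius.
lam_max = np.max(np.abs(np.linalg.eigvalsh(A)))
a = 0.9 / lam_max
scores_vn = np.linalg.inv(np.eye(4) - a * A)

# Nodes at distance 2 (0 and 2) score higher than nodes at distance 3
# (0 and 3), since shorter paths get more weight.
```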
- 5. Laplacian Link Prediction Functions. Graph Laplacian L = D − A. "Resistance distance" L⁺ (a.k.a. commute time). Regularized Laplacian (I + αL)⁻¹. Heat diffusion kernel exp(−αL).
- 6. Computation of Link Prediction Functions.
  Adjacency matrix eigenvalue decomposition A = UΛUᵀ:
    matrix polynomial: Σᵢ αᵢAⁱ = U (Σᵢ αᵢΛⁱ) Uᵀ
    matrix exponential: exp(αA) = U exp(αΛ) Uᵀ
    von Neumann kernel: (I − αA)⁻¹ = U (I − αΛ)⁻¹ Uᵀ
    rank reduction: A(k) = U Λ(k) Uᵀ
  Graph Laplacian eigenvalue decomposition L = D − A = UΛUᵀ:
    resistance distance: L⁺ = U Λ⁺ Uᵀ
    regularized Laplacian: (I + αL)⁻¹ = U (I + αΛ)⁻¹ Uᵀ
    heat diffusion kernel: exp(−αL) = U exp(−αΛ) Uᵀ
  In every case a function is applied to the eigenvalues: a spectral transformation.
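Each function on this slide needs only one symmetric eigenvalue decomposition; a minimal sketch, assuming a toy 5-cycle as the graph, verifies that applying f to the eigenvalues reproduces the direct matrix computation:

```python
import numpy as np
from scipy.linalg import expm

# Toy graph: cycle C5 (an assumption for illustration).
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

lam, U = np.linalg.eigh(A)  # A = U diag(lam) U^T

def spectral_transform(f):
    """Return U diag(f(lam)) U^T."""
    return (U * f(lam)) @ U.T

alpha = 0.3  # small enough that alpha * |lam| < 1 on C5
K_exp = spectral_transform(lambda x: np.exp(alpha * x))       # exp(alpha*A)
K_vn = spectral_transform(lambda x: 1.0 / (1.0 - alpha * x))  # von Neumann

# Both agree with the direct matrix computations.
assert np.allclose(K_exp, expm(alpha * A))
assert np.allclose(K_vn, np.linalg.inv(np.eye(n) - alpha * A))
```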
- 7. Learning Spectral Transformations. Link prediction functions are spectral transformations of A or L: F(A) = U F(Λ) Uᵀ with F(Λ)ᵢᵢ = f(Λᵢᵢ). A spectral transformation F corresponds to a function f on the reals:
    matrix polynomial F(A) = Σᵢ αᵢAⁱ ↔ f(x) = Σᵢ αᵢxⁱ (real polynomial)
    matrix exponential F(A) = exp(αA) ↔ f(x) = exp(αx) (real exponential)
    matrix inverse F(A) = (I ± αA)⁻¹ ↔ f(x) = 1 / (1 ± αx) (rational function)
    pseudoinverse F(A) = A⁺ ↔ f(x) = 1/x when x > 0, else 0
    rank-k approximation F(A) = A(k) ↔ f(x) = x when |x| ≥ x₀, else 0
- 8. Finding the Best Spectral Transformation. Find the spectral transformation that best fits the test set B:
      min_F ‖F(A) − B‖_F
    = min_F ‖U F(Λ) Uᵀ − B‖_F
    = min_F ‖F(Λ) − UᵀBU‖_F   (the norm is preserved by the orthogonal U)
  Reduce to the diagonal, because the off-diagonal entries of F(Λ) are constantly zero:
      min_f Σᵢ (f(Λᵢᵢ) − (UᵀBU)ᵢᵢ)²
  The best spectral transformation is thus given by a one-dimensional least-squares problem.
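The one-dimensional least-squares step can be sketched as follows. The synthetic target B = A²/10 is an assumption chosen so that the true curve is known (f(x) = x²/10); the paper fits specific curve families, while this sketch simply uses a cubic polynomial fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30

# Random undirected training graph A (an assumption).
upper = np.triu(rng.random((n, n)) < 0.2, 1).astype(float)
A = upper + upper.T
B = A @ A / 10  # synthetic target graph: here f(x) = x^2 / 10 exactly

lam, U = np.linalg.eigh(A)
target = np.diag(U.T @ B @ U)  # (U^T B U)_ii, one value per eigenvalue

# One-dimensional least-squares fit of f to the pairs (lam_i, target_i).
coeffs = np.polyfit(lam, target, deg=3)
f = np.poly1d(coeffs)

B_pred = (U * f(lam)) @ U.T        # learned prediction F(A)
err = np.linalg.norm(B_pred - B)   # Frobenius error; ~0 for this target
```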
- 9. Example: DBLP Citation Network (undirected). Symmetric adjacency matrices A = UΛUᵀ and B. [Plot: (UᵀBU)ᵢᵢ against Λᵢᵢ, with fitted curves αx³ + βx² + γx + δ; β exp(αx); and x when x < x₀, else 0.]
- 10. Variants: Weighted and Signed Graphs. Weighted undirected graphs: use A and L = D − A as is. Signed graphs: use Dᵢᵢ = Σⱼ |Aᵢⱼ| (the signed graph Laplacian). [Plot: spectral fit for the Slashdot Zoo, a social network with negative edges.]
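The signed degree definition above can be checked on a tiny example; the three-node signed triangle is an assumed toy graph. With one negative edge the cycle is unbalanced, and the signed Laplacian then has only strictly positive eigenvalues (the property noted later for Epinions).

```python
import numpy as np

# Signed triangle with one negative edge (unbalanced cycle, an assumption).
A = np.array([[ 0.,  1., -1.],
              [ 1.,  0.,  1.],
              [-1.,  1.,  0.]])

# Signed graph Laplacian: D_ii = sum_j |A_ij|, L = D - A.
D = np.diag(np.abs(A).sum(axis=1))
L = D - A

eigs = np.linalg.eigvalsh(L)  # all strictly positive for this graph
```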
- 11. Bipartite Graphs. In a bipartite graph, paths between the two sides have odd length, so compute a sum of odd powers of A; the resulting polynomial is odd: αA³ + βA⁵ + … For other link prediction functions, use the odd component:
    exp(αA) → sinh(αA)
    (I − αA)⁻¹ → αA (I − α²A²)⁻¹
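The odd-component idea can be checked numerically: for a bipartite A, even powers only connect nodes on the same side, so sinh(αA), which contains only odd powers, has vanishing within-side blocks. The small 3×2 incidence matrix is an assumption.

```python
import numpy as np
from scipy.linalg import sinhm

# Bipartite graph with sides of size 3 and 2 (assumed toy example).
R = np.array([[1., 0.],
              [1., 1.],
              [0., 1.]])
A = np.block([[np.zeros((3, 3)), R],
              [R.T, np.zeros((2, 2))]])

alpha = 0.5
K = sinhm(alpha * A)  # odd part of exp(alpha*A)

# Within-side blocks vanish; the cross-side blocks carry the scores.
```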
- 12. Singular Value Decomposition. Odd power of a bipartite graph's adjacency matrix:
    A²ⁿ⁺¹ = [0 R; Rᵀ 0]²ⁿ⁺¹ = [0 (RRᵀ)ⁿR; Rᵀ(RRᵀ)ⁿ 0]
  Using the singular value decomposition R = UΣVᵀ:
    (RRᵀ)ⁿR = (UΣVᵀVΣUᵀ)ⁿ UΣVᵀ = (UΣ²Uᵀ)ⁿ UΣVᵀ = UΣ²ⁿ⁺¹Vᵀ
  Odd powers of A are given by odd spectral transformations of R.
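The identity on this slide can be verified directly on a random bipartite adjacency matrix (the 4×3 matrix R is an assumption): the top-right block of A⁵ equals UΣ⁵Vᵀ.

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.random((4, 3))  # assumed biadjacency matrix
m, k = R.shape

# Full bipartite adjacency matrix A = [0 R; R^T 0].
A = np.block([[np.zeros((m, m)), R],
              [R.T, np.zeros((k, k))]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)

n = 2  # check A^(2n+1) = A^5
A5 = np.linalg.matrix_power(A, 2 * n + 1)
top_right = A5[:m, m:]  # should equal (R R^T)^n R = U Sigma^5 V^T
assert np.allclose(top_right, (U * s ** (2 * n + 1)) @ Vt)
```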
- 13. SVD Example: Bipartite Rating Graph. MovieLens rating graph, rating values {−2, −1, 0, +1, +2}. [Plot: singular-value fit with curves αx⁵ + βx³ + γx; β sinh(αx); αx (x < x₀); sgn(x)(α|x|² + β|x| + γ) (x < x₀).]
- 14. Base Matrices. Base matrices: A and L. Normalized variants: D^(−1/2)AD^(−1/2) and D^(−1/2)LD^(−1/2) = I − D^(−1/2)AD^(−1/2). Example: trust network Advogato with D^(−1/2)AD^(−1/2). [Plot: fitted curves αx² + βx + γ (α, β, γ ≥ 0); β exp(αx); β / (1 − αx).] Note: ignore the eigenvalue 1 (constant eigenvector).
- 15. Learning Laplacian Kernels. Epinions (signed user–user network), using L. Note: Λᵢᵢ > 0 because the graph is signed and unbalanced. [Plot: fitted curves β / (1 + αx) and β exp(−αx).]
- 16. Almost Bipartite Networks. Dating site LíbímSeTi.cz (users rate users). Some networks are "almost" bipartite: the spectrum plot has near central symmetry. Bipartition: men/women. [Plot: fitted curves αx³ + βx² + γx + δ and β sinh(αx).]
- 17. Learning the Reduced Rank k. Some plots suggest a reduced rank k: beyond it, (UᵀBU)ᵢᵢ ≈ 0 and the singular values are no longer significant. Example: MovieLens/SVD; learned k = 14.
- 18. Conclusion & Ongoing Research. Conclusions: many link prediction functions are spectral transformations, and spectral transformations can be learned. Ongoing research: new link prediction functions such as sinh(A) and the odd von Neumann (pseudo-)kernel; the signed graph Laplacian; other matrix decompositions; other norms; more datasets. Thank you.
