
How Powerful are Graph Networks?


Slides for the "ICLR/ICML2019読み会" (ICLR/ICML 2019 paper-reading meetup).



  1. How Powerful are Graph Neural Networks? ~ Served with a Low-Pass Filter ~ (NaN, 2019/07/18)
  2. A Presentation of Amateurs, by an Amateur, for Amateurs
     Outline:
     • Introduction to Graph Neural Networks
     • GUNDAM: General Universal Network for Dynamic Active Memory
     • My Perspective on Graph Neural Networks
     • What Is the Operation in Graph Neural Networks, After All?
  3. Conclusion
     Use cases:
     • Node and graph classification
     • Drug discovery, web analytics, ..., all kinds of graph problems (DNNs too?)
     • I could not work out how such classification actually operates in a GNN
     Section 5 is titled "Less Powerful but Interesting GNNs"!? ...in a paper called "How Powerful are Graph Neural Networks?"...
     • "Revisiting Graph Neural Networks: All We Have is Low-Pass Filters"
     • Claim: the useful features lie in the low-frequency components, and the GNN output follows suit; the GNN acts as a low-pass filter (a small numeric check follows below)
     • Adjacency matrix A = I - L (L: Laplacian)
     • Is this caused by the "L"?
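A small numeric check of the low-pass claim (my own toy graph, not from the slides): with L the symmetric normalized Laplacian, the slide's propagation matrix A = I - L has eigenvalues 1 - λ, so the constant component (λ = 0) passes unchanged while every other graph frequency is damped.

    import numpy as np

    # Triangle (nodes 0,1,2) plus a pendant node 3: connected, non-bipartite
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))

    L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalized Laplacian
    A_hat = np.eye(4) - L                         # the slide's propagation matrix A = I - L

    lam = np.linalg.eigvalsh(L)                   # graph frequencies, all in [0, 2]
    print("Laplacian eigenvalues lambda:", np.round(lam, 3))
    print("Filter response 1 - lambda  :", np.round(1 - lam, 3))
    # Only lambda = 0 has response 1; every other |1 - lambda| < 1 here,
    # so stacking propagation steps acts like repeated low-pass filtering.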
  4. Graph Neural Networks
     [Figure: an example graph with its neighborhoods, node features, and edge features, plus the corresponding 0/1 adjacency matrix]
     • Undirected/directed, weighted/unweighted
     • Adjacency matrix: O(N^2) storage, but a complete representation of the network
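As a sketch of the representation itself (the edge list below is hypothetical, not the graph drawn on the slide), a dense adjacency matrix costs O(N^2) memory regardless of how many edges actually exist:

    import numpy as np

    edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # undirected, unweighted
    N = 4

    A = np.zeros((N, N))          # dense storage: O(N^2) memory
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0   # symmetric entries for an undirected graph

    print(A)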
  5. Graph Neural Networks
     [Figure: step-by-step propagation over the same graph; letters a..e mark how far each node's information has spread, alongside the O(N^2) adjacency matrix and the weights]
     • Step 1 (k=1), Step 2 (k=2), Step 3 (k=3), Step 4 (k=4): each step pulls in one further hop of neighbours
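The step pictures correspond to powers of the adjacency matrix: (A^k)[i, j] > 0 exactly when a length-k walk connects i and j, so k propagation steps reach the k-hop neighbourhood. A toy check on the same hypothetical 4-node graph:

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    walk = A.copy()               # walk holds A^k inside the loop
    for k in range(1, 5):
        print(f"k={k}: node 0 reaches", np.flatnonzero(walk[0] > 0))
        walk = walk @ A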
  6. Preliminary
     • O: zero matrix/vector (o_{i,j} = 0)
     • U: ones matrix/vector (u_{i,j} = 1)
     • E: unit (identity) matrix (e_{i,j} = 1 if i = j, otherwise e_{i,j} = 0)
     • Matrix product: D = B・C
     • Matrix/vector decomposition: B = [B1, B2] = [B1, O] + [O, B2]
     • Hadamard product ◎: B◎C = E・B・(E・C)
     • Graph representation
     • Adjacency matrix A: a_{i,j} = 1 if node i and node j are connected
     • Baseline graph G = f(A◎W・X): mask W by A (= edge-pruning flags); see the sketch after this list
     • Keep the full W for the next training step
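A minimal sketch of the baseline layer G = f((A◎W)・X) named above, assuming numpy and a tanh activation (both my choices): only weight entries sitting on actual edges survive the mask, while the full W persists between training steps.

    import numpy as np

    def baseline_layer(A, W, X, f=np.tanh):
        # "*" is the Hadamard product: entries of W at non-edges are zeroed out
        return f((A * W) @ X)

    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    W = rng.normal(size=(4, 4))   # full W is kept (unmasked) between steps
    X = rng.normal(size=(4, 3))   # node features

    print(baseline_layer(A, W, X))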
  7. Cheat Sheet
     [Figure: each operation written as a block-matrix product followed by an activation f(・)]
     • Feedforward network: block-diagonal weights W(1), W(2), W(3), with f(・) an activation function
     • Concat: identity blocks E, with f(X) = X
     • Sum: multiply by the ones vector U, with f(X) = X
     • Residual: identity blocks E, with f(X) = X
     • Mean-Pool: multiply by U, with f(X) = X / |U|
     • Max-Pool: identity block E, with f(X) = argmax(X)
     • Readout / Injection
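A compact numpy paraphrase of the cheat sheet (my rendering, not the slide's exact block matrices): each operation is a matrix product followed by an elementwise f, with pooling realized via the ones vector U.

    import numpy as np

    X = np.arange(12, dtype=float).reshape(4, 3)  # 4 nodes, 3 features
    u = np.ones((1, 4))                           # the "U" (ones) vector

    sum_readout  = u @ X               # Sum: f(X) = X after multiplying by U
    mean_readout = (u @ X) / 4         # Mean-Pool: f(X) = X / |U|
    max_readout  = X.max(axis=0)       # Max-Pool: max over the node axis
    residual     = X + np.tanh(X)      # Residual: identity branch plus f branch
    concat       = np.hstack([X, X])   # Concat: stacking identity blocks [E, E]

    print(sum_readout, mean_readout, max_readout)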
  8. Graph Neural Networks
     • AGGREGATE: a_v = f((A◎W)・h_u)
     • COMBINE:   h_v = f(h_v, a_v)
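A minimal message-passing layer in this AGGREGATE/COMBINE form; sum aggregation and a concat-then-linear combine are my own choices, since the slide leaves both functions abstract.

    import numpy as np

    def gnn_layer(A, H, W_agg, W_comb, f=np.tanh):
        a = f(A @ H @ W_agg)                   # AGGREGATE: sum over neighbours
        return f(np.hstack([H, a]) @ W_comb)   # COMBINE: mix h_v with a_v

    rng = np.random.default_rng(1)
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    H = rng.normal(size=(4, 3))                # initial node features h_v
    H = gnn_layer(A, H, rng.normal(size=(3, 3)), rng.normal(size=(6, 3)))
    print(H.shape)                             # (4, 3): one vector per node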
  9. Corollaries 8 & 9 (Fig. 3)
     [Figure: two different neighborhoods (two vs. four neighbours of the same two feature colours) with their 0/1 adjacency matrices]
     • Mean = (■ + ■) / (1 + 1) and Mean = (■ + ■ + ■ + ■) / (1 + 1 + 1 + 1) coincide
     • Max-Pool over (■, ■) and over (■, ■, ■, ■) coincide
     • Isomorphic? No, yet mean and max aggregation cannot tell the two apart
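A tiny numeric instance (my own) of why this matters: the neighbour multisets {x, y} and {x, x, y, y} differ, yet mean and max aggregation map them to the same vector; only sum (as used by GIN) separates them.

    import numpy as np

    x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    small = np.stack([x, y])          # neighbour multiset {x, y}
    big   = np.stack([x, x, y, y])    # neighbour multiset {x, x, y, y}

    for name, agg in [("mean", lambda m: m.mean(axis=0)),
                      ("max",  lambda m: m.max(axis=0)),
                      ("sum",  lambda m: m.sum(axis=0))]:
        print(name, agg(small), agg(big))
    # mean and max give identical results on both multisets; only sum differs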
  10. Conclusion (verbatim repeat of Slide 3)
