Coarformer: Transformer for Large Graph via Graph Coarsening
1. Van Thuy Hoang
Network Science Lab
Dept. of Artificial Intelligence
The Catholic University of Korea
E-mail: hoangvanthuy90@gmail.com
2023-01-08
OpenReview. PMLR 23.
2. Problems
The applicability of the Transformer to large graphs
The distant-nodes problem: local message passing struggles to capture interactions between far-apart nodes
Quadratic computational complexity with respect to the number of nodes
3. Ideas
Both local and global information are useful for encoding each node in a large graph.
Coarformer is a two-view architecture consisting of a fine-grained local view and a coarse global view:
Local view: a GNN-based module is applied to the original input graph to encode each node from its local topological structure and node features.
Global view: a Transformer-based module is applied to a coarse graph produced by an existing graph coarsening algorithm.
4. Overview of the Coarformer architecture
The input graph G is passed through graph coarsening to generate a coarse graph G'.
The GNN-based module and the Transformer-based module then work on G and G', respectively.
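As a rough sketch of the two-view idea (the mean-aggregation GNN layer, the weight shapes, and the way the views are merged are illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

def coarformer_layer(A, X, M, Wg, Wq, Wk, Wv):
    """One illustrative two-view layer: a GNN encodes nodes on the original
    graph G, self-attention encodes super-nodes on the coarse graph G'.
    A: (n, n) adjacency, X: (n, d) features, M: (n, n') hard assignment."""
    # Local view: mean-aggregation GNN layer with self-loops on G.
    A_hat = A + np.eye(A.shape[0])
    H = np.maximum((A_hat / A_hat.sum(1, keepdims=True)) @ X @ Wg, 0.0)

    # Global view: self-attention over mean-pooled super-node features of G'.
    Xc = (M / M.sum(0, keepdims=True)).T @ X
    Q, K, V = Xc @ Wq, Xc @ Wk, Xc @ Wv
    S = Q @ K.T / np.sqrt(K.shape[1])
    P = np.exp(S - S.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)              # row-wise softmax
    Z = P @ V

    # Combine: each node keeps its local encoding plus its super-node's
    # global encoding (one simple way to merge the two views).
    return H + M @ Z
```

Here attention runs over only n' super-nodes, which is the source of the complexity savings discussed later in the deck.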
5. Graph coarsening
Graph coarsening reduces the size of a graph by a chosen coarsening rate, grouping the nodes of the graph into clusters of similar nodes.
It produces a partition {C_1, ..., C_{n'}} of V and regards each cluster C_i as a super-node.
With the hard assignment matrix M ∈ {0, 1}^{n × n'} (M_{ij} = 1 iff node i belongs to C_j), the super-node features and the coarse adjacency matrix can be obtained as X' = D^{-1} M^T X (cluster means) and A' = M^T A M, where D is the diagonal matrix of cluster sizes.
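Under a hard-assignment formulation (mean-pooled cluster features are one common choice; the paper's exact normalization may differ), the coarsening step might look like:

```python
import numpy as np

def coarsen(A, X, clusters):
    """clusters[i] gives the super-node (cluster) index of node i.
    Returns coarse features X' (cluster means) and adjacency A' = M^T A M."""
    n, n_prime = len(clusters), max(clusters) + 1
    M = np.zeros((n, n_prime))
    M[np.arange(n), np.asarray(clusters)] = 1.0   # hard assignment matrix
    X_c = (M / M.sum(0, keepdims=True)).T @ X     # mean-pool node features
    A_c = M.T @ A @ M                             # aggregate edge weights
    return X_c, A_c
```

On a 4-node ring grouped into two 2-node clusters, the off-diagonal entries of A' count the edges crossing between the clusters.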
7. Cross-view propagation
Take the outputs of each layer of the GNN-based module and the Transformer-based module, respectively; their forward propagation is defined so that each layer's output in one view conditions the next layer of the other view.
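One plausible reading of the cross-view exchange (the residual update rule below is an illustrative sketch, not the paper's exact equations):

```python
import numpy as np

def cross_view_step(H, Z, M):
    """Exchange information between the two views before the next layer.
    H: (n, d) node states from the GNN on G; Z: (n', d) super-node states
    from the Transformer on G'; M: (n, n') hard assignment matrix."""
    H_next = H + M @ Z                                # node receives its super-node's state
    Z_next = Z + (M / M.sum(0, keepdims=True)).T @ H  # super-node receives the mean of its members
    return H_next, Z_next
```

Because each node only needs its own super-node's state, this exchange can be computed on a sampled mini-batch of nodes, which is what makes the scheme compatible with mini-batch training.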
8. Computational complexity
Graph coarsening algorithms make it possible to control the size n' of the coarse graph.
Accordingly, the time complexity of the Transformer-based module is quadratic in the number of super-nodes, O(n'^2); with n' kept on the order of √n, this is linear in the original number of nodes n.
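The arithmetic behind the claim, assuming the coarse size is chosen as n' ≈ √n (one way to realize a controlled coarsening rate):

```python
# Self-attention cost grows quadratically in the number of tokens attended over.
def attn_cost(num_tokens: int) -> int:
    return num_tokens ** 2

n = 10_000                  # nodes in the original graph G
n_prime = int(n ** 0.5)     # choose n' = sqrt(n) = 100 super-nodes in G'

assert attn_cost(n) == 100_000_000   # full-graph attention: quadratic in n
assert attn_cost(n_prime) == n       # coarse-graph attention: n'^2 = n, linear in n
```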
9. Experiments
Five homophilic graphs: Cora, CiteSeer, PubMed, Computers, Photo
Five heterophilic graphs: Chameleon, Squirrel, Actor, Texas, Cornell
The Transformer-based module is adapted from Graphormer, which achieves excellent results on graph-level benchmarks, with:
Different positional encodings (PEs)
Attention bias: None, PPR (personalized PageRank), SPD (shortest-path distance)
12. Conclusion
A novel two-view architecture, Coarformer: a GNN-based module and a Transformer-based module encode nodes from the local and the global view.
A cross-view propagation scheme that is consistent with mini-batch training.