NS-CUK Seminar: J.H.Lee, Review on "Recipe for a General, Powerful, Scalable Graph Transformer", NeurIPS 2022
1. Joo-Ho Lee
School of Computer Science and
Information Engineering, The Catholic
University of Korea
E-mail: jooho414@gmail.com
2023-04-14
2. 1
Introduction
• Problem statement
• Contributions
Related Works
Methodology
Experiment
Conclusion
3. 2
Introduction
Problem Statement
• Graph Transformers (GTs) alleviate the problems of message passing, such as over-smoothing and over-squashing, by using global attention (attention across all pairs of nodes).
• For global attention to be effective, nodes and graph substructures must be well identified. This has driven the development of various positional encoding schemes built on spectral features and other graph features (see the sketch after this list).
• Standard global attention has $O(N^2)$ time complexity in the number of nodes $N$, which in practice limits GTs to graphs of a few hundred nodes.
• While prior work focuses on identifying nodes through such encodings, there is no principled framework for designing GTs.
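Below is a minimal sketch of one common spectral positional encoding, the Laplacian eigenvector PE, assuming NumPy; the function name and the dense-adjacency input are illustrative, not the paper's API:

```python
import numpy as np

def laplacian_eigvec_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """k non-trivial eigenvectors of the normalized Laplacian as per-node PEs."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetrically normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue ~ 0); keep the next k
    return eigvecs[:, 1:k + 1]

# Usage: 2-dim PEs for a 4-node cycle graph, to be concatenated onto node features
adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.]])
print(laplacian_eigvec_pe(adj, k=2).shape)  # (4, 2)
```

Note that eigenvectors are only defined up to sign (and up to basis for repeated eigenvalues), which is why LapPE-style encodings are typically trained with random sign flips.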
4. 3
Introduction
Contribution
• Proposed GPS, a set of design principles (a "recipe") for building general, powerful, scalable Graph Transformers (GTs).
• Characterized positional and structural encoding schemes and organized them into local, global, and relative categories.
• Demonstrated that GPS with linear global attention from Performer or BigBird scales to graphs with thousands of nodes and achieves good results even without explicit edge features in the attention module, whereas traditional fully-connected GTs are limited to graphs of at most a few hundred nodes (a sketch of such a hybrid layer follows below).
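A simplified sketch of the hybrid layer this recipe implies, assuming PyTorch: a local message-passing branch and a global attention branch are computed in parallel, summed, and passed through an MLP. A kernelized linear attention (elu + 1 feature map) stands in here for the Performer/BigBird attention used in the paper, and the mean-aggregation MPNN and all names are illustrative; the paper's implementation supports configurable MPNNs with edge features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GPSLayerSketch(nn.Module):
    """Hybrid layer: local message passing + global attention, summed, then MLP."""
    def __init__(self, dim: int):
        super().__init__()
        self.local_lin = nn.Linear(dim, dim)  # stands in for a full MPNN branch
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.norm_local = nn.LayerNorm(dim)
        self.norm_global = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(),
                                 nn.Linear(2 * dim, dim))

    def linear_attention(self, x):
        # Kernelized linear attention (elu(x) + 1 feature map): O(N) in node
        # count, a stand-in for the Performer attention used in the paper.
        q = F.elu(self.q(x)) + 1                    # (N, d)
        k = F.elu(self.k(x)) + 1                    # (N, d)
        v = self.v(x)                               # (N, d)
        kv = k.T @ v                                # (d, d) -- no N x N matrix
        z = q @ k.sum(dim=0, keepdim=True).T        # (N, 1) normalizer
        return (q @ kv) / z

    def forward(self, x, adj):
        # Local branch: mean over neighbors (GCN-like), no edge features here
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        local = self.norm_local(x + self.local_lin(adj @ x / deg))
        # Global branch: every node attends to every node in linear time
        glob = self.norm_global(x + self.linear_attention(x))
        # GPS combines the two branches by summation, then a shared MLP
        h = local + glob
        return h + self.mlp(h)

# Usage: a 1,000-node random graph with 64-dim node features
N, d = 1000, 64
x = torch.randn(N, d)
adj = (torch.rand(N, N) < 0.01).float()
out = GPSLayerSketch(d)(x, adj)  # (1000, 64)
```

The scaling point is visible in `linear_attention`: the $K^T V$ product is $d \times d$, so no $N \times N$ attention matrix is ever materialized, which is what lets the global branch handle thousands of nodes.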
13. 12
Conclusion
• This paper lays the groundwork for a unified graph neural network architecture, combining modular, scalable graph Transformers with a better understanding of the significance of positional and structural encodings in graphs.
• Through their ablation studies, the authors showed that each module (the Transformer, flexible message passing, and advanced positional and structural encodings) plays a vital role in the success of GPS across various benchmarks.
Editor's Notes
I had already previously reviewed all of the papers that use propagation for rumor detection.