Joo-Ho Lee
School of Computer Science and Information Engineering,
The Catholic University of Korea
E-mail: jooho414@gmail.com
2023-09-18
Introduction
Problem Statement
• Depending on how the graph signals (or features) are leveraged, GNNs can be roughly categorized into two
classes, namely spatial GNNs and spectral GNNs
• Although spatial GNNs have achieved impressive performance in many domains, spectral GNNs remain comparatively under-explored
• There are a few reasons why spectral GNNs have not been able to catch up.
• First, most existing spectral filters are essentially scalar-to-scalar functions
• In particular, they take a single eigenvalue as input and apply the same filter to all eigenvalues
• This filtering mechanism could ignore the rich information embedded in the spectrum, i.e., the set of eigenvalues
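This scalar-to-scalar mechanism can be made concrete with a small NumPy sketch (the filter g and the example graph are illustrative, not from the paper): a single function g is applied elementwise to every eigenvalue, so no information about the rest of the spectrum enters the filtering.

```python
import numpy as np

def scalar_filter(L, g):
    """Apply g elementwise to the eigenvalues of a symmetric Laplacian L."""
    lam, U = np.linalg.eigh(L)           # spectral decomposition L = U diag(lam) U^T
    return U @ np.diag(g(lam)) @ U.T     # filtered operator g(L)

# Example: a low-pass filter g(lam) = exp(-lam) on a 3-node path graph,
# whose Laplacian eigenvalues are {0, 1, 3}.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
H = scalar_filter(L, lambda lam: np.exp(-lam))
```

Note that g sees one eigenvalue at a time; the filtered value at eigenvalue 1 is the same regardless of whether the rest of the spectrum is {0, 3} or anything else.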
• Second, the spectral filters are often approximated via fixed-order (or truncated) orthonormal bases, e.g.,
Chebyshev polynomials and graph wavelets, in order to avoid the costly spectral decomposition of the graph
Laplacian
• Although orthonormality is a desirable property, this truncated approximation is less expressive and may severely limit graph representation learning
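The truncated approximation can be sketched as a fixed-order Chebyshev filter in the style of ChebNet; here theta stands in for the K+1 learned coefficients, and the spectrum of L is assumed to lie in [0, 2] (as for a normalized Laplacian). Only repeated matrix-vector products are needed, avoiding eigendecomposition, but the filter is restricted to degree-K polynomials of the Laplacian.

```python
import numpy as np

def cheb_filter(L, x, theta):
    """Filter signal x with a degree-(len(theta)-1) Chebyshev polynomial of L.

    Assumes the eigenvalues of L lie in [0, 2] (normalized Laplacian)."""
    n = L.shape[0]
    L_hat = L - np.eye(n)                  # rescale spectrum from [0, 2] to [-1, 1]
    T_prev, T_cur = x, L_hat @ x           # T_0(L_hat) x and T_1(L_hat) x
    out = theta[0] * T_prev + theta[1] * T_cur
    for k in range(2, len(theta)):
        T_prev, T_cur = T_cur, 2.0 * (L_hat @ T_cur) - T_prev  # Chebyshev recurrence
        out = out + theta[k] * T_cur
    return out

# 2-node graph whose normalized Laplacian has eigenvalues {0, 2}
L = np.array([[ 1., -1.],
              [-1.,  1.]])
y = cheb_filter(L, np.array([1., 2.]), theta=[0.5, 0.3, 0.2])
```

However expressive the coefficients, the filter remains a fixed-order polynomial of a single eigenvalue, which is exactly the limitation discussed above.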
• Therefore, in order to improve spectral GNNs, it is natural to ask
How can we build expressive spectral filters that can effectively leverage the spectrum of graph Laplacian?
• To answer this question, they first note that the eigenvalues of the graph Laplacian represent frequencies, i.e., the total variation of the corresponding eigenvectors
• The magnitudes of frequencies thus convey rich information.
• Moreover, the relative difference between two eigenvalues also reflects important frequency information, e.g.,
the spectral gap
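This relationship can be checked numerically (the small example graph is illustrative): each eigenvalue equals the total variation u_i^T L u_i of its eigenvector, so small eigenvalues correspond to smooth, low-frequency eigenvectors.

```python
import numpy as np

# For an unweighted graph, u^T L u = sum over edges (u[a] - u[b])^2,
# i.e., the total variation of the signal u over the graph.
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])          # Laplacian of a 3-node cycle (triangle)
lam, U = np.linalg.eigh(L)               # eigenvalues in ascending order
tv = np.array([U[:, i] @ L @ U[:, i] for i in range(3)])
# tv reproduces lam: the eigenvalues ARE the frequencies of the eigenvectors
```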
• To capture both the magnitudes of frequencies and the relative differences between them, they propose a Transformer-based set-to-set spectral filter, termed Specformer
• Their Specformer first encodes the range of eigenvalues via positional embedding and then exploits the self-
attention mechanism to learn relative information from the set of eigenvalues
• Relying on the learned representations of eigenvalues, they also design a decoder with a bank of learnable
bases
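A minimal sketch of the set-to-set idea, assuming a single attention head and randomly initialized stand-in weights (Wq, Wk, Wv, and the readout w are illustrative parameters, not the authors' implementation): eigenvalue embeddings attend to each other, so each filtered eigenvalue depends on the whole spectrum rather than on one eigenvalue in isolation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def set_to_set_filter(Z, Wq, Wk, Wv, w):
    """Map (n, d) eigenvalue embeddings to n filtered eigenvalues."""
    Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # n x n attention over the spectrum
    return (A @ V) @ w                           # per-eigenvalue scalar readout

rng = np.random.default_rng(0)
n, d = 4, 8
Z = rng.normal(size=(n, d))                      # stand-in eigenvalue embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
w = rng.normal(size=d)
new_lam = set_to_set_filter(Z, Wq, Wk, Wv, w)    # one filtered value per eigenvalue
```

Because self-attention treats its input as a set, permuting the eigenvalue embeddings simply permutes the outputs, which is the permutation equivariance claimed for Specformer below.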
Contribution
• They propose a novel Transformer-based set-to-set spectral filter along with learnable bases, called
Specformer, which effectively captures both magnitudes and relative differences of all eigenvalues of the graph
Laplacian
• They show that Specformer is permutation equivariant and can perform non-local graph convolutions, which is
non-trivial to achieve in many spatial GNNs
• Experiments on synthetic datasets show that Specformer learns to better recover the given spectral filters than
other spectral GNNs
• Extensive experiments on various node-level and graph-level benchmarks demonstrate that Specformer
outperforms state-of-the-art GNNs and learns meaningful spectrum patterns
Methodology
Eigenvalue Encoding
• Eigenvalue encoding function:
ρ(λ, 2i) = sin(ελ / 10000^(2i/d))
ρ(λ, 2i + 1) = cos(ελ / 10000^(2i/d))
• The benefits of ρ(λ):
• It captures the relative frequency shifts of eigenvalues and provides high-dimensional vector representations
• It has wavelengths ranging from 2π to 10000 · 2π, which forms a multi-scale representation of the eigenvalues
• It can control the influence of λ by adjusting the value of ε
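A minimal NumPy sketch of this encoding (ε and d take illustrative values here, not the paper's settings); each eigenvalue is mapped to a d-dimensional vector of sinusoids at geometrically spaced wavelengths:

```python
import numpy as np

def eigenvalue_encoding(lam, d, eps=1.0):
    """Map eigenvalues lam (shape (n,)) to d-dimensional encodings (n, d)."""
    i = np.arange(d // 2)                               # dimension-pair index
    freq = eps * lam[:, None] / (10000.0 ** (2 * i / d))
    pe = np.zeros((lam.shape[0], d))
    pe[:, 0::2] = np.sin(freq)                          # rho(lam, 2i)
    pe[:, 1::2] = np.cos(freq)                          # rho(lam, 2i + 1)
    return pe

# Illustrative spectrum of a normalized Laplacian, encoded into 16 dimensions
Z = eigenvalue_encoding(np.array([0.0, 0.5, 1.3, 2.0]), d=16)
```

Each (sin, cos) pair lies on the unit circle, so nearby eigenvalues get nearby encodings at every scale, which is what lets the downstream self-attention reason about relative frequency shifts.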
Conclusion
• In this paper, they propose Specformer that leverages Transformer to build a set-to-set spectral filter along with
learnable bases
• Specformer effectively captures magnitudes and relative dependencies of the eigenvalues in a permutation-
equivariant fashion and can perform non-local graph convolution
• Experiments on synthetic and real-world datasets demonstrate that Specformer outperforms various GNNs and
learns meaningful spectrum patterns
• A promising future direction is to improve the efficiency of Specformer through sparsifying the self-attention
matrix of Transformer
Editor's Notes
I had already read all of the previous papers that use propagation for rumor detection.