How To Find Your Friendly Neighborhood: Graph Attention Design With Self-Supervision.pptx
1. Ho-Beom Kim
Network Science Lab
Dept. of Mathematics
The Catholic University of Korea
E-mail: hobeom2001@catholic.ac.kr
2023 / 10 / 02
KIM, Dongkwan; OH, Alice.
ICLR 2021
2.
Introduction
Problem Statements
• Graphs are widely used across domains such as social networks, biology, and chemistry.
• Because their patterns are complex and irregular, learning graph representations is challenging.
• Real-world graphs are often noisy, with connections between unrelated nodes, which causes GNNs to learn suboptimal representations.
• Graph attention networks adopt self-attention to alleviate this issue.
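The attention-weighted aggregation mentioned above can be sketched as follows. This is an illustrative single-head GAT layer in NumPy (assumed shapes and a dense adjacency matrix for simplicity; it is not the authors' implementation):

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def gat_layer(H, adj, W, a):
    """Single-head GAT attention sketch (illustrative, not the authors' code).
    H: (N, F) node features, adj: (N, N) 0/1 adjacency with self-loops,
    W: (F, F') projection, a: (2F',) attention vector."""
    Z = H @ W                                  # projected features
    N = Z.shape[0]
    out = np.zeros_like(Z)
    for i in range(N):
        nbrs = np.nonzero(adj[i])[0]           # neighbors of node i
        # e_ij = LeakyReLU(a^T [z_i || z_j]) for each neighbor j
        e = leaky_relu(np.array([a @ np.concatenate([Z[i], Z[j]]) for j in nbrs]))
        alpha = softmax(e)                     # normalized attention coefficients
        out[i] = alpha @ Z[nbrs]               # attention-weighted aggregation
    return out
```

Because the coefficients are learned per edge, the layer can down-weight noisy connections instead of averaging over all neighbors uniformly.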
3.
Introduction
Contribution
• They start by assessing and learning the relational importance for each graph via self-supervised attention.
• They leverage edges, which explicitly encode information about the importance of relations in a graph.
• Specifically, they exploit a self-supervised task that uses the attention value to predict the likelihood that an edge exists between two nodes.
• They observe that DP attention outperforms GO attention on this attention-based link prediction task.
• They propose two variants of SuperGAT, scaled dot-product (SD) and mixed GO and DP (MX), to combine the strengths of GO and DP.
• They present models with self-supervised attention using edge information.
• They analyze the classic attention forms GO and DP on label-agreement and link prediction tasks; this analysis reveals that GO is better at label agreement and DP at link prediction.
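The four unnormalized attention forms compared above can be sketched for a single edge. This is a minimal NumPy illustration on already-projected features z_i, z_j (a sketch of the score forms, not the authors' code), together with the sigmoid-based edge probability used for the self-supervised link prediction task:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def attention_scores(zi, zj, a):
    """Unnormalized attention scores for edge (i, j).
    zi, zj: projected features of size d; a: GO attention vector of size 2d."""
    d = zi.shape[0]
    go = leaky_relu(a @ np.concatenate([zi, zj]))   # GAT original (GO)
    dp = zi @ zj                                    # dot-product (DP)
    sd = dp / np.sqrt(d)                            # scaled dot-product (SD)
    mx = go * sigmoid(dp)                           # mixed (MX): GO gated by DP
    return {"GO": go, "DP": dp, "SD": sd, "MX": mx}

def edge_probability(score):
    """Self-supervised link probability phi_ij = sigmoid(e_ij), trained to
    predict whether edge (i, j) exists (sketch of the auxiliary task)."""
    return sigmoid(score)
```

MX multiplies the GO score by a sigmoid gate on the DP score, so it keeps GO's label-agreement behavior while letting DP's link-prediction signal suppress implausible edges.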
16.
Experiments
Conclusion
• They proposed novel graph neural architecture designs that self-supervise graph attention according to the input graph's characteristics.
• They first assessed what graph attention learns and analyzed the effect of edge self-supervision on link prediction and node classification performance.
• They suggested several graph attention forms that balance these two factors and argued that graph attention should be designed depending on the input graph's average degree and homophily.
• Their experiments demonstrated that their graph attention recipe generalizes across various real-world datasets: models designed according to the recipe outperform the baseline models.
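The two graph statistics the recipe conditions on, average degree and edge homophily, are straightforward to compute. A minimal sketch (illustrative helper; the recipe's exact decision thresholds are given in the paper and omitted here):

```python
def graph_stats(edges, labels):
    """Average degree and edge homophily of an undirected graph.
    edges: list of (u, v) pairs; labels: per-node class labels.
    Illustrative helper, not the authors' code."""
    n = len(labels)
    avg_degree = 2 * len(edges) / n                  # each edge contributes 2 endpoints
    same = sum(labels[u] == labels[v] for u, v in edges)
    homophily = same / len(edges)                    # fraction of intra-class edges
    return avg_degree, homophily
```

Given these two numbers for a new dataset, the paper's recipe selects which SuperGAT variant (SD or MX) to use.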