240401_Thuy_Labseminar[Train Once and Explain Everywhere: Pre-training Interpretable GNNs].pptx
Van Thuy Hoang
Network Science Lab
Dept. of Artificial Intelligence
The Catholic University of Korea
E-mail: hoangvanthuy90@gmail.com
2024-04-01
Graph Convolutional Networks (GCNs)
• Generate node embeddings based on local network neighborhoods
• At each layer, nodes update their embeddings by repeatedly combining messages
from their neighbors using neural networks
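A minimal sketch of one such layer in plain PyTorch (the symmetric normalization follows the standard GCN of Kipf & Welling; the class name GCNLayer and the dense adjacency input are illustrative choices, not from the slides):

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One message-passing round: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # adj: dense (N, N) adjacency matrix; self-loops keep each node's own message
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        # aggregate messages from neighbors, then transform with a learned layer
        return torch.relu(self.linear(norm @ h))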
BACKGROUND
• Many important real-world data sets are graphs or networks
• Graph Neural Networks lack transparency in their decision-making
• Nonetheless, they are promising candidates for producing rich explanations
XAI on Graph Neural Networks
BACKGROUND
• By analyzing how the removal or modification of specific nodes influences the model's
output, one can gain insights into which nodes are most influential in the GNN's
decision-making process (a rough sketch follows the references below).
Perturbation-based Explainer Methods
Ying, Rex, et al. "GNNExplainer: Generating explanations for graph neural networks." Advances in Neural Information Processing Systems 32 (2019).
Lucic, Ana, et al. "CF-GNNExplainer: Counterfactual explanations for graph neural networks."
Funke, Thorben, Megha Khosla, and Avishek Anand. "Hard masking for explaining graph neural networks." (2020).
Luo, D., W. Cheng, D. Xu, W. Yu, B. Zong, H. Chen, and X. Zhang. "Parameterized explainer for graph neural network." Advances in Neural Information Processing Systems (2020).
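A rough illustration of the perturbation idea, not a reimplementation of any of the methods above: score each node by how much the target-class probability shifts when that node is disconnected. Here model, feats, and adj are hypothetical placeholders for a graph classifier and its inputs.

import torch

def node_influence(model, feats, adj, target_class):
    """Score nodes by how much disconnecting them shifts the target-class probability."""
    with torch.no_grad():
        base = model(feats, adj).softmax(-1)[target_class]
        scores = []
        for v in range(adj.size(0)):
            pert = adj.clone()
            pert[v, :] = 0.0  # remove all edges incident to node v
            pert[:, v] = 0.0
            out = model(feats, pert).softmax(-1)[target_class]
            scores.append((base - out).item())  # large drop => influential node
    return scores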
BACKGROUND
• GNNs have achieved remarkable success in various applications.
• Black-box nature makes it hard to understand the inner decision-making mechanism.
• Intrinsic interpretable GNNs aim to provide transparent predictions by identifying the
influential fraction of the input graph that guides the model prediction
BACKGROUND
• Existing GNN explanation methods are dataset-specific.
• How can we construct an interpretable GNN that generalizes to different datasets?
MODEL FRAMEWORK
• Two innovative modules, i.e., the structural pattern learning module and the hypergraph
refining module, are designed and integrated into π-GNN.
• The former captures and integrates multiple universal structural patterns for
generalizable graph representation.
• The latter incorporates the universal patterns with local structural interactions to identify
the explanation.
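As a simplified intuition for the hypergraph refining module (a line-graph view, not necessarily the paper's exact construction): turning every edge into a node lets ordinary node-embedding machinery produce edge-level representations and interactions. A sketch with networkx:

import networkx as nx

G = nx.house_graph()  # tiny example graph
L = nx.line_graph(G)  # each node of L is an edge of G; L's edges link edges sharing an endpoint
# node embeddings on L yield one representation per original edge,
# which is the granularity the explainer needs to score edges
print(list(L.nodes()))  # e.g. [(0, 1), (0, 2), ...]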
Pre-training on synthetic graphs.
MODEL FRAMEWORK
• Each graph G in the PT-Motifs dataset consists of one base subgraph G_b, one
explanation subgraph G_e (also known as the motif), and a ground-truth task label y
that is determined by G_e alone.
• Explanatory subgraphs: Diamond, House, Crane, Cycle, and Star
• Basic shapes: Clique, Tree, Wheel, Ladder, and the Barabási–Albert Net
Generating synthetic graphs
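A minimal sketch of generating one such graph with networkx, assuming a Barabási–Albert base and a house motif wired on by a single random edge (the exact PT-Motifs construction and labeling rules are not specified on this slide):

import random
import networkx as nx

def make_synthetic(n_base=20):
    base = nx.barabasi_albert_graph(n_base, m=2)  # base subgraph G_b
    motif = nx.house_graph()                      # explanation subgraph G_e (the motif)
    g = nx.disjoint_union(base, motif)            # motif nodes are relabeled n_base..n_base+4
    g.add_edge(random.randrange(n_base), n_base)  # wire the motif onto the base
    y = 1  # task label, determined solely by which motif was attached
    return g, y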
Theorem
• A 2-layer MLP is used to fit the transition function f^(2) during the pre-training phase
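A plain PyTorch sketch of such a 2-layer MLP; the widths 64 -> 128 -> 1 are assumptions, since the slide does not give dimensions:

import torch.nn as nn

# fits the transition function f^(2) over edge representations during pre-training
transition = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 1),  # one score per edge
)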
Conjoint Fine-tuning Phase
• Given the input graph G and the corresponding task label y:
• A probabilistic sampler S composes the explanatory subgraph g according to the
predicted edge probabilities ρ̂.
• Going beyond the probabilistic sampling procedure, the post-positional predictor
takes the explanatory subgraph g as input and fits the mapping function to the
predicted label.
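A minimal sketch of the sampling step, assuming independent hard Bernoulli draws per edge over a (2, E) edge list; end-to-end training would need a differentiable relaxation (e.g., Gumbel-Softmax), which is omitted here:

import torch

def sample_subgraph(edge_index, rho_hat):
    """Keep each edge independently with its predicted probability rho_hat."""
    keep = torch.bernoulli(rho_hat).bool()  # one Bernoulli draw per edge
    return edge_index[:, keep]              # explanatory subgraph g as a (2, E') edge list

# the post-positional predictor then maps g to the predicted label, e.g.:
# y_hat = predictor(node_features, sample_subgraph(edge_index, rho_hat))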
SUMMARY
• An interpretable GNN that can generalize to different graph datasets.
• Synthetic pre-training process.
• Hypergraph-transformation-based edge representation.