Ho-Beom Kim
Network Science Lab
Dept. of Mathematics
The Catholic University of Korea
E-mail: hobeom2001@catholic.ac.kr
2023 / 07 / 03
Paper: Annamalai Narayanan et al., "subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs from Large Graphs", 2016
 Introduction
• Problem statements
• Contributions
 Methodology
 Related work
 Experiments
 Results
 Discussion
 Conclusion
1. Introduction
Contributions
• Propose subgraph2vec, an unsupervised representation learning technique to learn latent representations of rooted subgraphs present in large graphs
• Develop a modified version of the skipgram language model capable of modeling varying-length radial contexts around target subgraphs
• Show that subgraph2vec's representation learning technique can be used to build the deep learning variant of the WL kernel
• Demonstrate that subgraph2vec could significantly outperform state-of-the-art approaches
1. Introduction
Limitations of Existing Graph Kernels
• (L1) Structural similarity
• Substructures that are used to compute the kernel matrix are not independent.
• (L2) Diagonal dominance
• Since graph kernels regard these substructures as separate features, the dimensionality of the feature space often grows exponentially with the number of substructures.
• Hence, only a few substructures will be common across graphs.
• This leads to diagonal dominance: a given graph is similar to itself but not to any other graph in the dataset.
• Diagonal dominance, in turn, leads to poor classification/clustering accuracy.
1. Introduction
Existing Solution
• DGK: Deep Graph Kernels
• K(G, G') = Φ(G)ᵀ M Φ(G')
• M is a |V| × |V| positive semi-definite matrix that encodes the relationship between substructures, where V is the vocabulary of substructures obtained from the training data.
• The matrix M respects the similarity of the substructure space.
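The kernel above can be sketched in a few lines. A minimal NumPy illustration, assuming bag-of-substructures count vectors and a toy 3-substructure vocabulary (all values here are made up for illustration; with M = I the formula reduces to a plain dot-product kernel):

```python
import numpy as np

def deep_graph_kernel(phi_g, phi_g2, M):
    """Compute K(G, G') = phi(G)^T M phi(G')."""
    return float(phi_g @ M @ phi_g2)

# Toy vocabulary of 3 substructures; entries are substructure counts.
phi_g = np.array([1.0, 2.0, 0.0])   # counts for G
phi_g2 = np.array([0.0, 1.0, 3.0])  # counts for G'
M = np.eye(3)                       # identity -> plain WL-style dot product

k = deep_graph_kernel(phi_g, phi_g2, M)  # 1*0 + 2*1 + 0*3 = 2.0
```

Replacing the identity with a learned positive semi-definite M is what lets DGK credit partially similar substructures.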
1. Introduction
Related Work
• DeepWalk and node2vec learn node embeddings by generating random walks in a single graph.
• Both works rely on the existence of node labels for at least a small portion of nodes and take a semi-supervised approach to learn node embeddings.
• Subgraph2vec, in contrast, learns subgraph embeddings in an unsupervised manner.
• Graph kernels fall into three categories:
• Kernels for limited-size subgraphs
• Kernels based on subtree patterns
• Kernels based on walks and paths
• Subgraph2vec is complementary to these existing graph kernels where the substructures exhibit reasonable similarities among them.
1. Introduction
Problem Statement
• Consider the problem of learning distributed representations of rooted subgraphs from a given set of graphs.
• G = (V, E, λ): a graph with node-label function λ
• sg = (V_sg, E_sg, λ_sg): sg is a subgraph of G iff there exists an injective mapping μ: V_sg → V such that (v₁, v₂) ∈ E_sg iff (μ(v₁), μ(v₂)) ∈ E
• 𝒢 = {G₁, G₂, …, Gₙ}: a set of graphs
• D: a positive integer (the maximum degree of the rooted subgraphs)
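The subgraph definition above can be checked directly from the injective-mapping condition. A brute-force sketch (exponential in |V_sg|, so a statement of the definition rather than a practical algorithm; node labels are omitted for brevity and all inputs are illustrative):

```python
from itertools import permutations

def is_subgraph(v_sg, e_sg, v, e):
    """True iff some injective mu: V_sg -> V maps every edge of sg onto an edge of G."""
    e = {frozenset(x) for x in e}  # undirected edge set of G
    for image in permutations(v, len(v_sg)):
        mu = dict(zip(v_sg, image))  # injective by construction
        if all(frozenset((mu[a], mu[b])) in e for a, b in e_sg):
            return True
    return False

# A triangle contains a 2-edge path as a subgraph:
found = is_subgraph([0, 1, 2], [(0, 1), (1, 2)],
                    ["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")])
```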
4. Background: Language Models
Traditional language models
• Traditional language models estimate the likelihood of a sequence of words appearing in a corpus.
• Pr(w_t | w_1, …, w_{t-1})
• i.e., estimate the likelihood of observing the target word w_t given the previous words (w_1, …, w_{t-1}) observed thus far.
4. Background: Language Models
Neural language models
• Recently developed neural language models focus on learning distributed vector representations of words.
• These models improve on traditional n-gram models by using vector embeddings for words.
• Neural language models exploit the notion of context, where a context is defined as a fixed number of words surrounding the target word.
• Objective: maximize Σ_{t=1}^{T} log Pr(w_t | w_1, …, w_{t-1})
• where (w_1, …, w_{t-1}) is the context of the target word w_t.
4. Background: Language Models
Skipgram
• The skipgram model maximizes the co-occurrence probability among the words that appear within a given context window.
• Given a context window of size c and the target word w_t, the skipgram model attempts to predict the words that appear in the context of the target word, (w_{t-c}, …, w_{t+c}).
• Objective: maximize Σ_{t=1}^{T} log Pr(w_{t-c}, …, w_{t+c} | w_t)
• Pr(w_{t-c}, …, w_{t+c} | w_t) is computed as Π_{-c ≤ j ≤ c, j ≠ 0} Pr(w_{t+j} | w_t)
• Pr(w_{t+j} | w_t) = exp(Φ_{w_t}ᵀ Φ′_{w_{t+j}}) / Σ_{w=1}^{|V|} exp(Φ_{w_t}ᵀ Φ′_w)
• Φ_w and Φ′_w: input and output vectors of word w
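The softmax above is just a normalized dot product between the two embedding tables. A toy sketch (vocabulary size, dimension, and random initialization are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
V, dim = 5, 4                        # toy vocabulary size and embedding dimension
Phi = rng.normal(size=(V, dim))      # input vectors Phi_w
Phi_out = rng.normal(size=(V, dim))  # output vectors Phi'_w

def skipgram_prob(t, j):
    """Pr(w_j | w_t): softmax of Phi[t] . Phi_out[w] over the vocabulary."""
    scores = Phi_out @ Phi[t]
    scores -= scores.max()           # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[j]

# By construction the distribution sums to 1 over the vocabulary:
total = sum(skipgram_prob(2, j) for j in range(V))
```

That denominator, summing over the whole vocabulary, is exactly the cost that negative sampling (next slide) avoids.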
4. Background: Language Models
Negative Sampling
• Negative sampling selects, at random, words that are not in the context, instead of considering all words in the vocabulary.
• If a word w appears in the context of another word w′, then the vector embedding of w is closer to that of w′ than to any other randomly chosen word from the vocabulary.
• The learned word embeddings thus preserve semantics.
• One can therefore utilize word embedding models to learn dimensions of similarity between subgraphs: similar subgraphs will be close to each other in the embedding space.
5. Method
Learning Subgraph Representations
• Similar to the language-modeling convention, the only required input is a corpus and a vocabulary of subgraphs for subgraph2vec to learn representations.
• Given a dataset of graphs, subgraph2vec considers all the neighbourhoods of rooted subgraphs around every rooted subgraph as its corpus, and the set of all rooted subgraphs around every node in every graph as its vocabulary.
• Following the language-model training process with these subgraphs and their contexts, subgraph2vec learns the intended subgraph embeddings.
5. Method
Algorithm: subgraph2vec
• The algorithm consists of two main components:
• A procedure to generate rooted subgraphs around every node in a given graph
• A procedure to learn embeddings of those subgraphs
• Learn δ-dimensional embeddings of subgraphs (up to degree D) from all the graphs in dataset 𝒢 in e epochs.
• We begin by building a vocabulary of all the subgraphs.
• Then the embeddings for all subgraphs in the vocabulary (Φ) are initialized randomly.
• We then proceed with learning the embeddings over several epochs, iterating over the graphs in 𝒢.
5. Method
Extracting Rooted Subgraphs
• To extract these subgraphs, we follow the well-known WL relabeling process, which lays the basis for the WL kernel and the WL test of graph isomorphism.
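A minimal sketch of the WL relabeling idea, assuming adjacency-dict graphs: the degree-0 subgraph at a node is its label, and the degree-d subgraph is the node's degree-(d-1) label concatenated with the sorted multiset of its neighbours' degree-(d-1) labels. The encoding (string concatenation) is illustrative, not the paper's implementation:

```python
def wl_subgraphs(adj, labels, max_degree):
    """Return {d: {v: string id of the degree-d rooted subgraph at v}}."""
    sub = {0: dict(labels)}
    for d in range(1, max_degree + 1):
        prev = sub[d - 1]
        sub[d] = {
            v: prev[v] + "|" + ",".join(sorted(prev[u] for u in adj[v]))
            for v in adj
        }
    return sub

# Path graph a-b-c with labels 1, 2, 1:
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
labels = {"a": "1", "b": "2", "c": "1"}
s = wl_subgraphs(adj, labels, 1)
# the two leaf nodes get identical degree-1 rooted subgraphs
assert s[1]["a"] == s[1]["c"] == "1|2"
```

Two nodes receive the same relabeled id exactly when their rooted neighbourhoods match, which is what makes these ids usable as a subgraph vocabulary.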
5. Method
Radial Skipgram: Modeling the Radial Context
• Unlike words in a traditional text corpus, subgraphs do not have a linear co-occurrence relationship.
• We therefore consider the breadth-first neighbours of the root node as its context, which follows directly from the definition of the WL relabeling process.
• Define the context of a degree-d subgraph sg_v^(d) rooted at v as the multiset of subgraphs of degrees d−1, d, and d+1 rooted at each of the neighbours of v (lines 2-6 in Algorithm 3).
• Subgraphs of degrees d−1, d, and d+1 are taken to be in the context of a degree-d subgraph because a degree-d subgraph is likely to be rather similar to subgraphs of degrees close to d.
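The radial-context definition above translates almost directly into code. A sketch assuming `sub` is a `{degree: {node: subgraph id}}` map like the one produced by a WL relabeling pass (names are illustrative):

```python
def radial_context(sub, adj, v, d, max_degree):
    """Multiset of degree d-1, d, d+1 subgraphs rooted at each neighbour of v."""
    context = []
    for u in adj[v]:
        for dd in (d - 1, d, d + 1):
            if 0 <= dd <= max_degree:   # skip degrees outside the valid range
                context.append(sub[dd][u])
    return context  # a multiset: duplicates are deliberately kept
```

Duplicates matter here: a subgraph id that appears around several neighbours co-occurs with the target more often and pulls its embedding harder.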
5. Method
Radial Skipgram: Vanilla Skipgram
• The vanilla skipgram language model captures fixed-length linear contexts over the words in a given sentence.
• For learning a subgraph's radial context, the vanilla skipgram model cannot be used.
5. Method
Radial Skipgram: Modification
• The subgraph vocabulary could number in the several thousands or millions in the case of large graphs, making the softmax over it expensive.
• Training such models would require a large amount of computational resources.
• To alleviate this bottleneck, we approximate the probability distribution using the negative sampling approach.
5. Method
Negative Sampling
• Since sg_cont ∈ SG_vocab and SG_vocab is very large, computing Pr(sg_cont | Φ(sg_v^(d))) is prohibitively expensive.
• We follow the negative sampling strategy to calculate the above-mentioned posterior probability.
• In every training cycle of Algorithm 3, we choose a fixed number of subgraphs (denoted negsamples) as negative samples and update their embeddings as well.
• Negative samples adhere to the following conditions: if negsamples = {sg_neg1, sg_neg2, …}, then negsamples ⊂ SG_vocab, |negsamples| ≪ |SG_vocab|, and negsamples ∩ context_v^(d) = ∅.
• This makes Φ(sg_v^(d)) closer to the embeddings of all the subgraphs in its context and, at the same time, distances it from the embeddings of a fixed number of subgraphs that are not in its context.
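One training cycle of this scheme can be sketched as SGD on the logistic loss: pull the target subgraph's embedding toward a context subgraph and push it away from a few sampled non-context subgraphs. All names, sizes, and hyperparameters below are illustrative, not the paper's settings:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_step(Phi, target, context, neg_ids, lr=0.1):
    """One negative-sampling SGD step on embedding table Phi (rows = subgraphs)."""
    grad = np.zeros_like(Phi[target])
    for j, label in [(context, 1.0)] + [(n, 0.0) for n in neg_ids]:
        g = sigmoid(Phi[target] @ Phi[j]) - label   # d(loss)/d(score)
        grad += g * Phi[j]
        Phi[j] -= lr * g * Phi[target]              # update context/negative row
    Phi[target] -= lr * grad                        # update target row

rng = np.random.default_rng(1)
Phi = rng.normal(size=(4, 3))   # toy table: 4 subgraphs, 3 dimensions
for _ in range(200):
    neg_sampling_step(Phi, target=0, context=1, neg_ids=[2, 3], lr=0.05)
# subgraph 0 now sits closer to its context (1) than to the negatives (2, 3)
```

Only 1 + |negsamples| rows are touched per step, which is the whole point: the cost no longer depends on |SG_vocab|.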
5. Method
Optimization
• A stochastic gradient descent (SGD) optimizer is used to optimize these parameters.
• Derivatives are estimated using the back-propagation algorithm.
• The learning rate α is empirically tuned.
5. Method
Relation to the Deep WL Kernel
• Each subgraph in SG_vocab is obtained using the WL relabeling strategy, and hence represents the WL neighbourhood labels of a node.
• Learning latent representations of such subgraphs therefore amounts to learning representations of WL neighbourhood labels.
• Once the embeddings of all subgraphs in SG_vocab are learnt using Algorithm 1, one can use them to build the deep learning variant of the WL kernel among the graphs in 𝒢.
5. Method
Use Cases
• Graph classification
• Graph clustering
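For the classification use case, a common recipe is to represent each graph by aggregating its learnt subgraph embeddings and feed the result to any downstream classifier (the paper uses SVMs; a dependency-free nearest-centroid classifier stands in below, and the two-dimensional embeddings are synthetic stand-ins for subgraph2vec output):

```python
import numpy as np

# Hypothetical learnt embeddings for two subgraph ids.
emb = {"sgA": np.array([1.0, 0.0]), "sgB": np.array([0.0, 1.0])}

def graph_vector(subgraphs):
    """Represent a graph as the sum of its subgraph embeddings."""
    return sum(emb[s] for s in subgraphs)

# Toy training set: class 0 graphs are dominated by sgA, class 1 by sgB.
train = {0: [["sgA", "sgA"], ["sgA"]], 1: [["sgB", "sgB"], ["sgB"]]}
centroids = {c: np.mean([graph_vector(g) for g in gs], axis=0)
             for c, gs in train.items()}

def classify(subgraphs):
    x = graph_vector(subgraphs)
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

The same graph vectors can feed a clustering algorithm directly, which is the second use case listed above.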
6. Evaluation
Datasets
• MUTAG, PTC, PROTEINS, NCI1, NCI109
6. Evaluation
Results and Discussion
• Accuracy
• SVMs with subgraph2vec's embeddings achieve better accuracy on 3 datasets and comparable accuracy on the remaining 2 datasets.
6. Evaluation
Results and Discussion
• Efficiency
• It is important to note that classification on these benchmark datasets is much simpler than real-world classification tasks.
• Even trivial features, such as the number of nodes in a graph, achieve accuracies comparable to the SOTA graph kernels.
7. Conclusion
Summary
• Presented subgraph2vec, an unsupervised representation learning technique to learn embeddings of rooted subgraphs that exist in large graphs.
• Through large-scale experiments involving benchmark and real-world graph classification and clustering datasets,
• we demonstrated that the subgraph embeddings learnt by this approach can be used in conjunction with classifiers such as CNNs and SVMs, and with relational data clustering algorithms, to achieve significant improvements.
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Neo4j
 

Recently uploaded (20)

Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power Systems
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024
 

NS-CUK Seminar: H.B.Kim, Review on "subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs from Large Graphs", 2016

  • 1. Ho-Beom Kim
    Network Science Lab
    Dept. of Mathematics
    The Catholic University of Korea
    E-mail: hobeom2001@catholic.ac.kr
    2023 / 07 / 03
    Annamalai Narayanan, et al.
  • 2. 2
    • Introduction
      • Problem statements
      • Contributions
    • Methodology
    • Related work
    • Experiments
    • Results
    • Discussion
    • Conclusion
  • 3. 3 1. Introduction
    Contributions
    • Propose subgraph2vec, an unsupervised representation learning technique to learn latent representations of rooted subgraphs present in large graphs
    • Develop a modified version of the skipgram language model capable of modeling varying-length radial contexts around target subgraphs
    • Show that subgraph2vec's representation learning technique can be used to build the deep learning variant of the WL kernel
    • Demonstrate that subgraph2vec significantly outperforms state-of-the-art methods
  • 4. 4 1. Introduction
    Limitations of Existing Graph Kernels
    • (L1) Structural similarity
      • Substructures that are used to compute the kernel matrix are not independent.
    • (L2) Diagonal dominance
      • Since graph kernels regard these substructures as separate features, the dimensionality of the feature space often grows exponentially with the number of substructures.
      • Only a few substructures will be common across graphs.
      • This leads to diagonal dominance: a given graph is similar to itself but not to any other graph in the dataset.
      • The result is poor classification/clustering accuracy.
  • 5. 5 1. Introduction
    Existing Solution
    • DGK : Deep Graph Kernels
    • K(G, G') = Φ(G)ᵀ M Φ(G')
    • M is a |V| × |V| positive semi-definite matrix that encodes the relationship between substructures, and V represents the vocabulary of substructures obtained from the training data
    • The M matrix respects the similarity of the substructure space
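A minimal sketch of the DGK kernel above. The substructure embeddings, counts, and toy values here are made up for illustration; M is built as E Eᵀ, which is positive semi-definite by construction:

```python
import numpy as np

# K(G, G') = Phi(G)^T M Phi(G'): phi_G / phi_Gp are |V|-dimensional
# substructure count vectors; M encodes pairwise substructure similarity.
def deep_kernel(phi_G, phi_Gp, M):
    return phi_G @ M @ phi_Gp

# Toy vocabulary of 3 substructures with 2-d embeddings (illustrative values).
E = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
M = E @ E.T  # positive semi-definite similarity matrix over substructures

phi_G = np.array([2.0, 0.0, 1.0])   # substructure counts in G
phi_Gp = np.array([0.0, 3.0, 0.0])  # substructure counts in G'

k = deep_kernel(phi_G, phi_Gp, M)
```

With M = I this reduces to the ordinary (independent-substructure) kernel; a non-diagonal M is what lets similar-but-not-identical substructures contribute to K.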
  • 6. 6 1. Introduction
    Related Work
    • DeepWalk and node2vec learn node embeddings by generating random walks in a single graph.
    • Both works rely on the existence of node labels for at least a small portion of nodes and take a semi-supervised approach to learn node embeddings.
    • subgraph2vec learns subgraph embeddings in an unsupervised manner.
    • Graph kernel categories:
      • Kernels for limited-size subgraphs
      • Kernels based on subtree patterns
      • Kernels based on walks and paths
    • subgraph2vec is complementary to these existing graph kernels where the substructures exhibit reasonable similarities among them.
  • 7. 7 1. Introduction
    Problem Statements
    • Consider the problem of learning distributed representations of rooted subgraphs from a given set of graphs
    • G = (V, E, λ)
    • sg = (V_sg, E_sg, λ_sg)
    • sg is a subgraph of G iff there exists an injective mapping μ : V_sg → V s.t. (v1, v2) ∈ E_sg iff (μ(v1), μ(v2)) ∈ E
    • 𝒢 = {G1, G2, …, Gn} : a set of graphs
    • D : a positive integer
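The injective-mapping condition above can be checked by brute force on tiny graphs. This is only an illustration of the definition (the graphs and node names are made up, and the search is exponential, so it is not how one would test subgraphs in practice):

```python
from itertools import permutations

# sg is a subgraph of G iff some injective mu: V_sg -> V satisfies
# (v1, v2) in E_sg  iff  (mu(v1), mu(v2)) in E, for all node pairs.
def is_subgraph(V_sg, E_sg, V, E):
    E = {frozenset(e) for e in E}
    E_sg = {frozenset(e) for e in E_sg}
    for perm in permutations(V, len(V_sg)):
        mu = dict(zip(V_sg, perm))  # candidate injective mapping
        if all((frozenset((mu[a], mu[b])) in E) == (frozenset((a, b)) in E_sg)
               for a in V_sg for b in V_sg if a != b):
            return True
    return False

# Toy host graph: a path 1 - 2 - 3.
V, E = ["1", "2", "3"], [("1", "2"), ("2", "3")]
ok = is_subgraph(["a", "b"], [("a", "b")], V, E)                       # single edge
bad = is_subgraph(["a", "b", "c"],
                  [("a", "b"), ("b", "c"), ("a", "c")], V, E)          # triangle
```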
  • 8. 8 4. Background : Language Models
    Traditional Language Models
    • Traditional language models determine the likelihood of a sequence of words appearing in a corpus.
    • Pr(w_t | w_1, …, w_{t−1})
    • Estimate the likelihood of observing the target word w_t given the previous words (w_1, …, w_{t−1}) observed thus far
  • 9. 9 4. Background : Language Models
    Neural Language Models
    • Recently developed neural language models focus on learning distributed vector representations of words
    • These models improve traditional n-gram models by using vector embeddings for words
    • Neural language models exploit the notion of context, where a context is defined as a fixed number of words surrounding the target word
    • Σ_{t=1}^{T} log Pr(w_t | w_1, …, w_{t−1})
    • w_1, …, w_{t−1} are the context of the target word w_t
  • 10. 10 4. Background : Language Models
    Skipgram
    • The skipgram model maximizes the co-occurrence probability among words that appear within a given context window.
    • Given a context window of size c and the target word w_t, the skipgram model attempts to predict the words that appear in the context of the target word, (w_{t−c}, …, w_{t+c}).
    • Σ_{t=1}^{T} log Pr(w_{t−c}, …, w_{t+c} | w_t)
    • Pr(w_{t−c}, …, w_{t+c} | w_t) : computed as Π_{−c≤j≤c, j≠0} Pr(w_{t+j} | w_t)
    • Pr(w_{t+j} | w_t) = exp(Φ_{w_t}ᵀ Φ′_{w_{t+j}}) / Σ_{w=1}^{|V|} exp(Φ_{w_t}ᵀ Φ′_w)
    • Φ_w, Φ′_w : input and output vectors of word w
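The softmax Pr(w_{t+j} | w_t) above can be sketched directly; the vocabulary size, embedding dimension, and random vectors below are toy assumptions:

```python
import numpy as np

# Skipgram softmax: probability of a context word given the target word,
# using input vectors Phi and output vectors Phi_out.
def skipgram_prob(target, context, Phi, Phi_out):
    scores = Phi_out @ Phi[target]               # dot product with every word
    probs = np.exp(scores) / np.exp(scores).sum()  # normalize over vocabulary
    return probs[context]

rng = np.random.default_rng(0)
V, d = 5, 3                                      # toy vocabulary and dimension
Phi = rng.normal(size=(V, d))
Phi_out = rng.normal(size=(V, d))

# Evaluating the softmax for every possible context word yields a distribution.
p = np.array([skipgram_prob(0, w, Phi, Phi_out) for w in range(V)])
```

The denominator sums over the whole vocabulary, which is exactly the cost that negative sampling (next slide) avoids.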
  • 11. 11 4. Background : Language Models
    Negative Sampling
    • Negative sampling selects words that are not in the context at random, instead of considering all words in the vocabulary.
    • If a word w appears in the context of another word w′, then the vector embedding of w is closer to that of w′ than to any other randomly chosen word from the vocabulary.
    • The learned word embeddings preserve semantics.
    • We can utilize word embedding models to learn dimensions of similarity between subgraphs,
    • so that similar subgraphs will be close to each other in the embedding space.
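A minimal sketch of one skipgram-with-negative-sampling SGD step, in the style of word2vec (the toy sizes, learning rate, and random initialization are assumptions, not values from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One SGNS step: pull the target embedding toward its context word and
# push it away from randomly drawn negative samples.
def sgns_step(target, context, negatives, Phi, Phi_out, lr=0.1):
    grad_t = np.zeros_like(Phi[target])
    for w, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        err = sigmoid(Phi_out[w] @ Phi[target]) - label  # prediction error
        grad_t += err * Phi_out[w]
        Phi_out[w] -= lr * err * Phi[target]             # update output vector
    Phi[target] -= lr * grad_t                           # update input vector

rng = np.random.default_rng(1)
V, d = 10, 4
Phi = rng.normal(scale=0.1, size=(V, d))
Phi_out = rng.normal(scale=0.1, size=(V, d))

before = sigmoid(Phi_out[3] @ Phi[0])
for _ in range(50):
    sgns_step(0, 3, [7, 8], Phi, Phi_out)  # word 3 in context of word 0
after = sigmoid(Phi_out[3] @ Phi[0])
```

After training, `after > before`: the target and its context word have moved closer, while the sampled negatives are pushed away.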
  • 12. 12 5. Method
    Learning Subgraph Representations
    • Similar to the language modeling convention, the only required input for subgraph2vec to learn representations is a corpus and a vocabulary of subgraphs.
    • Given a dataset of graphs, subgraph2vec considers all the neighbourhoods of rooted subgraphs around every rooted subgraph as its corpus, and the set of all rooted subgraphs around every node in every graph as its vocabulary.
    • Following the language model training process with the subgraphs and their contexts, subgraph2vec learns the intended subgraph embeddings.
  • 13. 13 5. Method
    Algorithm : subgraph2vec
    • The algorithm consists of two main components
      • A procedure to generate rooted subgraphs around every node in a given graph
      • A procedure to learn embeddings of those subgraphs
    • Learn δ-dimensional embeddings of subgraphs (up to degree D) from all the graphs in dataset 𝒢 in e epochs. We begin by building a vocabulary of all the subgraphs.
  • 14. 14 5. Method
    Algorithm : subgraph2vec
    • The algorithm consists of two main components
      • A procedure to generate rooted subgraphs around every node in a given graph
      • A procedure to learn embeddings of those subgraphs
    • Learn δ-dimensional embeddings of subgraphs (up to degree D) from all the graphs in dataset 𝒢 in e epochs. We begin by building a vocabulary of all the subgraphs.
    • Then the embeddings for all subgraphs in the vocabulary (Φ) are initialized randomly
    • We then proceed with learning the embeddings in several epochs, iterating over the graphs in 𝒢.
  • 15. 15 5. Method
    Extracting Rooted Subgraphs
    • To extract these subgraphs, we follow the well-known WL relabeling process, which lays the basis for the WL kernel and the WL test of graph isomorphism
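A simplified sketch of WL relabeling for enumerating rooted subgraphs: each node's degree-d subgraph identifier is built from its own label and the sorted degree-(d−1) identifiers of its neighbours. The string encoding, node names, and toy graph here are illustrative choices, not the paper's exact encoding:

```python
# adj: adjacency lists; labels: initial node labels; max_degree: D.
# Returns out[d][v] = identifier of the degree-d subgraph rooted at v.
def wl_subgraphs(adj, labels, max_degree):
    current = dict(labels)            # degree-0 subgraph = the node label
    out = {0: dict(labels)}
    for d in range(1, max_degree + 1):
        nxt = {}
        for v in adj:
            neigh = sorted(current[u] for u in adj[v])  # sort for canonicity
            nxt[v] = current[v] + "(" + ",".join(neigh) + ")"
        current = nxt
        out[d] = dict(nxt)
    return out

# Toy labeled triangle graph.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
labels = {"a": "X", "b": "Y", "c": "Y"}
subs = wl_subgraphs(adj, labels, 1)
```

Collecting `out[d][v]` over every node, degree, and graph yields the vocabulary SG_vocab used by the algorithm.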
  • 16. 16 5. Method
    Radial Skipgram – Modeling the Radial Context
    • Unlike words in a traditional text corpus, subgraphs do not have a linear co-occurrence relationship.
    • We consider the breadth-first neighbours of the root node as its context, as this directly follows from the definition of the WL relabeling process.
    • Define the context of a degree-d subgraph sg_v^(d) rooted at v as the multiset of subgraphs of degrees d−1, d, d+1 rooted at each of the neighbours of v (lines 2–6 in Algorithm 3)
    • Subgraphs of degrees d−1, d, d+1 are considered to be in the context of a subgraph of degree d
    • A degree-d subgraph is likely to be rather similar to subgraphs of degrees that are closer to d
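The radial context definition above can be sketched as follows (the `subs[d][v]` lookup table and the two-node toy graph are assumed identifiers, not the paper's data structures):

```python
# Radial context of the degree-d subgraph rooted at v: the multiset of
# subgraphs of degrees d-1, d, d+1 rooted at each neighbour of v.
def radial_context(v, d, adj, subs, max_degree):
    context = []
    for u in adj[v]:
        for dd in (d - 1, d, d + 1):
            if 0 <= dd <= max_degree:   # clamp to valid degrees
                context.append(subs[dd][u])
    return context

# Toy two-node graph with precomputed subgraph identifiers per degree.
adj = {"a": ["b"], "b": ["a"]}
subs = {0: {"a": "X", "b": "Y"},
        1: {"a": "X(Y)", "b": "Y(X)"}}
ctx = radial_context("a", 1, adj, subs, max_degree=1)
```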
  • 17. 17 5. Method
    Radial Skipgram – Vanilla Skipgram
    • The vanilla skipgram language model captures fixed-length linear contexts over the words in a given sentence.
    • For learning a subgraph's radial context, the vanilla skipgram model cannot be used directly.
  • 18. 18 5. Method
    Radial Skipgram – Modification
    • The vocabulary could contain several thousands/millions of subgraphs in the case of large graphs.
    • Training such models would require a large amount of computational resources.
    • To alleviate this bottleneck, we approximate the probability distribution using the negative sampling approach.
  • 19. 19 5. Method
    Negative Sampling
    • Since sg_cont ∈ SG_vocab and SG_vocab is very large, computing Pr(sg_cont | Φ(sg_v^(d))) is prohibitively expensive
    • We follow the negative sampling strategy to calculate the above-mentioned posterior probability
    • In every training cycle of Algorithm 3, we choose a fixed number of subgraphs (denoted as negsamples) as negative samples and update their embeddings as well
    • Negative samples adhere to the following conditions:
      • If negsamples = {sg_neg1, sg_neg2, …}, then negsamples ⊂ SG_vocab, |negsamples| ≪ |SG_vocab| and negsamples ∩ context_v^(d) = {}
    • This makes Φ(sg_v^(d)) closer to the embeddings of all the subgraphs in its context, and at the same time distances it from the embeddings of a fixed number of subgraphs that are not in its context
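Drawing negatives that satisfy the conditions on this slide can be sketched as follows (the vocabulary names and sample size are made up for the example):

```python
import random

# negsamples must be a small subset of SG_vocab that is disjoint
# from the current subgraph's radial context.
def draw_negatives(vocab, context, k, rng):
    ctx = set(context)
    candidates = [sg for sg in vocab if sg not in ctx]
    return rng.sample(candidates, k)

vocab = [f"sg{i}" for i in range(100)]   # toy SG_vocab
context = ["sg1", "sg2", "sg3"]          # toy radial context
rng = random.Random(0)
negs = draw_negatives(vocab, context, 5, rng)
```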
  • 20. 20 5. Method
    Optimization
    • A stochastic gradient descent (SGD) optimizer is used to optimize these parameters
    • Derivatives are estimated using the back-propagation algorithm
    • The learning rate α is empirically tuned
  • 21. 21 5. Method
    Relation to Deep WL Kernel
    • Each of the subgraphs in SG_vocab is obtained using the WL relabeling strategy, and hence represents the WL neighbourhood labels of a node.
    • Hence learning latent representations of such subgraphs amounts to learning representations of WL neighbourhood labels.
    • Therefore, once the embeddings of all the subgraphs in SG_vocab are learnt using Algorithm 1, one could use them to build the deep learning variant of the WL kernel among the graphs in 𝒢.
  • 22. 22 5. Method
    Use Cases
    • Graph Classification
    • Graph Clustering
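For the graph classification use case, one common way to use subgraph embeddings is to aggregate the embeddings of a graph's rooted subgraphs into a single graph-level feature vector, which can then be fed to a classifier such as an SVM. The embedding table, aggregation by summation, and toy values below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

# Made-up embedding table for three rooted subgraph identifiers.
emb = {"X":    np.array([1.0, 0.0]),
       "Y":    np.array([0.0, 1.0]),
       "X(Y)": np.array([0.5, 0.5])}

# Graph feature = sum of the embeddings of its rooted subgraphs.
def graph_feature(subgraphs, emb):
    return sum(emb[sg] for sg in subgraphs)

f = graph_feature(["X", "Y", "X(Y)"], emb)
# f can now be passed to an SVM or a relational clustering algorithm
```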
  • 24. 24 6. Evaluation
    Results and Discussion
    • Accuracy
      • SVMs with subgraph2vec's embeddings achieve better accuracy on 3 datasets and comparable accuracy on the remaining 2 datasets
  • 25. 25 6. Evaluation
    Results and Discussion
    • Efficiency
      • It is important to note that classification on these benchmark datasets is much simpler than real-world classification tasks.
      • Using trivial features such as the number of nodes in the graph achieves accuracies comparable to the SOTA graph kernels.
  • 26. 26 7. Conclusion
    Evaluation
    • Presented subgraph2vec, an unsupervised representation learning technique to learn embeddings of rooted subgraphs that exist in large graphs
    • Through large-scale experiments involving benchmark and real-world graph classification and clustering datasets,
    • we demonstrate that the subgraph embeddings learnt by our approach can be used in conjunction with classifiers such as CNNs and SVMs and relational data clustering algorithms to achieve significant improvements