Ho-Beom Kim
Network Science Lab
Dept. of Mathematics
The Catholic University of Korea
E-mail: hobeom2001@catholic.ac.kr
2023 / 07 / 03
Annamalai Narayanan, et al.
 Introduction
• Problem statements
• Contributions
 Methodology
 Related work
 Experiments
 Results
 Discussion
 Conclusion
1. Introduction
Contributions
• Propose subgraph2vec, an unsupervised representation learning technique to learn latent
representations of rooted subgraphs present in large graphs
• Develop a modified version of the skipgram language model capable of modeling varying-length
radial contexts around target subgraphs
• Show that subgraph2vec's representation learning technique can be used to build a deep learning variant of the WL kernel
• Demonstrate that subgraph2vec could significantly outperform state-of-the-art methods
1. Introduction
Limitations of Existing Graph kernels
• (L1) Structural similarity
• Substructures that are used to compute the kernel matrix are not independent.
• (L2) Diagonal Dominance
• Since graph kernels regard these substructures as separate features, the dimensionality of the
feature space often grows exponentially with the number of substructures.
• Only a few substructures will be common across graphs.
• This leads to diagonal dominance: a given graph is similar to itself but not to any other graph
in the dataset.
• This, in turn, leads to poor classification/clustering accuracy.
1. Introduction
Existing Solution
• DGK: Deep Graph Kernels
• K(G, G′) = Φ(G)ᵀ M Φ(G′)
• M is a |V| × |V| positive semi-definite matrix that encodes the relationships between
substructures, and V is the vocabulary of substructures obtained from the training data
• The matrix M respects the similarity of the substructure space
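As a toy illustration of the DGK formulation above (function names and data are mine, not the paper's), the sketch below computes K(G, G′) = Φ(G)ᵀ M Φ(G′), building M from substructure embeddings, which makes it positive semi-definite:

```python
import numpy as np

def deep_kernel(phi_g, phi_g_prime, M):
    """Deep graph kernel K(G, G') = phi(G)^T M phi(G')."""
    return float(phi_g @ M @ phi_g_prime)

# M built from substructure embeddings E (one row per substructure)
# is positive semi-definite by construction:
E = np.array([[1.0, 0.0],   # embedding of substructure 0
              [0.8, 0.6],   # embedding of substructure 1
              [0.0, 1.0]])  # embedding of substructure 2
M = E @ E.T                 # 3 x 3 substructure-similarity matrix

phi_a = np.array([2.0, 1.0, 0.0])  # substructure counts in graph A
phi_b = np.array([1.0, 0.0, 3.0])  # substructure counts in graph B
k_ab = deep_kernel(phi_a, phi_b, M)
```

Setting M = I recovers the ordinary substructure-count kernel; a non-trivial M lets similar-but-distinct substructures contribute to the kernel value.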
1. Introduction
Related Work
• DeepWalk and node2vec intend to learn node embeddings by generating random walks in a single
graph.
• Both of these works rely on the existence of node labels for at least a small portion of the nodes
and take a semi-supervised approach to learning node embeddings.
• Subgraph2vec, in contrast, learns subgraph embeddings in an unsupervised manner
• Categories of graph kernels:
• Kernels for limited-size subgraphs
• Kernels based on subtree patterns
• Kernels based on walks and paths
• Subgraph2vec is complementary to these existing graph kernels where the substructures exhibit
reasonable similarities among them
1. Introduction
Problem Statements
• Consider the problem of learning distributed representations of rooted subgraphs from a given set of
graphs
• G = (V, E, λ): a graph with vertex set V, edge set E, and labeling function λ
• sg = (V_sg, E_sg, λ_sg): a subgraph of G iff there exists an injective mapping μ : V_sg → V
such that (v1, v2) ∈ E_sg iff (μ(v1), μ(v2)) ∈ E
• 𝒢 = {G1, G2, …, Gn}: a set of graphs
• D: a positive integer (the maximum degree of the rooted subgraphs considered)
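For intuition, the subgraph condition above can be checked by brute force over injective mappings μ; this is feasible only for tiny graphs, and the function below is an illustrative sketch of my own, not from the paper:

```python
from itertools import permutations

def is_subgraph(V_sg, E_sg, V, E):
    """Brute-force check: does an injective mu: V_sg -> V exist with
    (v1, v2) in E_sg iff (mu(v1), mu(v2)) in E?  Exponential in |V_sg|,
    so only suitable for tiny illustrative graphs."""
    V_sg, E, E_sg = list(V_sg), set(E), set(E_sg)
    for image in permutations(V, len(V_sg)):   # every injective mapping
        mu = dict(zip(V_sg, image))
        if all(((mu[a], mu[b]) in E) == ((a, b) in E_sg)
               for a in V_sg for b in V_sg if a != b):
            return True
    return False

# A single (bidirectional) edge is a subgraph of a triangle:
tri_V = [0, 1, 2]
tri_E = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]
```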
4. Background : Language Models
Traditional language models
• A traditional language model determines the likelihood of a sequence of words appearing in a corpus
• Pr(w_t | w_1, …, w_{t−1})
• i.e., it estimates the likelihood of observing the target word w_t given the previous words
(w_1, …, w_{t−1}) observed thus far
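As a minimal concrete instance of this likelihood, a maximum-likelihood bigram model conditions only on the previous word (an illustration of mine, not part of the paper):

```python
from collections import Counter, defaultdict

def bigram_probs(corpus):
    """Maximum-likelihood bigram model Pr(w_t | w_{t-1}): the simplest
    instance of the likelihood Pr(w_t | w_1, ..., w_{t-1}),
    estimated from raw co-occurrence counts."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        for prev, cur in zip(sentence, sentence[1:]):
            counts[prev][cur] += 1
    # Normalize counts into conditional probabilities:
    return {p: {w: c / sum(cs.values()) for w, c in cs.items()}
            for p, cs in counts.items()}

corpus = [["the", "cat", "sat"],
          ["the", "cat", "ran"],
          ["the", "dog", "sat"]]
model = bigram_probs(corpus)
```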
4. Background : Language Models
Neural language models
• Recently developed neural language models instead focus on learning distributed vector representations of
words
• These models improve on traditional n-gram models by using vector embeddings of words
• Neural language models exploit the notion of context: a context is defined as a fixed number
of words surrounding the target word
• Objective: maximize Σ_{t=1}^{T} log Pr(w_t | w_1, …, w_{t−1})
• where (w_1, …, w_{t−1}) is the context of the target word w_t
4. Background : Language Models
Skip Gram
• The skipgram model maximizes the co-occurrence probability among the words that appear within a given
context window.
• Given a context window of size c and target word w_t, the skipgram model attempts to predict the words
that appear in the context of the target word, (w_{t−c}, …, w_{t+c}).
• Objective: maximize Σ_{t=1}^{T} log Pr(w_{t−c}, …, w_{t+c} | w_t)
• Pr(w_{t−c}, …, w_{t+c} | w_t): computed as Π_{−c ≤ j ≤ c, j ≠ 0} Pr(w_{t+j} | w_t)
• Pr(w_{t+j} | w_t) = exp(Φ_{w_t}ᵀ Φ′_{w_{t+j}}) / Σ_{w=1}^{|V|} exp(Φ_{w_t}ᵀ Φ′_w)
• Φ_w and Φ′_w: the input and output vectors of word w
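The softmax above can be sketched directly (a toy, self-contained illustration; Phi and Phi_out play the roles of Φ and Φ′, and all data are random):

```python
import numpy as np

def skipgram_prob(Phi, Phi_out, t, j):
    """Pr(w_j | w_t) = exp(Phi[w_t] . Phi'[w_j]) /
       sum_w exp(Phi[w_t] . Phi'[w])  -- softmax over the vocabulary."""
    scores = Phi_out @ Phi[t]   # one score per vocabulary word
    scores -= scores.max()      # shift for numerical stability
    p = np.exp(scores)
    return p[j] / p.sum()

rng = np.random.default_rng(0)
V, dim = 5, 3                        # toy vocabulary size and embedding dim
Phi = rng.normal(size=(V, dim))      # input vectors
Phi_out = rng.normal(size=(V, dim))  # output vectors
p = skipgram_prob(Phi, Phi_out, t=0, j=2)
```

The denominator sums over the whole vocabulary, which is exactly the cost that negative sampling (next slide) avoids.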
4. Background : Language Models
Negative Sampling
• Negative sampling selects words that are not in the context at random, instead of considering all
words in the vocabulary.
• If a word w appears in the context of another word w′, then the vector embedding of w ends up closer to
that of w′ than to any other randomly chosen word from the vocabulary.
• The learned word embeddings thus preserve semantics.
• We can therefore utilize word embedding models to learn dimensions of similarity between subgraphs:
similar subgraphs will be close to each other in the embedding space.
5. Method
Learning Sub-Graph Representations
• Following the language modeling convention, the only required input is a corpus and a vocabulary of
subgraphs for subgraph2vec to learn representations.
• Given a dataset of graphs, subgraph2vec considers the neighbourhoods of rooted subgraphs around
every rooted subgraph as its corpus, and the set of all rooted subgraphs around every node in every graph
as its vocabulary.
• Following the language model training process with these subgraphs and their contexts, subgraph2vec
learns the intended subgraph embeddings.
5. Method
Algorithm : subgraph2vec
• The algorithm consists of two main components
• A procedure to generate rooted subgraphs
around every node in a given graph
• A procedure to learn embeddings of
those subgraphs
• Learn δ-dimensional embeddings of subgraphs
(up to degree D) from all the graphs in dataset
𝒢 in e epochs. We begin by building a
vocabulary of all the subgraphs.
• Then the embeddings for all subgraphs in the
vocabulary (Φ) are initialized randomly.
• We then proceed with learning the embeddings
over several epochs, iterating over the graphs in 𝒢.
5. Method
Extracting Rooted Subgraphs
• To extract these subgraphs, we follow the well-known WL relabeling process, which lays the basis for the
WL kernel and the WL test of graph isomorphism.
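The WL relabeling step can be sketched as follows (a simplified illustration: degree-d subgraph identifiers are built as strings from a node's degree-(d−1) identifier and those of its neighbours; function and variable names are mine, not the paper's):

```python
def wl_subgraphs(adj, labels, D):
    """WL relabeling sketch: for every node v and degree d = 0..D, build a
    string identifier for the degree-d rooted subgraph sg_v^(d) from the
    node's previous identifier and its sorted neighbours' identifiers."""
    sg = {v: {0: str(labels[v])} for v in adj}
    for d in range(1, D + 1):
        for v in adj:
            neigh = ",".join(sorted(sg[u][d - 1] for u in adj[v]))
            sg[v][d] = sg[v][d - 1] + "|" + neigh
    return sg

# Toy path graph a - b - c with identical initial labels:
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
sg = wl_subgraphs(adj, {"a": 1, "b": 1, "c": 1}, D=2)
```

Note how relabeling separates the middle node b from the endpoints a and c already at degree 1, even though all initial labels are equal.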
5. Method
Radial Skipgram – Modeling the radial context
• Unlike words in a traditional text corpus, subgraphs do not have a linear co-occurrence relationship.
• We consider the breadth-first neighbours of the root node as its context, as this directly follows from the
definition of the WL relabeling process.
• Define the context of a degree-d subgraph sg_v^(d) rooted at v as the multiset of subgraphs of
degrees d−1, d, and d+1 rooted at each of the neighbours of v (lines 2-6 in Algorithm 3).
• Subgraphs of degrees d−1, d, and d+1 are included in the context of a degree-d subgraph because a
degree-d subgraph is likely to be rather similar to subgraphs of degrees that are close to d.
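The radial context just defined can be sketched as follows (toy data; the WL identifiers in sg are assumed to be precomputed by the WL relabeling procedure):

```python
def radial_context(v, d, sg, adj, D):
    """Context of sg_v^(d): the multiset of subgraphs of degrees
    d-1, d and d+1 rooted at each neighbour of v (degrees clipped to [0, D])."""
    ctx = []
    for u in adj[v]:                    # breadth-first neighbours of the root
        for dd in (d - 1, d, d + 1):
            if 0 <= dd <= D:            # skip degrees outside the valid range
                ctx.append(sg[u][dd])
    return ctx

# Toy precomputed WL identifiers sg[node][degree] for the path a - b - c:
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
sg = {"a": {0: "1", 1: "1|1"},
      "b": {0: "1", 1: "1|1,1"},
      "c": {0: "1", 1: "1|1"}}
ctx = radial_context("b", 1, sg, adj, D=1)
```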
5. Method
Radial Skipgram – Vanilla Skip Gram
• The vanilla skipgram language model captures fixed-length linear contexts over the words in a given
sentence.
• For learning a subgraph's radial context, the vanilla skipgram model therefore cannot be used.
5. Method
Radial Skipgram – Modification
• The vocabulary of subgraphs could be in the several thousands/millions in the case of large graphs.
• Training such models would require a large amount of computational resources.
• To alleviate this bottleneck, we approximate the probability distribution using the negative sampling
approach.
5. Method
Negative sampling
• Since sg_cont ∈ SGvocab and SGvocab is very large, computing Pr(sg_cont | Φ(sg_v^(d))) is
prohibitively expensive.
• We follow the negative sampling strategy to calculate the above-mentioned posterior probability.
• In every training cycle of Algorithm 3, we choose a fixed number of subgraphs (denoted negsamples) as
negative samples and update their embeddings as well.
• Negative samples adhere to the following conditions:
• If negsamples = {sg_neg1, sg_neg2, …}, then negsamples ⊂ SGvocab, |negsamples| ≪ |SGvocab|,
and negsamples ∩ context_v^(d) = {}
• Φ(sg_v^(d)) is made closer to the embeddings of all the subgraphs in its context and, at the
same time, distanced from the embeddings of a fixed number of subgraphs that are not in its context.
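A simplified sketch of how such a negative-sampling update could look as a single SGD step (an illustration of the idea, not the paper's exact Algorithm 3; all names are mine):

```python
import numpy as np

def neg_sampling_step(Phi, target, context, negatives, lr=0.1):
    """One simplified SGD step of the negative-sampling objective: pull
    Phi[target] toward the embeddings of its context subgraphs and push
    it away from the sampled non-context ('negative') subgraphs."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    pairs = [(i, 1.0) for i in context] + [(i, 0.0) for i in negatives]
    for idx, label in pairs:
        g = label - sigmoid(Phi[target] @ Phi[idx])  # log-loss gradient scalar
        dt = lr * g * Phi[idx]          # update for the target vector...
        Phi[idx] += lr * g * Phi[target]  # ...and for the context/negative one
        Phi[target] += dt
    return Phi

rng = np.random.default_rng(1)
Phi = rng.normal(size=(4, 3))           # 4 toy subgraph embeddings
before_ctx = Phi[0] @ Phi[1]
before_neg = Phi[0] @ Phi[2]
after_pos = neg_sampling_step(Phi.copy(), target=0, context=[1], negatives=[])
after_neg = neg_sampling_step(Phi.copy(), target=0, context=[], negatives=[2])
```

Each step touches only one context/negative pair at a time, so the cost per update is independent of |SGvocab|.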
5. Method
Optimization
• A stochastic gradient descent (SGD) optimizer is used to optimize these parameters.
• Derivatives are estimated using the back-propagation algorithm.
• The learning rate α is empirically tuned.
5. Method
Relation to Deep WL kernel
• Each subgraph in SGvocab is obtained using the WL relabelling strategy, and hence represents
the WL neighbourhood labels of a node.
• Learning latent representations of such subgraphs therefore amounts to learning representations of WL
neighbourhood labels.
• Hence, once the embeddings of all the subgraphs in SGvocab are learnt using Algorithm 1, one could
use them to build the deep learning variant of the WL kernel among the graphs in 𝒢.
5. Method
Use cases
• Graph Classification
• Graph Clustering
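One simple way to use the learned subgraph embeddings for graph classification: sum the embeddings of a graph's subgraphs into a graph-level feature vector and feed it to any off-the-shelf classifier. Here a dependency-free nearest-centroid rule stands in for the SVMs used in the paper; all names and data are toy assumptions of mine:

```python
import numpy as np

def graph_embedding(subgraph_ids, Phi, vocab):
    """Represent a whole graph by summing the learned embeddings of the
    rooted subgraphs it contains (a simple graph-level feature vector)."""
    return sum(Phi[vocab[s]] for s in subgraph_ids)

# Toy "learned" embeddings for a 3-subgraph vocabulary:
vocab = {"sgA": 0, "sgB": 1, "sgC": 2}
Phi = np.array([[1.0, 0.0],
                [0.9, 0.1],
                [0.0, 1.0]])

g1 = graph_embedding(["sgA", "sgB"], Phi, vocab)   # training graph, class1
g2 = graph_embedding(["sgC", "sgC"], Phi, vocab)   # training graph, class2
query = graph_embedding(["sgA", "sgA"], Phi, vocab)

# Nearest-centroid stand-in for an SVM:
pred = min([("class1", g1), ("class2", g2)],
           key=lambda kv: np.linalg.norm(query - kv[1]))[0]
```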
6. Evaluation
Datasets
• MUTAG, PTC, PROTEINS, NCI1, NCI109
6. Evaluation
Results and Discussion
• Accuracy
• SVMs with subgraph2vec’s embeddings achieve better accuracy on 3 datasets and comparable
accuracy on the remaining 2 datasets
6. Evaluation
Results and Discussion
• Efficiency
• It is important to note that classification on these benchmark datasets is much simpler than real-world
classification tasks.
• Even trivial features, such as the number of nodes in a graph, achieve accuracies comparable to
the SOTA graph kernels on these benchmarks.
7. Conclusion
• Presented subgraph2vec, an unsupervised representation learning technique to learn embeddings of
rooted subgraphs that exist in large graphs
• Through large-scale experiments involving benchmark and real-world graph classification and
clustering datasets, we demonstrated that the subgraph embeddings learnt by our approach can be
used in conjunction with classifiers such as CNNs and SVMs, and with relational data clustering
algorithms, to achieve significant improvements in accuracy.
More Related Content

Similar to NS-CUK Seminar: H.B.Kim, Review on "subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs from Large Graphs", 2016

gSpan algorithm
gSpan algorithmgSpan algorithm
gSpan algorithm
Sadik Mussah
 
240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho...
240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho...240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho...
240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho...
thanhdowork
 
240115_Thanh_LabSeminar[Don't walk, skip! online learning of multi-scale netw...
240115_Thanh_LabSeminar[Don't walk, skip! online learning of multi-scale netw...240115_Thanh_LabSeminar[Don't walk, skip! online learning of multi-scale netw...
240115_Thanh_LabSeminar[Don't walk, skip! online learning of multi-scale netw...
thanhdowork
 
Ivd soda-2019
Ivd soda-2019Ivd soda-2019
Ivd soda-2019
AkankshaAgrawal55
 
240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx
240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx
240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx
thanhdowork
 
Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-...
Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-...Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-...
Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-...
ssuser2624f71
 
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
MLconf
 
240506_Thuy_Labseminar[GraphPrompt: Unifying Pre-Training and Downstream Task...
240506_Thuy_Labseminar[GraphPrompt: Unifying Pre-Training and Downstream Task...240506_Thuy_Labseminar[GraphPrompt: Unifying Pre-Training and Downstream Task...
240506_Thuy_Labseminar[GraphPrompt: Unifying Pre-Training and Downstream Task...
thanhdowork
 
Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16
Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16
Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16
MLconf
 
Generalized Linear Models in Spark MLlib and SparkR by Xiangrui Meng
Generalized Linear Models in Spark MLlib and SparkR by Xiangrui MengGeneralized Linear Models in Spark MLlib and SparkR by Xiangrui Meng
Generalized Linear Models in Spark MLlib and SparkR by Xiangrui Meng
Spark Summit
 
Generalized Linear Models in Spark MLlib and SparkR
Generalized Linear Models in Spark MLlib and SparkRGeneralized Linear Models in Spark MLlib and SparkR
Generalized Linear Models in Spark MLlib and SparkR
Databricks
 
NS-CUK Seminar: S.T.Nguyen, Review on "On Generalized Degree Fairness in Grap...
NS-CUK Seminar: S.T.Nguyen, Review on "On Generalized Degree Fairness in Grap...NS-CUK Seminar: S.T.Nguyen, Review on "On Generalized Degree Fairness in Grap...
NS-CUK Seminar: S.T.Nguyen, Review on "On Generalized Degree Fairness in Grap...
Network Science Lab, The Catholic University of Korea
 
Recurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRURecurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRU
ananth
 
Locally densest subgraph discovery
Locally densest subgraph discoveryLocally densest subgraph discovery
Locally densest subgraph discovery
aftab alam
 
Deep Learning for Personalized Search and Recommender Systems
Deep Learning for Personalized Search and Recommender SystemsDeep Learning for Personalized Search and Recommender Systems
Deep Learning for Personalized Search and Recommender Systems
Benjamin Le
 
Graph Techniques for Natural Language Processing
Graph Techniques for Natural Language ProcessingGraph Techniques for Natural Language Processing
Graph Techniques for Natural Language Processing
Sujit Pal
 
Cjb0912010 lz algorithms
Cjb0912010 lz algorithmsCjb0912010 lz algorithms
Cjb0912010 lz algorithms
RAJAN ST
 
Machine Learning workshop by GDSC Amity University Chhattisgarh
Machine Learning workshop by GDSC Amity University ChhattisgarhMachine Learning workshop by GDSC Amity University Chhattisgarh
Machine Learning workshop by GDSC Amity University Chhattisgarh
Poorabpatel
 
Deep learning from a novice perspective
Deep learning from a novice perspectiveDeep learning from a novice perspective
Deep learning from a novice perspective
Anirban Santara
 
Deep Learning for Machine Translation
Deep Learning for Machine TranslationDeep Learning for Machine Translation
Deep Learning for Machine Translation
Matīss ‎‎‎‎‎‎‎  
 

Similar to NS-CUK Seminar: H.B.Kim, Review on "subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs from Large Graphs", 2016 (20)

gSpan algorithm
gSpan algorithmgSpan algorithm
gSpan algorithm
 
240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho...
240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho...240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho...
240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho...
 
240115_Thanh_LabSeminar[Don't walk, skip! online learning of multi-scale netw...
240115_Thanh_LabSeminar[Don't walk, skip! online learning of multi-scale netw...240115_Thanh_LabSeminar[Don't walk, skip! online learning of multi-scale netw...
240115_Thanh_LabSeminar[Don't walk, skip! online learning of multi-scale netw...
 
Ivd soda-2019
Ivd soda-2019Ivd soda-2019
Ivd soda-2019
 
240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx
240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx
240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx
 
Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-...
Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-...Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-...
Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-...
 
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
 
240506_Thuy_Labseminar[GraphPrompt: Unifying Pre-Training and Downstream Task...
240506_Thuy_Labseminar[GraphPrompt: Unifying Pre-Training and Downstream Task...240506_Thuy_Labseminar[GraphPrompt: Unifying Pre-Training and Downstream Task...
240506_Thuy_Labseminar[GraphPrompt: Unifying Pre-Training and Downstream Task...
 
Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16
Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16
Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16
 
Generalized Linear Models in Spark MLlib and SparkR by Xiangrui Meng
Generalized Linear Models in Spark MLlib and SparkR by Xiangrui MengGeneralized Linear Models in Spark MLlib and SparkR by Xiangrui Meng
Generalized Linear Models in Spark MLlib and SparkR by Xiangrui Meng
 
Generalized Linear Models in Spark MLlib and SparkR
Generalized Linear Models in Spark MLlib and SparkRGeneralized Linear Models in Spark MLlib and SparkR
Generalized Linear Models in Spark MLlib and SparkR
 
NS-CUK Seminar: S.T.Nguyen, Review on "On Generalized Degree Fairness in Grap...
NS-CUK Seminar: S.T.Nguyen, Review on "On Generalized Degree Fairness in Grap...NS-CUK Seminar: S.T.Nguyen, Review on "On Generalized Degree Fairness in Grap...
NS-CUK Seminar: S.T.Nguyen, Review on "On Generalized Degree Fairness in Grap...
 
Recurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRURecurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRU
 
Locally densest subgraph discovery
Locally densest subgraph discoveryLocally densest subgraph discovery
Locally densest subgraph discovery
 
Deep Learning for Personalized Search and Recommender Systems
Deep Learning for Personalized Search and Recommender SystemsDeep Learning for Personalized Search and Recommender Systems
Deep Learning for Personalized Search and Recommender Systems
 
Graph Techniques for Natural Language Processing
Graph Techniques for Natural Language ProcessingGraph Techniques for Natural Language Processing
Graph Techniques for Natural Language Processing
 
Cjb0912010 lz algorithms
Cjb0912010 lz algorithmsCjb0912010 lz algorithms
Cjb0912010 lz algorithms
 
Machine Learning workshop by GDSC Amity University Chhattisgarh
Machine Learning workshop by GDSC Amity University ChhattisgarhMachine Learning workshop by GDSC Amity University Chhattisgarh
Machine Learning workshop by GDSC Amity University Chhattisgarh
 
Deep learning from a novice perspective
Deep learning from a novice perspectiveDeep learning from a novice perspective
Deep learning from a novice perspective
 
Deep Learning for Machine Translation
Deep Learning for Machine TranslationDeep Learning for Machine Translation
Deep Learning for Machine Translation
 

More from ssuser4b1f48

NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca...
NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca...NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca...
NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca...
ssuser4b1f48
 
NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph R...
NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph R...NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph R...
NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph R...
ssuser4b1f48
 
NS-CUK Seminar: H.B.Kim, Review on "Cluster-GCN: An Efficient Algorithm for ...
NS-CUK Seminar: H.B.Kim,  Review on "Cluster-GCN: An Efficient Algorithm for ...NS-CUK Seminar: H.B.Kim,  Review on "Cluster-GCN: An Efficient Algorithm for ...
NS-CUK Seminar: H.B.Kim, Review on "Cluster-GCN: An Efficient Algorithm for ...
ssuser4b1f48
 
NS-CUK Seminar: H.E.Lee, Review on "Weisfeiler and Leman Go Neural: Higher-O...
NS-CUK Seminar: H.E.Lee,  Review on "Weisfeiler and Leman Go Neural: Higher-O...NS-CUK Seminar: H.E.Lee,  Review on "Weisfeiler and Leman Go Neural: Higher-O...
NS-CUK Seminar: H.E.Lee, Review on "Weisfeiler and Leman Go Neural: Higher-O...
ssuser4b1f48
 
NS-CUK Seminar:V.T.Hoang, Review on "GRPE: Relative Positional Encoding for G...
NS-CUK Seminar:V.T.Hoang, Review on "GRPE: Relative Positional Encoding for G...NS-CUK Seminar:V.T.Hoang, Review on "GRPE: Relative Positional Encoding for G...
NS-CUK Seminar:V.T.Hoang, Review on "GRPE: Relative Positional Encoding for G...
ssuser4b1f48
 
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
ssuser4b1f48
 
Aug 22nd, 2023: Case Studies - The Art and Science of Animation Production)
Aug 22nd, 2023: Case Studies - The Art and Science of Animation Production)Aug 22nd, 2023: Case Studies - The Art and Science of Animation Production)
Aug 22nd, 2023: Case Studies - The Art and Science of Animation Production)
ssuser4b1f48
 
Aug 17th, 2023: Case Studies - Examining Gamification through Virtual/Augment...
Aug 17th, 2023: Case Studies - Examining Gamification through Virtual/Augment...Aug 17th, 2023: Case Studies - Examining Gamification through Virtual/Augment...
Aug 17th, 2023: Case Studies - Examining Gamification through Virtual/Augment...
ssuser4b1f48
 
Aug 10th, 2023: Case Studies - The Power of eXtended Reality (XR) with 360°
Aug 10th, 2023: Case Studies - The Power of eXtended Reality (XR) with 360°Aug 10th, 2023: Case Studies - The Power of eXtended Reality (XR) with 360°
Aug 10th, 2023: Case Studies - The Power of eXtended Reality (XR) with 360°
ssuser4b1f48
 
Aug 8th, 2023: Case Studies - Utilizing eXtended Reality (XR) in Drones)
Aug 8th, 2023: Case Studies - Utilizing eXtended Reality (XR) in Drones)Aug 8th, 2023: Case Studies - Utilizing eXtended Reality (XR) in Drones)
Aug 8th, 2023: Case Studies - Utilizing eXtended Reality (XR) in Drones)
ssuser4b1f48
 
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
ssuser4b1f48
 
NS-CUK Seminar: H.E.Lee, Review on "Gated Graph Sequence Neural Networks", I...
NS-CUK Seminar: H.E.Lee,  Review on "Gated Graph Sequence Neural Networks", I...NS-CUK Seminar: H.E.Lee,  Review on "Gated Graph Sequence Neural Networks", I...
NS-CUK Seminar: H.E.Lee, Review on "Gated Graph Sequence Neural Networks", I...
ssuser4b1f48
 
NS-CUK Seminar:V.T.Hoang, Review on "Augmentation-Free Self-Supervised Learni...
NS-CUK Seminar:V.T.Hoang, Review on "Augmentation-Free Self-Supervised Learni...NS-CUK Seminar:V.T.Hoang, Review on "Augmentation-Free Self-Supervised Learni...
NS-CUK Seminar:V.T.Hoang, Review on "Augmentation-Free Self-Supervised Learni...
ssuser4b1f48
 
NS-CUK Journal club: H.E.Lee, Review on " A biomedical knowledge graph-based ...
NS-CUK Journal club: H.E.Lee, Review on " A biomedical knowledge graph-based ...NS-CUK Journal club: H.E.Lee, Review on " A biomedical knowledge graph-based ...
NS-CUK Journal club: H.E.Lee, Review on " A biomedical knowledge graph-based ...
ssuser4b1f48
 
NS-CUK Seminar: H.E.Lee, Review on "PTE: Predictive Text Embedding through L...
NS-CUK Seminar: H.E.Lee,  Review on "PTE: Predictive Text Embedding through L...NS-CUK Seminar: H.E.Lee,  Review on "PTE: Predictive Text Embedding through L...
NS-CUK Seminar: H.E.Lee, Review on "PTE: Predictive Text Embedding through L...
ssuser4b1f48
 
NS-CUK Seminar: H.B.Kim, Review on "Inductive Representation Learning on Lar...
NS-CUK Seminar: H.B.Kim,  Review on "Inductive Representation Learning on Lar...NS-CUK Seminar: H.B.Kim,  Review on "Inductive Representation Learning on Lar...
NS-CUK Seminar: H.B.Kim, Review on "Inductive Representation Learning on Lar...
ssuser4b1f48
 
NS-CUK Seminar: H.E.Lee, Review on "PTE: Predictive Text Embedding through L...
NS-CUK Seminar: H.E.Lee,  Review on "PTE: Predictive Text Embedding through L...NS-CUK Seminar: H.E.Lee,  Review on "PTE: Predictive Text Embedding through L...
NS-CUK Seminar: H.E.Lee, Review on "PTE: Predictive Text Embedding through L...
ssuser4b1f48
 
NS-CUK Seminar: J.H.Lee, Review on "Relational Self-Supervised Learning on Gr...
NS-CUK Seminar: J.H.Lee, Review on "Relational Self-Supervised Learning on Gr...NS-CUK Seminar: J.H.Lee, Review on "Relational Self-Supervised Learning on Gr...
NS-CUK Seminar: J.H.Lee, Review on "Relational Self-Supervised Learning on Gr...
ssuser4b1f48
 
NS-CUK Seminar: H.B.Kim, Review on "metapath2vec: Scalable representation le...
NS-CUK Seminar: H.B.Kim,  Review on "metapath2vec: Scalable representation le...NS-CUK Seminar: H.B.Kim,  Review on "metapath2vec: Scalable representation le...
NS-CUK Seminar: H.B.Kim, Review on "metapath2vec: Scalable representation le...
ssuser4b1f48
 
NS-CUK Seminar: H.E.Lee, Review on "Graph Star Net for Generalized Multi-Tas...
NS-CUK Seminar: H.E.Lee,  Review on "Graph Star Net for Generalized Multi-Tas...NS-CUK Seminar: H.E.Lee,  Review on "Graph Star Net for Generalized Multi-Tas...
NS-CUK Seminar: H.E.Lee, Review on "Graph Star Net for Generalized Multi-Tas...
ssuser4b1f48
 

More from ssuser4b1f48 (20)

NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca...
NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca...NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca...
NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca...
 
NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph R...
NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph R...NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph R...
NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph R...
 
NS-CUK Seminar: H.B.Kim, Review on "Cluster-GCN: An Efficient Algorithm for ...
NS-CUK Seminar: H.B.Kim,  Review on "Cluster-GCN: An Efficient Algorithm for ...NS-CUK Seminar: H.B.Kim,  Review on "Cluster-GCN: An Efficient Algorithm for ...
NS-CUK Seminar: H.B.Kim, Review on "Cluster-GCN: An Efficient Algorithm for ...
 
NS-CUK Seminar: H.E.Lee, Review on "Weisfeiler and Leman Go Neural: Higher-O...
NS-CUK Seminar: H.E.Lee,  Review on "Weisfeiler and Leman Go Neural: Higher-O...NS-CUK Seminar: H.E.Lee,  Review on "Weisfeiler and Leman Go Neural: Higher-O...
NS-CUK Seminar: H.E.Lee, Review on "Weisfeiler and Leman Go Neural: Higher-O...
 
NS-CUK Seminar:V.T.Hoang, Review on "GRPE: Relative Positional Encoding for G...
NS-CUK Seminar:V.T.Hoang, Review on "GRPE: Relative Positional Encoding for G...NS-CUK Seminar:V.T.Hoang, Review on "GRPE: Relative Positional Encoding for G...
NS-CUK Seminar:V.T.Hoang, Review on "GRPE: Relative Positional Encoding for G...
 
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
 
Aug 22nd, 2023: Case Studies - The Art and Science of Animation Production)
Aug 22nd, 2023: Case Studies - The Art and Science of Animation Production)Aug 22nd, 2023: Case Studies - The Art and Science of Animation Production)
Aug 22nd, 2023: Case Studies - The Art and Science of Animation Production)
 
Aug 17th, 2023: Case Studies - Examining Gamification through Virtual/Augment...
Aug 17th, 2023: Case Studies - Examining Gamification through Virtual/Augment...Aug 17th, 2023: Case Studies - Examining Gamification through Virtual/Augment...
Aug 17th, 2023: Case Studies - Examining Gamification through Virtual/Augment...
 
Aug 10th, 2023: Case Studies - The Power of eXtended Reality (XR) with 360°
Aug 10th, 2023: Case Studies - The Power of eXtended Reality (XR) with 360°Aug 10th, 2023: Case Studies - The Power of eXtended Reality (XR) with 360°
Aug 10th, 2023: Case Studies - The Power of eXtended Reality (XR) with 360°
 
Aug 8th, 2023: Case Studies - Utilizing eXtended Reality (XR) in Drones)
Aug 8th, 2023: Case Studies - Utilizing eXtended Reality (XR) in Drones)Aug 8th, 2023: Case Studies - Utilizing eXtended Reality (XR) in Drones)
Aug 8th, 2023: Case Studies - Utilizing eXtended Reality (XR) in Drones)
 
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for...
 
NS-CUK Seminar: H.B.Kim, Review on "subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs from Large Graphs", 2016

6
1. Introduction
Related Work
• DeepWalk and node2vec learn node embeddings by generating random walks in a single graph.
• Both works rely on the existence of node labels for at least a small portion of the nodes and take a semi-supervised approach to learning node embeddings.
• subgraph2vec learns subgraph embeddings in an unsupervised manner.
• Graph kernel categories:
• Kernels for limited-size subgraphs
• Kernels based on subtree patterns
• Kernels based on walks and paths
• subgraph2vec is complementary to these existing graph kernels when the substructures exhibit reasonable similarities among them.
7
1. Introduction
Problem Statements
• Consider the problem of learning distributed representations of rooted subgraphs from a given set of graphs.
• G = (V, E, λ) : a labeled graph
• sg = (V_sg, E_sg, λ_sg) : sg is a subgraph of G iff there exists an injective mapping μ : V_sg → V such that (v1, v2) ∈ E_sg iff (μ(v1), μ(v2)) ∈ E
• 𝒢 = {G1, G2, ..., Gn} : a set of graphs
• D : a positive integer (the maximum degree of the rooted subgraphs considered)
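The injective-mapping condition above can be sketched as a small check. This is a hypothetical helper, not from the paper; graphs are assumed to be given as vertex sets and sets of directed edge pairs:

```python
def is_subgraph_under(mu, V_sg, E_sg, V, E):
    """Check that sg = (V_sg, E_sg) maps into G = (V, E) via mu:
    mu must be injective into V, and (v1, v2) in E_sg iff
    (mu(v1), mu(v2)) in E."""
    if any(mu[v] not in V for v in V_sg):
        return False
    if len({mu[v] for v in V_sg}) != len(V_sg):  # injectivity
        return False
    return all(((v1, v2) in E_sg) == ((mu[v1], mu[v2]) in E)
               for v1 in V_sg for v2 in V_sg if v1 != v2)
```

For example, mapping an edge ('a', 'b') onto the edge (1, 2) of a path graph satisfies the condition, while mapping it onto the non-adjacent pair (1, 3) does not.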
8
4. Background : Language Models
Traditional language models
• Traditional language models determine the likelihood of a sequence of words appearing in a corpus.
• Pr(w_t | w_1, ..., w_{t-1})
• Estimate the likelihood of observing the target word w_t given the previous words w_1, ..., w_{t-1} observed thus far.
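As a toy illustration of estimating such a likelihood (a count-based bigram approximation, not part of the paper):

```python
from collections import Counter

# Toy corpus; Pr(w_t | w_{t-1}) is approximated by bigram counts.
corpus = "the cat sat on the mat the cat ate".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def bigram_prob(prev, word):
    """Pr(word | prev) ~= count(prev, word) / count(prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

# 2 of the 3 occurrences of "the" precede "cat", so Pr(cat | the) = 2/3
```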
9
4. Background : Language Models
Neural language models
• Recently developed neural language models focus on learning distributed vector representations of words.
• These models improve on traditional n-gram models by using vector embeddings for words.
• Neural language models exploit the notion of context, where a context is defined as a fixed number of words surrounding the target word.
• Σ_{t=1}^{T} log Pr(w_t | w_1, ..., w_{t-1})
• w_1, ..., w_{t-1} are the context of the target word w_t.
10
4. Background : Language Models
Skip Gram
• The skipgram model maximizes the co-occurrence probability among words that appear within a given context window.
• Given a context window of size c and a target word w_t, the skipgram model attempts to predict the words that appear in the context of the target word, (w_{t-c}, ..., w_{t+c}).
• Σ_{t=1}^{T} log Pr(w_{t-c}, ..., w_{t+c} | w_t)
• Pr(w_{t-c}, ..., w_{t+c} | w_t) : computed as Π_{-c ≤ j ≤ c, j ≠ 0} Pr(w_{t+j} | w_t)
• Pr(w_{t+j} | w_t) = exp(Φ_{w_t}^T Φ'_{w_{t+j}}) / Σ_{w=1}^{|V|} exp(Φ_{w_t}^T Φ'_w)
• Φ_w and Φ'_w : the input and output vectors of word w
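The softmax above can be sketched numerically. The vocabulary size and embedding dimension below are toy values, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
V, dim = 5, 3                        # toy vocabulary size and embedding dim
Phi = rng.normal(size=(V, dim))      # input vectors (one row per word)
Phi_out = rng.normal(size=(V, dim))  # output vectors

def skipgram_prob(target, ctx):
    """Pr(w_ctx | w_target) via the full softmax over the vocabulary."""
    scores = Phi_out @ Phi[target]           # Phi'_w . Phi_{w_t} for every w
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[ctx]
```

Because the denominator sums over the whole vocabulary, the probabilities over all context words sum to 1; this full normalization is exactly what negative sampling (next slide) avoids.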
11
4. Background : Language Models
Negative Sampling
• Negative sampling selects words that are not in the context at random, instead of considering all words in the vocabulary.
• If a word w appears in the context of another word w′, then the vector embedding of w is closer to that of w′ than to that of any other randomly chosen word from the vocabulary.
• The learned word embeddings preserve semantics.
• We can therefore utilize word-embedding models to learn dimensions of similarity between subgraphs:
• similar subgraphs will be close to each other in the embedding space.
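A minimal sketch of the resulting pairwise objective (standard word2vec-style negative sampling, stated generically rather than taken from the paper's code): maximize log σ(Φ_t·Φ'_c) for a true context pair and log σ(−Φ_t·Φ'_n) for each negative sample.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ns_loss(phi_t, phi_c, phi_negs):
    """Negative log-likelihood of one (target, context) pair with negatives."""
    pos = np.log(sigmoid(phi_t @ phi_c))
    neg = sum(np.log(sigmoid(-phi_t @ phi_n)) for phi_n in phi_negs)
    return -(pos + neg)
```

A context vector aligned with the target yields a lower loss than an anti-aligned one, which is what drives in-context embeddings closer together.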
12
5. Method
Learning Sub-Graph Representations
• Following the language-modeling convention, the only required inputs for subgraph2vec to learn representations are a corpus and a vocabulary of subgraphs.
• Given a dataset of graphs, subgraph2vec considers the neighbourhoods of rooted subgraphs around every rooted subgraph as its corpus, and the set of all rooted subgraphs around every node in every graph as its vocabulary.
• Following the language-model training process with the subgraphs and their contexts, subgraph2vec learns the intended subgraph embeddings.
13
5. Method
Algorithm : subgraph2vec
• The algorithm consists of two main components:
• a procedure to generate rooted subgraphs around every node in a given graph, and
• a procedure to learn embeddings of those subgraphs.
• Learn δ-dimensional embeddings of subgraphs (up to degree D) from all the graphs in dataset 𝒢 in e epochs, beginning by building a vocabulary of all the subgraphs.
14
5. Method
Algorithm : subgraph2vec
• The algorithm consists of two main components:
• a procedure to generate rooted subgraphs around every node in a given graph, and
• a procedure to learn embeddings of those subgraphs.
• Learn δ-dimensional embeddings of subgraphs (up to degree D) from all the graphs in dataset 𝒢 in e epochs, beginning by building a vocabulary of all the subgraphs.
• The embeddings for all subgraphs in the vocabulary (Φ) are then initialized randomly.
• We then proceed with learning the embeddings over several epochs, iterating over the graphs in 𝒢.
15
5. Method
Extracting Rooted Subgraphs
• To extract these subgraphs, we follow the well-known WL relabeling process, which lays the basis for the WL kernel and the WL test of graph isomorphism.
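A simplified sketch of the idea: a recursive WL-style labelling on an adjacency-list graph, where the degree-d label of a node is built from its own degree-(d−1) label and the sorted degree-(d−1) labels of its neighbours. The paper's actual extraction procedure (its Algorithm 2) differs in detail; this is only an illustration.

```python
def wl_subgraph(adj, labels, v, d):
    """WL-style label of the degree-d rooted subgraph at node v."""
    if d == 0:
        return str(labels[v])
    neigh = sorted(wl_subgraph(adj, labels, u, d - 1) for u in adj[v])
    return wl_subgraph(adj, labels, v, d - 1) + "(" + ",".join(neigh) + ")"
```

On a uniformly labelled triangle, all three nodes receive identical labels at every degree, as expected for isomorphic rooted neighbourhoods.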
16
5. Method
Radial Skipgram – Modeling the radial context
• Unlike words in a traditional text corpus, subgraphs do not have a linear co-occurrence relationship.
• We consider the breadth-first neighbours of the root node as its context, as this directly follows from the definition of the WL relabeling process.
• Define the context of a degree-d subgraph sg_v^(d) rooted at v as the multiset of subgraphs of degrees d−1, d, and d+1 rooted at each of the neighbours of v (lines 2–6 in Algorithm 3).
• Subgraphs of degrees d−1, d, and d+1 are taken to be in the context of a subgraph of degree d,
• since a degree-d subgraph is likely to be rather similar to subgraphs of degrees close to d.
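The context construction described above can be sketched as follows. The WL-style labelling helper is a simplified stand-in for the paper's extraction procedure, and skipping degrees outside [0, D] is an assumption about the boundary handling:

```python
def wl_label(adj, labels, v, d):
    """Simplified WL-style rooted-subgraph label (illustrative stand-in)."""
    if d == 0:
        return str(labels[v])
    neigh = sorted(wl_label(adj, labels, u, d - 1) for u in adj[v])
    return wl_label(adj, labels, v, d - 1) + "(" + ",".join(neigh) + ")"

def radial_context(adj, labels, v, d, D):
    """Multiset of subgraphs of degrees d-1, d, d+1 rooted at v's neighbours."""
    ctx = []
    for u in adj[v]:
        for dd in (d - 1, d, d + 1):
            if 0 <= dd <= D:       # degrees outside [0, D] are skipped
                ctx.append(wl_label(adj, labels, u, dd))
    return ctx
```

On the path graph a–b–a, the degree-0 subgraph at the centre node has a context drawn from both neighbours at degrees 0 and 1.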
17
5. Method
Radial Skipgram – Vanilla Skip Gram
• The vanilla skipgram language model captures fixed-length linear contexts over the words in a given sentence.
• For learning a subgraph's radial context, the vanilla skipgram model cannot be used.
18
5. Method
Radial Skipgram – Modification
• The size of the subgraph vocabulary could be in the several thousands or millions in the case of large graphs.
• Training such models would require a large amount of computational resources.
• To alleviate this bottleneck, we approximate the probability distribution using the negative sampling approach.
19
5. Method
Negative sampling
• Since sg_cont ∈ SG_vocab and SG_vocab is very large, computing Pr(sg_cont | Φ(sg_v^(d))) is prohibitively expensive.
• We follow the negative sampling strategy to calculate the above-mentioned posterior probability.
• In every training cycle of Algorithm 3, we choose a fixed number of subgraphs (denoted negsamples) as negative samples and update their embeddings as well.
• Negative samples adhere to the following conditions:
• if negsamples = {sg_neg1, sg_neg2, ...}, then negsamples ⊂ SG_vocab, |negsamples| ≪ |SG_vocab|, and negsamples ∩ context_v^(d) = {}
• This brings Φ(sg_v^(d)) closer to the embeddings of all the subgraphs in its context and, at the same time, distances it from the embeddings of a fixed number of subgraphs that are not in its context.
20
5. Method
Optimization
• A stochastic gradient descent (SGD) optimizer is used to optimize these parameters.
• Derivatives are estimated using the back-propagation algorithm.
• The learning rate α is empirically tuned.
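One SGD step for the negative-sampling objective can be sketched as below. The gradients follow the standard closed forms d/dx log σ(x) = 1 − σ(x) and d/dx log σ(−x) = −σ(x); the learning-rate value is illustrative, not the paper's tuned α:

```python
import numpy as np

def sgd_step(phi_t, phi_c, phi_negs, alpha=0.025):
    """Update target/context/negative embeddings in place for one pair."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    g = sig(phi_t @ phi_c) - 1.0          # positive-pair gradient factor
    grad_t = g * phi_c
    phi_c -= alpha * g * phi_t            # pull context toward target
    for phi_n in phi_negs:
        g_n = sig(phi_t @ phi_n)          # negative-sample gradient factor
        grad_t += g_n * phi_n
        phi_n -= alpha * g_n * phi_t      # push negatives away
    phi_t -= alpha * grad_t
    return phi_t, phi_c, phi_negs
```

After a step, the target–context similarity increases while the target–negative similarity decreases, matching the behaviour described on the previous slide.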
21
5. Method
Relation to Deep WL kernel
• Each subgraph in SG_vocab is obtained using the WL relabeling strategy, and hence represents the WL neighbourhood labels of a node.
• Learning latent representations of such subgraphs therefore amounts to learning representations of WL neighbourhood labels.
• Once the embeddings of all the subgraphs in SG_vocab are learnt using Algorithm 1, one can use them to build the deep learning variant of the WL kernel among the graphs in 𝒢.
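With learned embeddings in hand, the deep-kernel construction can be sketched as K(G, G′) = Φ(G)^T M Φ(G′), with M built from pairwise embedding inner products. The embeddings and per-graph subgraph counts below are toy values chosen for illustration:

```python
import numpy as np

emb = np.array([[1.0, 0.0],   # embedding of subgraph s0
                [0.9, 0.1],   # s1, similar to s0
                [0.0, 1.0]])  # s2, dissimilar to both
M = emb @ emb.T               # substructure-similarity matrix

phi_G  = np.array([2.0, 0.0, 1.0])   # subgraph counts in G
phi_Gp = np.array([0.0, 2.0, 1.0])   # subgraph counts in G'
K = phi_G @ M @ phi_Gp               # deep kernel value K(G, G')
```

Because M credits the similar pair (s0, s1), the two graphs still get a large kernel value even though they share only one subgraph exactly; with M = I the kernel would fall back to plain feature-count agreement.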
22
5. Method
Use cases
• Graph Classification
• Graph Clustering
24
6. Evaluation
Results and Discussion
• Accuracy
• SVMs with subgraph2vec's embeddings achieve better accuracy on 3 datasets and comparable accuracy on the remaining 2 datasets.
25
6. Evaluation
Results and Discussion
• Efficiency
• It is important to note that classification on these benchmark datasets is much simpler than real-world classification tasks.
• Using trivial features such as the number of nodes in the graph achieved accuracies comparable to the SOTA graph kernels.
26
7. Conclusion
Evaluation
• Presented subgraph2vec, an unsupervised representation learning technique to learn embeddings of rooted subgraphs that exist in large graphs.
• Through our large-scale experiments involving benchmark and real-world graph classification and clustering datasets,
• we demonstrate that the subgraph embeddings learnt by our approach can be used in conjunction with classifiers such as CNNs and SVMs, and with relational data clustering algorithms, to achieve sign