NS-CUK Seminar: S.T.Nguyen, Review on "Geom-GCN: Geometric Graph Convolutional Networks", ICLR 2020

- Nguyen Thanh Sang, Network Science Lab, Dept. of Artificial Intelligence, The Catholic University of Korea. E-mail: sang.ngt99@gmail.com. 2023-03-24
- 1 SOTA problems • In a layer of MPNNs, each node sends its feature representation, a “message”, to the nodes in its neighborhood, and then updates its own representation by aggregating all “messages” received from that neighborhood. => The aggregators lose the structural information of nodes within the neighborhood (see the sketch below).
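As a minimal illustration of the kind of layer the slide describes (not the paper's code; the names `mpnn_mean_layer`, `h`, `adj`, and `W` are made up here), a single mean-aggregation message-passing step might look like this. The mean is a multiset operation over neighbor features, so it cannot tell which neighbor contributed which message or how the neighborhood is wired internally.

```python
import numpy as np

def mpnn_mean_layer(h, adj, W):
    """One generic message-passing layer with mean aggregation.

    h   : dict node -> feature vector (np.ndarray)
    adj : dict node -> list of one-hop neighbor nodes
    W   : weight matrix applied after aggregation

    The mean over neighbor features ignores which neighbor sent which
    message, so structural patterns inside the neighborhood are discarded.
    """
    h_next = {}
    for v, neighbors in adj.items():
        if neighbors:
            msg = np.mean([h[u] for u in neighbors], axis=0)
        else:
            msg = np.zeros_like(h[v])
        h_next[v] = np.tanh(W @ msg)
    return h_next

# Toy example: 4-node path graph 0-1-2-3 with 2-d features.
rng = np.random.default_rng(0)
h = {v: rng.normal(size=2) for v in range(4)}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
W = rng.normal(size=(2, 2))
print(mpnn_mean_layer(h, adj, W))
```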
- 2 SOTA problems • In MPNNs, • the neighborhood is defined as the set of all neighbors one hop away, or all neighbors up to r hops away. • After n layers, the feature embeddings of all nodes become similar to one another. • They cannot capture the important features of distant but informative nodes. => The aggregators lack the ability to capture long-range dependencies in disassortative graphs.
- 3 SOTA problems • Several existing works attempt to learn implicit structure-like information to distinguish different neighbors when aggregating features: • Learning weights on “messages” from different neighbors by using attention mechanisms and node and/or edge attributes. • CCN utilizes a covariance architecture to learn structure-aware representations. => What about exploiting the geometric structure directly?
- 4 Contributions • Map a graph to a continuous latent space, and use the geometric relationships defined in that space to build structural neighborhoods for aggregation. • A bi-level aggregator that operates on the structural neighborhoods to update node feature representations in graph neural networks, guaranteeing permutation invariance for graph-structured data. It extracts more structural information from the graph and aggregates feature representations from distant nodes by mapping them into neighborhoods defined in the latent space. • Design particular geometric relationships to build the structural neighborhood in Euclidean and hyperbolic embedding spaces, respectively.
- 5 Geometric Aggregation • Node embedding: a mapping f that sends the nodes of the graph to positions z_v in a latent continuous space. • Structural neighborhood: each node gets two neighborhoods, the graph neighborhood N_g(v) (its adjacent nodes) and the latent-space neighborhood N_s(v) (nodes whose embeddings lie within distance ρ of z_v), together with a geometric operator τ describing the relative position of a neighbor. • Bi-level aggregation: a low-level aggregation within each (neighborhood, relationship) group, followed by a high-level aggregation across the groups and a non-linear transform (see the restatement below).
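The slide's equation images were not preserved; as a reference, the three components can be restated compactly, following the formulation in the Geom-GCN paper (the notation here may differ slightly from the original slides):

```latex
f : v \mapsto z_v \in \mathbb{R}^d \qquad \text{(node embedding)}

N(v) = \bigl(\{N_g(v),\, N_s(v)\},\, \tau\bigr), \quad
N_g(v) = \{u : (u,v) \in E\}, \quad
N_s(v) = \{u \in V : d(z_u, z_v) < \rho\}

e^{v,\,l+1}_{(i,r)} = p\bigl(\{h^{l}_u : u \in N_i(v),\ \tau(z_v, z_u) = r\}\bigr),
\quad \forall i \in \{g, s\},\ \forall r \in R \qquad \text{(low-level)}

m^{l+1}_v = q\Bigl(\bigl[\,(e^{v,\,l+1}_{(i,r)},\,(i,r))\,\bigr]_{i \in \{g,s\},\, r \in R}\Bigr)
\qquad \text{(high-level)}

h^{l+1}_v = \sigma\bigl(W_l \cdot m^{l+1}_v\bigr) \qquad \text{(non-linear transform)}
```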
- 6 Distinguishing non-isomorphic graphs • Mean and maximum aggregators fail to distinguish the two different graphs. • Aggregating over the structural neighborhood, • combined with a different mapping function, makes the two graphs distinguishable.
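As a toy illustration of why mean and max aggregation can fail (this example is constructed here, not taken from the slide's figure): two different neighborhood multisets can collapse to the same aggregated value, so the receiving node gets an identical update in both cases.

```python
import numpy as np

# Two structurally different neighborhoods with the same mean and max:
# node A sees neighbor features {1, 3}; node B sees {1, 1, 3, 3}.
neigh_a = np.array([[1.0], [3.0]])
neigh_b = np.array([[1.0], [1.0], [3.0], [3.0]])

print(neigh_a.mean(axis=0), neigh_b.mean(axis=0))  # both [2.]
print(neigh_a.max(axis=0),  neigh_b.max(axis=0))   # both [3.]
# A plain mean or max aggregator therefore produces identical updates for A and B,
# even though their neighborhoods (and hence the graphs) differ.
```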
- 7 Geom-GCN • The geometric operator τ defines four relationships based on the relative positions of two nodes in a 2-D Euclidean or hyperbolic space (a sketch follows below). • Bi-level aggregation, instantiated with the τ relationships. • Overall bi-level aggregation of Geom-GCN (cf. the formulation restated after slide 5). • Specify embedding methods to create suitable latent spaces in which particular topology patterns (e.g., hierarchy) are preserved: • Isomap is a widely used isometric embedding method, by which distance patterns (lengths of shortest paths) are preserved explicitly in the latent space. • Poincare embedding and struc2vec create latent spaces that preserve hierarchies and local structures of a graph, respectively.
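A minimal sketch of one way to realize a four-relationship operator τ for 2-D Euclidean embeddings. The quadrant rule below is an illustrative choice written here, not necessarily the exact rule from the paper's table, and the hyperbolic variant is not reproduced.

```python
def tau(z_v, z_u):
    """Map the relative position of u w.r.t. v in 2-D to one of four relationships.

    z_v, z_u : (x, y) coordinates in the latent space.
    Returns one of 'upper-left', 'upper-right', 'lower-left', 'lower-right'.
    This quadrant rule is an illustrative assumption, not the paper's exact table.
    """
    dx = z_u[0] - z_v[0]
    dy = z_u[1] - z_v[1]
    horiz = "left" if dx < 0 else "right"
    vert = "upper" if dy >= 0 else "lower"
    return f"{vert}-{horiz}"

# Example: neighbors of v grouped by relationship before low-level aggregation.
z = {"v": (0.0, 0.0), "a": (-1.0, 2.0), "b": (0.5, 0.3), "c": (1.0, -1.0)}
groups = {}
for u in ("a", "b", "c"):
    groups.setdefault(tau(z["v"], z[u]), []).append(u)
print(groups)  # {'upper-left': ['a'], 'upper-right': ['b'], 'lower-right': ['c']}
```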
- 8 Datasets
- 9 Experiments • Geom-GCN achieves state-of-the-art performance.
- 10 Experiments • Geom-GCN with two neighborhoods aggregates more irrelevant “messages” than Geom-GCN with only one neighborhood, and the irrelevant “messages” adversely affect performance.
- 11 Experiments • One can observe that several combinations achieve better performance than Geom-GCN with neighborhoods built from only one embedding space; • There are also many combinations that perform poorly.
- 12 Experiments • Time complexity is very important for graph neural networks because real-world graphs are often very large. • The radial distribution of nodes in the figure indicates that the proposed model learns the graph's hierarchy through the Poincare embedding.
- 13 Conclusions • Tackles the two major weaknesses of existing message-passing neural networks over graphs, namely the loss of: • discriminative structures and • long-range dependencies. • Bridges a discrete graph to a continuous geometric space via graph embedding. • Exploits the principle of convolution: spatial aggregation over a meaningful space.