[paper review] Gyubin Son - Eye in the Sky & 3D human pose estimation in video
1. Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ScatterNet Hybrid Deep Learning Network
https://arxiv.org/abs/1806.00746
2. 3D human pose estimation in video with temporal convolutions and semi-supervised training
https://arxiv.org/abs/1811.11742
[Korean] Neural Architecture Search with Reinforcement Learning - Kiho Suh
Sharing the slides for "Neural Architecture Search with Reinforcement Learning," a paper presentation given at ModuLabs (모두의연구소). The paper is a good illustration of what Google's AutoML, which automates part of the machine-learning development workflow, is trying to achieve.
The paper describes a deep-learning architecture that designs deep-learning architectures. Using 800 GPUs (or, in another setup, 400 CPUs), the method produced networks at or just below the state of the art while being faster and smaller. It is one of the first attempts in the paradigm shift from feature engineering to neural-network engineering.
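The core loop described above, a controller proposing architectures and being rewarded by their validation accuracy, can be sketched as a toy REINFORCE example in NumPy. This is a minimal illustration only: the paper's controller is an RNN that generates full layer specifications, whereas here the "architecture" is a single hypothetical filter-count choice, and the rewards are made-up stand-ins for validation accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy version of the paper's idea: a "controller" keeps logits
# over a few architecture choices, samples one, observes a reward
# (validation accuracy in the paper; a fixed lookup here), and updates the
# logits with REINFORCE so high-reward choices become more likely.
choices = ["filters_32", "filters_64", "filters_128"]
reward_of = {0: 0.70, 1: 0.92, 2: 0.85}   # stand-in for validation accuracy
logits = np.zeros(len(choices))
lr = 0.5

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(200):
    p = softmax(logits)
    a = rng.choice(len(choices), p=p)
    r = reward_of[a]
    # expected reward under the current policy, used as a variance-reducing baseline
    baseline = p @ np.array([reward_of[i] for i in range(len(choices))])
    grad = -p
    grad[a] += 1.0                        # d log p(a) / d logits
    logits += lr * (r - baseline) * grad  # REINFORCE update
```

With the best choice rewarded most, the controller's distribution concentrates on it; at the paper's scale, obtaining each reward meant training a child network, which is why hundreds of GPUs were needed.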
1) The document proposes a graph-based method using graph convolutional networks to address challenges in sequential recommendation, such as extracting implicit preferences from long behavior sequences and adapting to changing user preferences over time.
2) It constructs an interest graph from user behaviors and designs an attentive graph convolutional network and dynamic pooling technique to aggregate implicit signals into explicit preferences.
3) Experimental results on two large-scale datasets show the proposed method significantly outperforms state-of-the-art sequential recommendation methods.
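The "attentive graph convolutional network" step can be sketched as a GAT-style weighted aggregation over the interest graph. This is a hedged illustration, not the paper's layer: the weight shapes, the attention parameterization, and the ReLU are assumptions, and the dynamic pooling step is omitted.

```python
import numpy as np

def attentive_graph_conv(adj, h, W, a):
    """One attentive aggregation step: each node's update is an
    attention-weighted sum of its transformed neighbors."""
    z = h @ W                                   # (n, d') transformed features
    d = z.shape[1]
    # raw attention logits from (source, target) halves of `a`, graph-masked
    logits = (z @ a[:d])[:, None] + (z @ a[d:])[None, :]
    logits = np.where(adj > 0, logits, -1e9)
    alpha = np.exp(logits - logits.max(1, keepdims=True))
    alpha = alpha * (adj > 0)                   # zero out non-edges
    alpha = alpha / np.maximum(alpha.sum(1, keepdims=True), 1e-12)
    return np.maximum(alpha @ z, 0.0)           # ReLU-activated aggregation
```

The attention weights let strong implicit signals (heavily attended neighbors) dominate each node's explicit preference vector.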
This document proposes the NGNN framework to improve the representation power of graph neural networks. NGNN extracts rooted subgraphs around each node and applies a base GNN independently to learn subgraph representations. These are then aggregated to obtain final node representations. The document outlines limitations of existing GNNs, describes the NGNN framework, and poses research questions about its theoretical power, performance improvements over base GNNs, results on benchmarks, and computational overhead. Key experiments are conducted on graph isomorphism, molecular property prediction, and node classification datasets to evaluate NGNN.
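The rooted-subgraph pipeline can be sketched in NumPy as follows. This is a minimal sketch assuming a mean-aggregation base GNN and mean pooling; `k_hop_nodes` and `base_gnn` are illustrative helpers, not the paper's code.

```python
import numpy as np

def k_hop_nodes(adj, root, k):
    """Collect the nodes of the k-hop rooted subgraph around `root` by BFS."""
    frontier = {root}
    visited = {root}
    for _ in range(k):
        frontier = {v for u in frontier for v in np.nonzero(adj[u])[0]} - visited
        visited |= frontier
    return sorted(visited)

def base_gnn(adj, feats, weight, layers=2):
    """A minimal mean-aggregation GNN used as the base model."""
    h = feats
    deg = adj.sum(1, keepdims=True) + 1.0       # +1 for the self-loop
    for _ in range(layers):
        h = np.maximum((adj @ h + h) / deg @ weight, 0.0)  # ReLU(mean-agg @ W)
    return h

def ngnn_node_repr(adj, feats, weight, k=1):
    """NGNN-style: run the base GNN on each rooted subgraph, then pool."""
    reps = []
    for v in range(adj.shape[0]):
        nodes = k_hop_nodes(adj, v, k)
        sub_adj = adj[np.ix_(nodes, nodes)]
        sub_h = base_gnn(sub_adj, feats[nodes], weight)
        reps.append(sub_h.mean(0))              # pool the subgraph to one vector
    return np.stack(reps)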
This document presents a novel Claim-guided Hierarchical Graph Attention Network (ClaHi-GAT) model for rumor detection using undirected interaction graphs. The model uses multi-level attention - post-level attention considers the content of individual tweets, while event-level attention compares tweets responding to the same claim. This allows the model to better capture features indicative of rumors. Experimental results on three Twitter datasets show the proposed model achieves superior performance for rumor classification and early detection compared to previous structure-based methods.
This document presents a novel graph neural network (GNN) convolutional layer based on Auto-Regressive Moving Average (ARMA) filters. The ARMA layer aims to address limitations of existing GNN layers that use polynomial filters by providing a more flexible frequency response with fewer parameters. It models graph signals using parallel stacks of recurrent operations to approximate high-order neighborhoods efficiently. Experimental results show the ARMA layer outperforms other GNN architectures on tasks like node classification, graph signal classification, and graph regression. Future work could explore incorporating text and content metadata into graph convolutional models.
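The parallel recurrent stacks can be sketched as below. This is a hedged NumPy illustration of the recursion only; the weight shapes, the ReLU nonlinearity, and the propagation matrix are illustrative assumptions rather than the paper's exact layer.

```python
import numpy as np

def sym_norm_adj(adj):
    """Symmetrically normalized adjacency D^{-1/2} A D^{-1/2} used for propagation."""
    d = adj.sum(1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def arma_layer(adj, x, Ws, Vs, n_iters=3):
    """One ARMA_K layer: K parallel recurrent stacks, averaged at the end.

    Each stack iterates  x_k <- relu(M @ x_k @ W + x @ V), approximating a
    rational (ARMA) filter response with few parameters."""
    M = sym_norm_adj(adj)
    outs = []
    for W, V in zip(Ws, Vs):
        xk = x @ V                       # skip connection initializes the stack
        for _ in range(n_iters):
            xk = np.maximum(M @ xk @ W + x @ V, 0.0)
        outs.append(xk)
    return np.mean(outs, axis=0)
```

Because the recursion reuses one small `W` per stack instead of one weight per polynomial order, high-order neighborhoods are reached with far fewer parameters than a polynomial filter of the same reach.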
The document proposes a novel graph transformer model called DeepGraph. DeepGraph uses substructure sampling to encode local graph information and add substructure tokens. It applies localized self-attention on substructures using a mask. The document experiments with DeepGraph on various graph datasets and analyzes its performance as its depth increases. Deeper models show diminishing returns, indicating a limitation of increasing depth in graph transformers.
This document summarizes a research paper on Graph Multiset Pooling, a new method for graph pooling using a Graph Multiset Transformer (GMT). The GMT treats graph pooling as a multiset encoding problem and uses multi-head attention to capture relationships among nodes. It satisfies the injectiveness and permutation invariance properties needed to be as powerful as the Weisfeiler-Lehman graph isomorphism test. Experimental results show the GMT outperforms other pooling methods on tasks like graph classification, reconstruction, and generation. The GMT provides a powerful and efficient way to learn meaningful representations of entire graphs.
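The multiset-encoding view can be sketched as attention pooling with learnable seed queries. This is a simplified single-head illustration; the real GMT stacks several multi-head attention blocks.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(node_feats, seeds, Wq, Wk, Wv):
    """Multiset-style attention pooling: k learnable seed vectors attend
    over the node set, yielding a fixed-size graph summary."""
    Q = seeds @ Wq                                    # (k, dh) queries
    K = node_feats @ Wk                               # (n, dh) keys
    V = node_feats @ Wv                               # (n, dh) values
    scores = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # (k, n) attention
    return scores @ V                                 # (k, dh): one vector per seed
```

Because the softmax-weighted sum ranges over the whole node set, reordering the nodes leaves the output unchanged, which is the permutation-invariance property the paper requires.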
This document summarizes a research paper that introduces Hyperbolic Graph Convolutional Networks (HGCNs) to address limitations of previous Euclidean graph neural networks. HGCNs map node features to hyperbolic spaces and use a novel attention-based aggregation scheme to capture hierarchical structure. The paper presents HGCNs, evaluates them on citation networks, disease propagation trees, protein networks and flight networks, and finds they outperform Euclidean baselines for link prediction and node classification by learning more interpretable hierarchical representations.
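The hyperbolic mapping can be sketched via the exponential and logarithmic maps at the origin of the Poincaré ball. This is a minimal sketch of the mapping step only; HGCN's actual layers also use curvature-aware linear maps and attention, omitted here.

```python
import numpy as np

def exp_map_zero(v, c=1.0):
    """Exponential map at the origin: takes a Euclidean (tangent) vector
    to a point inside the Poincare ball of curvature -c."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True).clip(1e-12)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def log_map_zero(x, c=1.0):
    """Inverse of exp_map_zero: maps a ball point back to the tangent space."""
    norm = np.linalg.norm(x, axis=-1, keepdims=True).clip(1e-12)
    return np.arctanh(np.sqrt(c) * norm) * x / (np.sqrt(c) * norm)
```

A common trick in HGCN-style models is to aggregate in the tangent space, i.e. `exp_map_zero(aggregate(log_map_zero(x)))`, so that ordinary Euclidean operations can be reused while the representation stays hyperbolic.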
This document presents Graphormer, a Transformer-based model for graph representation learning. Graphormer achieves state-of-the-art performance on graph tasks by introducing three novel encodings: centrality encoding to capture node importance, spatial encoding to encode structural relations between nodes via shortest-path distance, and edge encoding to incorporate edge features. Experiments show Graphormer outperforms GNN baselines by over 10% on various graph datasets and leaderboards.
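The spatial encoding can be sketched as a learned scalar bias, indexed by shortest-path distance, added to the attention logits. This is an illustrative single-head sketch: `bias_table` and the BFS helper are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

def shortest_path_matrix(adj):
    """All-pairs shortest-path hop counts via repeated BFS (unreachable -> n+1)."""
    n = adj.shape[0]
    dist = np.full((n, n), n + 1)
    for s in range(n):
        dist[s, s] = 0
        frontier, seen, d = [s], {s}, 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in np.nonzero(adj[u])[0]:
                    if v not in seen:
                        seen.add(v)
                        dist[s, v] = d
                        nxt.append(v)
            frontier = nxt
    return dist

def attention_with_spatial_bias(x, Wq, Wk, dist, bias_table):
    """Self-attention scores plus a learned scalar bias b[spd(i, j)]:
    the Graphormer-style spatial encoding."""
    Q, K = x @ Wq, x @ Wk
    bias = bias_table[np.minimum(dist, len(bias_table) - 1)]  # clamp far pairs
    scores = Q @ K.T / np.sqrt(K.shape[1]) + bias
    scores -= scores.max(axis=-1, keepdims=True)
    e = np.exp(scores)
    return e / e.sum(axis=-1, keepdims=True)                  # attention weights
```

The bias lets every node attend to every other node while still telling the model how structurally close each pair is, which plain Transformers over node sets cannot do.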
The 2nd NS-CUK Weekly Seminar
Presenter: Sang Thanh Nguyen
Date: Mar 6th, 2023
Topic: Review on "DeeperGCN: All You Need to Train Deeper GCNs," arXiv
Schedule: https://nslab-cuk.github.io/seminar/
The 2nd NS-CUK Weekly Seminar
Presenter: Van Thuy Hoang
Date: Mar 6th, 2023
Topic: Review on "Everything is Connected: Graph Neural Networks," Current Opinion in Structural Biology
Schedule: https://nslab-cuk.github.io/seminar/