The 2nd NS-CUK Weekly Seminar
Presenter: Van Thuy Hoang
Date: Mar 6th, 2023
Topic: Review on "Everything is Connected: Graph Neural Networks," Current Opinion in Structural Biology
Schedule: https://nslab-cuk.github.io/seminar/
This document provides an overview of graph neural networks for node classification. It discusses supervised graph neural network approaches like graph convolutional networks (GCN) and graph attention networks. It also covers unsupervised approaches like variational graph auto-encoders and deep graph infomax. Additionally, it discusses general frameworks for graph neural networks like neural message passing networks and issues like over-smoothing when GNNs become too deep.
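The GCN propagation rule summarized above can be sketched in a few lines. This is our own illustrative NumPy code (names and toy inputs are ours, not from the slides), showing one layer of the standard update H' = ReLU(D^-1/2 (A+I) D^-1/2 H W):

```python
import numpy as np

# Minimal sketch of one GCN propagation step (illustrative only).
def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU

# Toy 3-node path graph, 2-d node features, 2 output channels.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)[:, :2]
W = np.eye(2)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Each node's new representation mixes its own features with its neighbors', which is also why stacking many such layers leads to the over-smoothing issue mentioned above.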
This document summarizes a research paper on hypergraph neural networks. It introduces hypergraphs as a generalization of graphs that allows edges to connect any number of vertices. It then discusses how traditional graph neural networks are limited by only modeling pairwise connections. The paper proposes a hypergraph neural networks framework that uses hypergraph structures to better formulate complex data correlations. Key contributions include a hyperedge convolution operation and experiments showing the framework outperforms traditional graph neural networks on citation network classification and visual object classification tasks by capturing multi-modal data representations.
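The hyperedge convolution can be pictured through the node-hyperedge incidence matrix. The sketch below is a simplified, unweighted variant of that idea in NumPy (our own illustration, not the paper's exact formulation):

```python
import numpy as np

# Sketch of hyperedge convolution via an incidence matrix M
# (rows = nodes, cols = hyperedges); simplified and unweighted.
def hypergraph_conv(M, X, Theta):
    Dv = M.sum(axis=1)                     # node degrees
    De = M.sum(axis=0)                     # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(Dv))
    De_inv = np.diag(1.0 / De)
    A = Dv_is @ M @ De_inv @ M.T @ Dv_is   # node-edge-node propagation
    return np.maximum(A @ X @ Theta, 0.0)

# 4 nodes, 2 hyperedges: e0 = {0, 1, 2}, e1 = {2, 3}.
M = np.array([[1., 0.],
              [1., 0.],
              [1., 1.],
              [0., 1.]])
X = np.eye(4)
Theta = np.eye(4)
out = hypergraph_conv(M, X, Theta)
print(out.shape)  # (4, 4)
```

Because e0 connects three vertices at once, a single propagation step already shares information among all of them, which a pairwise edge cannot do.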
Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks, arXiv e-... (ssuser2624f71)
This document summarizes research on k-dimensional graph neural networks (k-GNNs), which are a generalization of graph neural networks (GNNs) based on the k-dimensional Weisfeiler-Leman graph isomorphism test. It presents the theoretical basis for k-GNNs, describes the k-GNN model and a hierarchical variant, and reports the results of experimental studies comparing k-GNNs to GNNs and kernel methods on several benchmark datasets. The research found that k-GNNs outperformed GNNs and were able to match the performance of kernel methods, demonstrating their ability to learn graph properties beyond what GNNs can represent.
Slides for a talk about Graph Neural Networks architectures, overview taken from very good paper by Zonghan Wu et al. (https://arxiv.org/pdf/1901.00596.pdf)
Exploring Randomly Wired Neural Networks for Image Recognition (Yongsu Baek)
The document discusses exploring randomly wired neural networks for image recognition. It introduces randomly wired neural networks as a new approach to neural architecture search. Random network generators are used to stochastically sample network topologies. Experiments show that randomly wired networks can achieve competitive accuracy to hand-designed and NAS networks on ImageNet classification, using fewer resources than typical NAS. The authors hope further exploring network generator designs will yield more powerful network topologies.
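A toy version of such a random network generator can be written with the standard library alone. The sketch below mirrors the paper's idea only loosely (the details are our own): sample edges at random over ordered nodes so the wiring is acyclic by construction.

```python
import random

# Erdos-Renyi-style generator for a random DAG over n ordered nodes:
# each candidate edge (i, j) with i < j is kept with probability p.
def random_dag(n, p, seed=0):
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = random_dag(6, 0.4, seed=1)
# Every edge respects the node ordering, so the graph is acyclic.
assert all(i < j for i, j in edges)
print(len(edges), "edges sampled")
```

In the paper's setting, nodes with no incoming edge become inputs and nodes with no outgoing edge become outputs, turning each sampled graph into a trainable network.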
240401_Thuy_Labseminar[Train Once and Explain Everywhere: Pre-training Interp... (thanhdowork)
The document describes a graph convolutional network (GCN) model that aims to be interpretable and generalizable across different graph datasets. It uses a pre-training process on synthetic graphs to learn universal structural patterns, with a structural pattern learning module to capture these patterns and a hypergraph refining module to produce explanations that incorporate local structural interactions. The model is shown to outperform comparable methods on graph interpretation tasks without requiring retraining on each downstream dataset.
240415_Thuy_Labseminar[Simple and Asymmetric Graph Contrastive Learning witho... (thanhdowork)
1. GraphACL is a self-supervised contrastive learning method for graph-structured data that aims to capture both one-hop neighborhood context and two-hop monophily without relying on homophily assumptions.
2. It introduces an additional predict objective to encourage the encoder to learn representations that can predict neighboring node features, implicitly capturing neighborhood context.
3. GraphACL minimizes an upper bound on a contrastive loss to push node representations away from each other and avoid collapsed representations. It performs well on both heterophilic and homophilic graphs for node classification.
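GraphACL's exact objective is given in the paper; as a stand-in, the generic InfoNCE-style contrastive loss it builds on can be sketched as follows (our own NumPy illustration, with positives standing in for neighboring nodes):

```python
import numpy as np

# Generic InfoNCE-style contrastive loss over node representations:
# pull each node toward its positive and push it from all other nodes.
def info_nce(Z, pos_idx, tau=0.5):
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit-normalize
    sim = Z @ Z.T / tau                               # pairwise similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    logits = sim - sim.max(axis=1, keepdims=True)     # stable softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(Z)), pos_idx].mean()

Z = np.random.default_rng(0).normal(size=(5, 8))      # 5 nodes, 8-d reps
loss = info_nce(Z, pos_idx=np.array([1, 0, 3, 2, 0]))
print(float(loss))
```

The denominator over all other nodes is what pushes representations apart and prevents the collapsed solutions mentioned above.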
A Generalization of Transformer Networks to Graphs.pptx (ssuser2624f71)
This document summarizes a research paper on Graph Transformers, which generalizes transformer networks to graph-structured data. It introduces the Graph Transformer model, which addresses two key challenges of applying transformers to graphs: sparsity and positional encodings. The model uses Laplacian eigenvectors to encode node positions and handles sparsity through restricted self-attention. Experiments show the Graph Transformer outperforms GNN baselines on molecular property prediction and node classification tasks. Future work may explore efficient training on large graphs and heterogeneous domains.
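The Laplacian positional encoding mentioned above can be sketched directly: the eigenvectors of L = D - A with the smallest nonzero eigenvalues give each node a low-dimensional position (our own illustrative code; sign conventions of eigenvectors are arbitrary).

```python
import numpy as np

# Laplacian positional encodings (sketch): k eigenvectors of L = D - A
# with smallest nonzero eigenvalues, one k-dim position per node.
def laplacian_pe(A, k):
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]             # skip the constant eigenvector

# 4-node cycle graph.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
pe = laplacian_pe(A, k=2)
print(pe.shape)  # (4, 2)
```

These positions play the role that sinusoidal encodings play in sequence transformers, since graphs have no canonical node ordering.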
This document proposes a Graph Pointer Neural Network (GPNN) model to address challenges in learning representations for heterophilic graphs where neighboring nodes often have different labels. The GPNN uses a pointer network to select the most relevant nodes from multi-hop neighborhoods, filters out irrelevant nodes, and applies 1D convolution to extract high-level features from the ranked node sequence. Experiments on heterophilic graphs show the GPNN significantly outperforms state-of-the-art baselines. The model helps alleviate over-smoothing issues and better captures signals from distant nodes.
Towards Deep Attention in Graph Neural Networks: Problems and Remedies.pptx (ssuser2624f71)
This document discusses graph convolutional networks (GCNs) and graph attention networks (GATs). It proposes a new method called AERO-GNN that uses cumulative attention across layers to allow GATs to remain expressive in deep layers. The method assigns different importance weights to nodes at different hop distances using hop attention. Experiments on node classification benchmarks show AERO-GNN outperforms other GAT baselines.
Lecture conducted by me on Deep Learning concepts and applications. Discussed FNNs, CNNs, Simple RNNs and LSTM Networks in detail. Finally conducted a hands-on session on deep-learning using Keras and scikit-learn.
This document summarizes a student course project on Kervolutional Neural Networks. The students implemented Kervolutional layers using PyTorch to introduce non-linearity, which improved performance over standard CNNs. Their models achieved 98.4% accuracy on MNIST using polynomial Kervolution, outperforming a CNN. Gaussian Kervolution implicitly maps inputs to an infinite-dimensional feature space. Results showed Kervolution can significantly improve CNN performance on common datasets like MNIST and CIFAR by replacing convolutional layers. Future work includes exploring Kervolution in more architectures with extensive hyperparameter searches.
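Polynomial kervolution replaces the convolution's inner product with a polynomial kernel applied inside the sliding window. The sketch below is our own illustration; the constants c and p are illustrative choices, not the project's actual hyperparameters.

```python
import numpy as np

# Polynomial "kervolution": k(x, w) = (<x, w> + c)^p inside each window,
# adding non-linearity to the sliding-window operation itself.
def kerv2d(x, w, c=1.0, p=2):
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw]
            out[i, j] = ((patch * w).sum() + c) ** p  # polynomial kernel
    return out

x = np.ones((3, 3))
w = np.ones((2, 2))
y = kerv2d(x, w)
print(y)  # each entry is (4 + 1)^2 = 25
```

With p = 1 and c = 0 this reduces to ordinary convolution, which is why such layers can drop into existing CNN architectures.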
NS-CUK Joint Journal Club: S.T.Nguyen, Review on "Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs," NeurIPS 2022 (ssuser4b1f48)
This document proposes a new framework called neural sheaf diffusion for graph representation learning. It uses cellular sheaf theory to provide a topological perspective on heterophily and oversmoothing in graph neural networks (GNNs). The framework learns the underlying sheaf structure of a graph, which determines how information diffuses and is connected to heterophily and oversmoothing. Experimental results show the proposed models achieve competitive performance on heterophilic graphs and are within 1% of top models on homophilic graphs. Learning sheaf structures provides a novel way for GNNs to evolve graph geometry as well as node features.
Brains rely on spiking neural networks for ultra-low-power information processing. Building artificial intelligence with similar efficiency requires learning algorithms to instantiate complex spiking neural networks and brain-inspired neuromorphic hardware to emulate them efficiently. Toward this end, I will briefly introduce surrogate gradients as a general framework for training spiking neural networks and showcase their robustness and self-calibration capabilities on analog neuromorphic hardware. Drawing further inspiration from biology, I will discuss the impact of homeostatic plasticity and network initialization in the excitatory-inhibitory balanced regime on deep spiking neural network training. Finally, I will show how approximations relate surrogate gradients to biologically plausible online learning rules with a minor impact on their effectiveness.
This presentation is Part 2 of my September Lisp NYC presentation on Reinforcement Learning and Artificial Neural Nets. We will continue from where we left off by covering Convolutional Neural Nets (CNN) and Recurrent Neural Nets (RNN) in depth.
Time permitting I also plan on having a few slides on each of the following topics:
1. Generative Adversarial Networks (GANs)
2. Differentiable Neural Computers (DNCs)
3. Deep Reinforcement Learning (DRL)
Some code examples will be provided in Clojure.
After a very brief recap of Part 1 (ANN & RL), we will jump right into CNNs and their suitability for image recognition. We will start by covering the convolution operator, then feature maps and pooling operations, and finally the LeNet-5 architecture. The MNIST data will be used to illustrate a fully functioning CNN.
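The convolution operator covered above can be sketched directly. This is a plain NumPy illustration of a 2-D convolution (technically cross-correlation, as implemented in most deep learning frameworks), with no padding and stride 1:

```python
import numpy as np

# 2-D convolution (cross-correlation) of an input x with kernel k,
# no padding, stride 1.
def conv2d(x, k):
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1., 0.], [0., -1.]])   # simple diagonal-difference kernel
y = conv2d(x, k)
print(y.shape)  # (3, 3)
```

Sliding one small kernel over the whole input is what gives CNNs their weight sharing, and each output channel of such an operation is one feature map.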
Next we cover Recurrent Neural Nets in depth and describe how they have been used in Natural Language Processing. We will explain why gated networks and LSTM are used in practice.
Please note that some exposure or familiarity with Gradient Descent and Backpropagation will be assumed. These are covered in the first part of the talk for which both video and slides are available online.
A lot of material will be drawn from the Deep Learning book by Goodfellow, Bengio, and Courville, Michael Nielsen's online book Neural Networks and Deep Learning, and several other online resources.
Bio
Pierre de Lacaze has over 20 years of industry experience with AI and Lisp-based technologies. He holds a Bachelor of Science in Applied Mathematics and a Master's Degree in Computer Science.
https://www.linkedin.com/in/pierre-de-lacaze-b11026b/
This document provides an outline for a presentation on convolutional neural networks on graphs. It begins with a brief history of deep learning and discusses how convolutional neural networks leverage the compositional and hierarchical nature of data like images. It then introduces spectral graph theory and defines key concepts like graphs, graph operators, and the graph Laplacian that are necessary to extend convolutional networks to non-Euclidean graph-structured data. The outline concludes by describing different approaches to defining graph convolutional networks and their applications.
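The graph operators named in the outline can be built for a toy graph in a few lines. This is our own illustrative NumPy code showing the adjacency matrix A, degree matrix D, and the combinatorial graph Laplacian L = D - A, along with the properties that make L useful for spectral methods:

```python
import numpy as np

# Basic graph operators: adjacency A, degree D, Laplacian L = D - A.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
D = np.diag(A.sum(axis=1))
L = D - A

# L is symmetric, its rows sum to zero, and it is positive semidefinite.
print(L)
assert np.allclose(L, L.T)
assert np.allclose(L.sum(axis=1), 0.0)
assert np.all(np.linalg.eigvalsh(L) >= -1e-9)
```

The eigendecomposition of L defines a graph Fourier basis, which is the starting point for the spectral graph convolutions the outline describes.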
The MNIST dataset is a classic benchmark dataset in the field of machine learning and computer vision. It consists of 28x28 grayscale images of handwritten digits (0-9) along with their corresponding labels. The goal of this project is to build and train a deep learning model that can accurately classify these handwritten digits.
PR-155: Exploring Randomly Wired Neural Networks for Image Recognition (Jinwon Lee)
The document discusses exploring randomly wired neural networks for image recognition. The authors define network generators based on random graph models like Erdos-Renyi, Barabasi-Albert, and Watts-Strogatz. These generators produce neural networks with random connectivity patterns. Experiments show that networks generated this way achieve competitive accuracy compared to hand-designed architectures, demonstrating that the generator design is important. The random wiring patterns provide performance comparable to networks from neural architecture search with fewer parameters and computations.
This document summarizes a research paper that introduces Hyperbolic Graph Convolutional Networks (HGCNs) to address limitations of previous Euclidean graph neural networks. HGCNs map node features to hyperbolic spaces and use a novel attention-based aggregation scheme to capture hierarchical structure. The paper presents HGCNs, evaluates them on citation networks, disease propagation trees, protein networks and flight networks, and finds they outperform Euclidean baselines for link prediction and node classification by learning more interpretable hierarchical representations.
The 1st NS-CUK Weekly Seminar
Presenter: Van Thuy Hoang
Date: Feb 27th, 2023
Topic: Review on "Graph Neural Networks Go Forward-Forward," arXiv
Schedule: https://nslab-cuk.github.io/seminar/
NS-CUK Seminar: H.E.Lee, Review on "Structural Deep Embedding for Hyper-Netw... (ssuser4b1f48)
This document presents a Deep Hyper-Network Embedding (DHNE) model to learn low-dimensional representations of hypernetworks. The model uses an autoencoder with a fully connected layer to preserve both local and global proximity in the embedding space, and it preserves the indecomposability of hyperedges through a nonlinear tuple-wise similarity function. Tested on four datasets, DHNE outperforms other network embedding methods on network reconstruction, link prediction, and classification tasks.
A comprehensive tutorial on Convolutional Neural Networks (CNN) which talks about the motivation behind CNNs and Deep Learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory involved with the different variants used in practice and also, gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of CNNs is demonstrated by implementing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Levi and Hassner (2015).
1) The document proposes a graph-based method using graph convolutional networks to address challenges in sequential recommendation, such as extracting implicit preferences from long behavior sequences and adapting to changing user preferences over time.
2) It constructs an interest graph from user behaviors and designs an attentive graph convolutional network and dynamic pooling technique to aggregate implicit signals into explicit preferences.
3) Experimental results on two large-scale datasets show the proposed method significantly outperforms state-of-the-art sequential recommendation methods.
Similar to NS-CUK Seminar: V.T.Hoang, Review on "Everything is Connected: Graph Neural Networks," Current Opinion in Structural Biology, Mar 6th, 2023 (20)
This document proposes the NGNN framework to improve the representation power of graph neural networks. NGNN extracts rooted subgraphs around each node and applies a base GNN independently to learn subgraph representations. These are then aggregated to obtain final node representations. The document outlines limitations of existing GNNs, describes the NGNN framework, and poses research questions about its theoretical power, performance improvements over base GNNs, results on benchmarks, and computational overhead. Key experiments are conducted on graph isomorphism, molecular property prediction, and node classification datasets to evaluate NGNN.
This document presents a novel Claim-guided Hierarchical Graph Attention Network (ClaHi-GAT) model for rumor detection using undirected interaction graphs. The model uses multi-level attention - post-level attention considers the content of individual tweets, while event-level attention compares tweets responding to the same claim. This allows the model to better capture features indicative of rumors. Experimental results on three Twitter datasets show the proposed model achieves superior performance for rumor classification and early detection compared to previous structure-based methods.
This document presents a novel graph neural network (GNN) convolutional layer based on Auto-Regressive Moving Average (ARMA) filters. The ARMA layer aims to address limitations of existing GNN layers that use polynomial filters by providing a more flexible frequency response with fewer parameters. It models graph signals using parallel stacks of recurrent operations to approximate high-order neighborhoods efficiently. Experimental results show the ARMA layer outperforms other GNN architectures on tasks like node classification, graph signal classification, and graph regression. Future work could explore incorporating text and content metadata into graph convolutional models.
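The recurrent update behind one ARMA stack can be sketched as follows. This is our own simplified NumPy illustration with a linear activation; iterating the update approximates a rational (ARMA) frequency response rather than the polynomial response of standard GNN filters:

```python
import numpy as np

# One ARMA stack as a recurrent update (linear-activation sketch):
#   X_{t+1} = M @ X_t @ W + X0 @ V
# where M is a normalized adjacency-like propagation operator.
def arma_stack(M, X0, W, V, iters=8):
    X = X0
    for _ in range(iters):
        X = M @ X @ W + X0 @ V
    return X

n = 4
A = np.ones((n, n)) - np.eye(n)           # complete graph on 4 nodes
M = A / A.sum(axis=1, keepdims=True)      # row-normalized propagation
X0 = np.eye(n)
W = 0.5 * np.eye(n)                       # contraction keeps iteration stable
V = np.eye(n)
X = arma_stack(M, X0, W, V)
print(X.shape)  # (4, 4)
```

The X0 @ V term re-injects the original signal at every step, which is what lets a few cheap iterations reach high-order neighborhoods without a high-order polynomial.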
The document proposes a novel graph transformer model called DeepGraph. DeepGraph uses substructure sampling to encode local graph information and add substructure tokens. It applies localized self-attention on substructures using a mask. The document experiments with DeepGraph on various graph datasets and analyzes its performance as its depth increases. Deeper models show diminishing returns, indicating a limitation of increasing depth in graph transformers.
This document summarizes a research paper on Graph Multiset Pooling, a new method for graph pooling using a Graph Multiset Transformer (GMT). The GMT treats graph pooling as a multiset encoding problem and uses multi-head attention to capture relationships among nodes. It satisfies the injectiveness and permutation invariance properties needed to be as powerful as the Weisfeiler-Lehman graph isomorphism test. Experimental results show the GMT outperforms other pooling methods on tasks like graph classification, reconstruction, and generation. The GMT provides a powerful and efficient way to learn meaningful representations of entire graphs.
This document presents Graphormer, a Transformer-based model for graph representation learning. Graphormer achieves state-of-the-art performance on graph tasks by introducing three novel encodings: centrality encoding to capture node importance, spatial encoding to encode structural relations between nodes, and edge encoding to incorporate edge features. Experiments show Graphormer outperforms GNN baselines by over 10% on various graph datasets and tasks.
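Of the three encodings, the spatial encoding is the easiest to sketch: Graphormer adds a learnable scalar bias, indexed by the shortest-path distance between each node pair, to every attention logit. The helper names below are hypothetical, and the bias table stands in for the learned parameters.

```python
import numpy as np
from collections import deque

def shortest_path_dists(A):
    """All-pairs hop distances via BFS on an unweighted graph."""
    n = A.shape[0]
    D = np.full((n, n), n, dtype=int)  # n acts as an "unreachable" sentinel
    for s in range(n):
        D[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.nonzero(A[u])[0]:
                if D[s, v] > D[s, u] + 1:
                    D[s, v] = D[s, u] + 1
                    q.append(v)
    return D

def spatial_bias(A, bias_table):
    """Graphormer-style spatial encoding (sketch): a learnable scalar,
    looked up by shortest-path distance, added to each attention logit."""
    D = np.minimum(shortest_path_dists(A), len(bias_table) - 1)
    return bias_table[D]  # (n, n) additive attention bias
```

In the full model this bias matrix is added to QKᵀ/√d before the softmax, letting attention heads discount or favour node pairs by structural distance.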
2.
• GFF: graph neural networks are trained greedily, layer by layer, using both positive and negative samples
• This addresses noise from neighbours
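The greedy, layer-local objective can be sketched with a Forward-Forward-style "goodness" score: each layer is trained on its own to make goodness high for positive samples and low for negative ones, so no end-to-end backpropagation is required. This is an illustrative simplification, not the paper's implementation; the threshold θ is a hypothetical parameter.

```python
import numpy as np

def goodness(h):
    """Forward-Forward 'goodness' of a layer's activations: sum of squares."""
    return (h ** 2).sum(axis=-1)

def layer_prob_positive(h, theta=2.0):
    """Probability that a sample is 'positive' under one greedily trained
    layer: sigmoid(goodness - theta). Each layer optimizes this score
    locally, which is what makes layer-by-layer training possible."""
    return 1.0 / (1.0 + np.exp(-(goodness(h) - theta)))
```

Negative samples (e.g. corrupted neighbourhoods) are pushed below the threshold, which is one way such training can suppress noisy neighbour information.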
3.
The fundamentals: Permutation equivariance and invariance
➢ Node feature matrix, X
➢ Adjacency matrix, A
➢ Neighbourhoods of nodes
➢ Node features
➢ Hidden states
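These ingredients build toward the key property: a layer F(X, A) is permutation equivariant if F(PX, PAPᵀ) = P F(X, A) for any permutation matrix P. A minimal numerical check, using a toy mean-over-neighbours layer purely for illustration:

```python
import numpy as np

def mean_agg_layer(X, A):
    """Toy permutation-equivariant layer: average each node's neighbours."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid division by zero
    return (A @ X) / deg

# Equivariance check: F(P X, P A P^T) == P F(X, A)
rng = np.random.default_rng(0)
n, d = 5, 3
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)          # symmetrize
np.fill_diagonal(A, 0)          # no self-loops
P = np.eye(n)[rng.permutation(n)]
lhs = mean_agg_layer(P @ X, P @ A @ P.T)
rhs = P @ mean_agg_layer(X, A)
assert np.allclose(lhs, rhs)
```

An *invariant* function (e.g. a graph-level readout) would instead satisfy F(PX, PAPᵀ) = F(X, A), with no P on the output side.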
4.
Graph Neural Networks
➢ Main idea: pass messages between pairs of nodes and aggregate them
➢ Alternative interpretation: pass messages between nodes to refine node (and possibly edge) representations
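One round of this generic scheme, h_u' = φ(h_u, ⊕_{v∈N(u)} ψ(h_v)), can be sketched with sum aggregation and linear message/update maps. The function and weight names are illustrative, not from any particular library.

```python
import numpy as np

def message_passing_step(H, A, W_msg, W_upd):
    """One round of message passing:
    m_u  = sum_{v in N(u)} psi(h_v)   (messages: here a linear map, summed)
    h_u' = phi(h_u, m_u)              (update: here ReLU of a linear mix)"""
    M = A @ (H @ W_msg)                    # aggregate neighbour messages
    return np.maximum(H @ W_upd + M, 0.0)  # update each node with ReLU
```

Stacking such steps grows each node's receptive field by one hop per layer, which is the "refine representations" reading of message passing.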
8.
Graph Neural Networks
➢ Graph Convolutional Networks (GCNs)
Kipf & Welling (ICLR 2017); related previous works by Duvenaud et al. (NIPS 2015) and Li et al. (ICLR 2016)
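The Kipf & Welling propagation rule is H⁽ˡ⁺¹⁾ = σ(D̂⁻¹ᐟ² Â D̂⁻¹ᐟ² H⁽ˡ⁾ W⁽ˡ⁾), with Â = A + I adding self-loops. A minimal dense NumPy sketch (real implementations use sparse operations):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN layer (Kipf & Welling):
    H' = ReLU(D_hat^{-1/2} A_hat D_hat^{-1/2} H W), A_hat = A + I."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d_hat = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d_hat ** -0.5)     # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

The symmetric normalization keeps the propagation operator's spectrum bounded, which stabilizes training when layers are stacked.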
11.
GNNs without a graph: Deep Sets and Transformers
➢ To reverse-engineer why Transformers appear here, let us consider the NLP perspective.
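The connection can be made concrete: single-head self-attention is attentional message passing on a *complete* graph, where the softmax attention matrix plays the role of a soft, input-dependent adjacency. A NumPy sketch with illustrative weight names:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention viewed as message passing on the
    complete graph: every token attends to every token, and the
    row-stochastic attention matrix acts as a soft adjacency."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T / np.sqrt(K.shape[1])     # scaled dot-product scores
    logits -= logits.max(axis=-1, keepdims=True)
    att = np.exp(logits)
    att /= att.sum(axis=-1, keepdims=True)     # each row sums to 1
    return att @ V, att
```

With no explicit edges, the model effectively *infers* a weighted graph from the data, which is why Transformers can be read as GNNs without a given graph.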
12.
GNNs beyond permutation equivariance: Geometric graphs
➢ We have assumed our graphs to be a discrete, unordered collection of nodes and edges, hence only susceptible to permutation symmetries.
➢ But in many cases, this is not the entire story!
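When nodes carry coordinates (molecules, point clouds), layers should also respect rotations and translations. The simplest E(3)-invariant features are pairwise distances; a small check, with the rotation built from a QR factorization purely for illustration:

```python
import numpy as np

def pairwise_dists(coords):
    """E(3)-invariant features for a geometric graph: pairwise distances
    are unchanged by any rotation or translation of the coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Invariance check under a random orthogonal map and translation
rng = np.random.default_rng(0)
coords = rng.normal(size=(6, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # orthogonal 3x3 matrix
t = rng.normal(size=3)
assert np.allclose(pairwise_dists(coords), pairwise_dists(coords @ Q.T + t))
```

Geometric GNNs build on exactly this kind of invariant (or equivariant) quantity so that predictions do not depend on the arbitrary pose of the input.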