"Sparse Graph Attention Networks", IEEE Transactions on Knowledge and Data En...ssuser2624f71
This document proposes sparse graph attention networks (SGATs), which integrate a sparse attention mechanism into graph attention networks. SGATs simplify GAT architectures by sharing attention coefficients across heads and layers. SGATs can identify and remove noisy edges from graphs while achieving similar or improved accuracy on classification tasks. The proposed method is tested on several graph datasets and is shown to learn more robust representations than GAT, especially on disassortative graphs where GAT fails. Future work involves applying SGATs to detecting adversarial edges and to unsupervised domain adaptation.
LINE: Large-scale Information Network Embedding.pptx (ssuser2624f71)
LINE is a network embedding algorithm that learns distributed representations of nodes in a graph. It aims to preserve both first-order and second-order proximity structures by optimizing an objective function. The algorithm is efficient and can learn embeddings for networks with millions of nodes and billions of edges. Empirical experiments on language, social, and citation networks demonstrate LINE's effectiveness at capturing network structures.
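For reference, the two proximity objectives this summary mentions can be written out as follows; this is a sketch in the LINE paper's notation (u_i is the embedding of vertex v_i, u'_i its context embedding, w_ij the weight of edge (i, j)), and the paper's edge-sampling and negative-sampling optimization is omitted:

```latex
% First-order proximity: joint probability of an observed edge (v_i, v_j)
p_1(v_i, v_j) = \frac{1}{1 + \exp(-\vec{u}_i^{\top} \vec{u}_j)}, \qquad
O_1 = -\sum_{(i,j) \in E} w_{ij} \, \log p_1(v_i, v_j)

% Second-order proximity: distribution over "contexts" v_j of vertex v_i
p_2(v_j \mid v_i) = \frac{\exp({\vec{u}'_j}^{\top} \vec{u}_i)}{\sum_{k=1}^{|V|} \exp({\vec{u}'_k}^{\top} \vec{u}_i)}, \qquad
O_2 = -\sum_{(i,j) \in E} w_{ij} \, \log p_2(v_j \mid v_i)
```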
node2vec: Scalable Feature Learning for Networks.pptx (ssuser2624f71)
Node2Vec is an algorithm for learning continuous feature representations or embeddings of nodes in graphs. It extends traditional graph embedding techniques by leveraging both breadth-first and depth-first search to learn the local and global network structure. The algorithm uses a skip-gram model to maximize the likelihood of preserving neighborhood relationships from random walks on the graph. Learned embeddings have applications in tasks like node classification, link prediction, and graph visualization.
This document summarizes a research paper on sparse graph attention networks (SGATs). SGATs apply an attention mechanism to only a subset of neighbors for each node to improve the scalability and memory efficiency of graph attention networks. The key ideas are a sparse attention mechanism using techniques like neighbor sampling and a binary gate attached to each edge. SGATs show advantages in scalability, memory usage, and performance on disassortative graphs by removing up to 80% of edges while maintaining classification accuracy. Evaluation on synthetic and real-world graphs demonstrates SGATs can identify and remove noisy edges.
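As a rough illustration of the binary edge gates described above, here is a minimal sketch; the function and variable names are illustrative rather than taken from the paper, and the differentiable gate relaxation and L0-style sparsity penalty used for training are omitted:

```python
import numpy as np

def gated_neighborhood_attention(scores, gate_logits, threshold=0.5):
    """Sketch of binary edge gating in the spirit of SGAT: each edge in a
    node's neighborhood carries a learnable gate; edges whose gate
    probability falls below the threshold are pruned, and the attention
    distribution is renormalized over the surviving edges.
    Assumes at least one edge survives the gating."""
    gate_prob = 1.0 / (1.0 + np.exp(-gate_logits))   # sigmoid gate probability
    keep = gate_prob >= threshold                     # hard 0/1 gate at inference
    masked = np.where(keep, scores, -np.inf)          # drop pruned edges
    e = np.exp(masked - masked[keep].max())           # numerically stable softmax
    return e / e.sum()                                # renormalized attention
```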
Image Segmentation Using Deep Learning: A survey (NUPUR YADAV)
1. The document discusses various deep learning models for image segmentation, including fully convolutional networks, encoder-decoder models, multi-scale pyramid networks, and dilated convolutional models.
2. It provides details on popular architectures like U-Net, SegNet, and models from the DeepLab family.
3. The document also reviews datasets commonly used to evaluate image segmentation methods and reports accuracies of different models on the Cityscapes dataset.
240226_Thanh_LabSeminar[Structure-Aware Transformer for Graph Representation ... (thanhdowork)
The document proposes the Structure-Aware Transformer, a new type of graph neural network that incorporates structural information into self-attention. It does this by extracting subgraph representations rooted at each node before computing attention. This allows it to capture structural similarity between nodes better than traditional Transformers. The model achieves state-of-the-art performance on graph classification and property prediction tasks while avoiding over-smoothing and over-squashing issues of message passing networks.
"Sparse Graph Attention Networks", IEEE Transactions on Knowledge and Data En...ssuser2624f71
This document proposes sparse graph attention networks (SGATs) which integrate a sparse attention mechanism into graph attention networks. SGATs simplify GAT architectures by sharing attention coefficients across heads and layers. SGATs can identify and remove noisy edges from graphs to achieve similar or improved accuracy on classification tasks. The proposed method is tested on several graph datasets and is shown to learn more robust representations compared to GAT, especially on disassortative graphs where GAT fails. Future work involves applying SGATs to edge detection against adversarial attacks and unsupervised domain adaptation.
LINE: Large-scale Information Network Embedding.pptxssuser2624f71
LINE is a network embedding algorithm that learns distributed representations of nodes in a graph. It aims to preserve both first-order and second-order proximity structures by optimizing an objective function. The algorithm is efficient and can learn embeddings for networks with millions of nodes and billions of edges. Empirical experiments on language, social, and citation networks demonstrate LINE's effectiveness at capturing network structures.
node2vec: Scalable Feature Learning for Networks.pptxssuser2624f71
Node2Vec is an algorithm for learning continuous feature representations or embeddings of nodes in graphs. It extends traditional graph embedding techniques by leveraging both breadth-first and depth-first search to learn the local and global network structure. The algorithm uses a skip-gram model to maximize the likelihood of preserving neighborhood relationships from random walks on the graph. Learned embeddings have applications in tasks like node classification, link prediction, and graph visualization.
This document summarizes a research paper on sparse graph attention networks (SGATs). SGATs apply an attention mechanism to only a subset of neighbors for each node to improve the scalability and memory efficiency of graph attention networks. The key ideas are a sparse attention mechanism using techniques like neighbor sampling and a binary gate attached to each edge. SGATs show advantages in scalability, memory usage, and performance on disassortative graphs by removing up to 80% of edges while maintaining classification accuracy. Evaluation on synthetic and real-world graphs demonstrates SGATs can identify and remove noisy edges.
Image Segmentation Using Deep Learning : A surveyNUPUR YADAV
1. The document discusses various deep learning models for image segmentation, including fully convolutional networks, encoder-decoder models, multi-scale pyramid networks, and dilated convolutional models.
2. It provides details on popular architectures like U-Net, SegNet, and models from the DeepLab family.
3. The document also reviews datasets commonly used to evaluate image segmentation methods and reports accuracies of different models on the Cityscapes dataset.
240226_Thanh_LabSeminar[Structure-Aware Transformer for Graph Representation ...thanhdowork
The document proposes the Structure-Aware Transformer, a new type of graph neural network that incorporates structural information into self-attention. It does this by extracting subgraph representations rooted at each node before computing attention. This allows it to capture structural similarity between nodes better than traditional Transformers. The model achieves state-of-the-art performance on graph classification and property prediction tasks while avoiding over-smoothing and over-squashing issues of message passing networks.
NS-CUK Seminar: S.T.Nguyen, Review on "Geom-GCN: Geometric Graph Convolutiona... (ssuser4b1f48)
This document presents the Geom-GCN model for graph neural networks. Geom-GCN maps graphs to continuous latent spaces to build structural neighborhoods for message passing aggregation. It uses a bi-level aggregator operating on these structural neighborhoods to update node representations while maintaining permutation invariance. Geom-GCN designs geometric relationships in Euclidean and hyperbolic spaces to define neighborhoods. It achieves state-of-the-art performance on benchmarks while addressing limitations of existing message passing approaches in capturing structure and long-range dependencies.
This document introduces graph attention networks (GATs) for node classification of graph-structured data. GATs use self-attention mechanisms over a node's neighbors to compute hidden representations. The proposed approach achieves state-of-the-art results on four benchmarks, demonstrating the potential of attention models on graphs. GATs are computationally efficient and do not require upfront knowledge of global graph structure.
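To make the attention mechanism concrete, here is a minimal single-head sketch of how GAT computes its attention coefficients (numpy, dense adjacency for readability; names and shapes here are illustrative, and real implementations compute the scores sparsely per edge):

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head GAT attention (sketch).
    H: [N, F] input node features; A: [N, N] 0/1 adjacency with self-loops;
    W: [F_out, F] shared linear transform; a: [2 * F_out] attention vector.
    Computes e_ij = LeakyReLU(a^T [W h_i || W h_j]) for every edge, then
    normalizes with a softmax over each node's neighborhood."""
    Z = H @ W.T                                       # [N, F_out] transformed features
    n = Z.shape[0]
    e = np.full((n, n), -np.inf)                      # -inf masks non-edges
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                s = a @ np.concatenate([Z[i], Z[j]])
                e[i, j] = s if s > 0 else slope * s   # LeakyReLU
    alpha = np.exp(e - e.max(axis=1, keepdims=True))  # softmax over neighbors
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ Z                                  # new node representations
```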
This paper proposes DaViT, a vision transformer architecture that uses both spatial and channel attention to efficiently capture global context. Spatial attention performs local interactions across spatial locations while channel attention captures global representations by attending to all spatial positions across channels. Together, they complement each other to achieve state-of-the-art performance on image classification, semantic segmentation, and object detection tasks, with linear computational complexity scaling to high-resolution inputs.
Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo... (MLconf)
Graph Representation Learning with Deep Embedding Approach:
Graphs are a commonly used data structure for representing real-world relationships, e.g., molecular structures, knowledge graphs, and social and communication networks. The effective encoding of graphical information is essential to the success of such applications. In this talk I’ll first describe a general deep learning framework, namely structure2vec, for end-to-end graph feature representation learning. Then I’ll present direct applications of this model to graph problems at different scales, including community detection and molecular graph classification/regression. We then extend the embedding idea to temporally evolving user-product interaction graphs for recommendation. Finally, I’ll present our latest work on leveraging reinforcement learning techniques for graph combinatorial optimization, including the vertex cover problem for social influence maximization and the traveling salesman problem for scheduling management.
DDGK: Learning Graph Representations for Deep Divergence Graph Kernels (ivaderivader)
This document summarizes a research paper on learning graph representations for deep divergence graph kernels (DDGK). DDGK learns graph representations without supervision or domain knowledge by using a node-to-edges encoder and isomorphism attention. The isomorphism attention provides a bidirectional mapping between nodes in two graphs. DDGK then calculates a divergence score between the source and target graphs as a measure of their (dis)similarity. Experimental results showed DDGK produces representations competitive with other graph kernel baselines. The paper proposes several extensions, including different graph encoders and attention mechanisms, as well as improved regularization and scalability.
A Generalization of Transformer Networks to Graphs.pptx (ssuser2624f71)
This document summarizes a research paper on Graph Transformers, which generalizes transformer networks to graph-structured data. It introduces the Graph Transformer model, which addresses two key challenges of applying transformers to graphs: sparsity and positional encodings. The model uses Laplacian eigenvectors to encode node positions and handles sparsity through restricted self-attention. Experiments show the Graph Transformer outperforms GNN baselines on molecular property prediction and node classification tasks. Future work may explore efficient training on large graphs and heterogeneous domains.
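A minimal sketch of the Laplacian positional encodings mentioned above, assuming a dense symmetric adjacency matrix; the k smallest non-trivial eigenvectors of the normalized Laplacian serve as node position features (their signs are arbitrary, so implementations typically flip them randomly during training):

```python
import numpy as np

def laplacian_pe(A, k):
    """Laplacian positional encodings (sketch).
    A: [N, N] symmetric 0/1 adjacency matrix. Builds the symmetric
    normalized Laplacian L = I - D^(-1/2) A D^(-1/2), then returns the
    eigenvectors of the k smallest non-trivial eigenvalues as [N, k]
    position features for the nodes."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))  # guard isolated nodes
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return vecs[:, 1:k + 1]              # skip the trivial (constant) eigenvector
```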
https://telecombcn-dl.github.io/2017-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
Attentive Relational Networks for Mapping Images to Scene Graphs (Sangmin Woo)
M. Qi, W. Li, Z. Yang, Y. Wang, and J. Luo: Attentive relational networks for mapping images to scene graphs. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
This document presents a novel protocol for developing structure-property linkages for polycrystalline materials. It generates synthetic microstructure datasets and uses finite element simulations to calculate their elastic properties. Principal component analysis is used to reduce the dimensionality of the microstructure representations. Initial regression analyses show promising results in establishing structure-property linkages for elastic response, though further improvement is needed. The approach provides a compact and continuous representation of crystal orientations using generalized spherical harmonics.
Semantic Segmentation on Satellite Imagery (RAHUL BHOJWANI)
This is an Image Semantic Segmentation project targeted on Satellite Imagery. The goal was to detect the pixel-wise segmentation map for various objects in Satellite Imagery including buildings, water bodies, roads etc. The data for this was taken from the Kaggle competition <https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection>.
We implemented the FCN, U-Net, and SegNet deep learning architectures for this task.
V2 final presentation 08-12-2014 (akash gupta's conflicted copy 2014-12-08) (0309akash)
This document presents a novel protocol for developing structure-property linkages for polycrystalline materials. The protocol uses generalized spherical harmonics (GSH) basis functions to provide a continuous and compact representation of crystal orientations, rather than discrete binning. Principal component analysis is used to reduce the dimensionality of synthetic microstructure datasets. Preliminary regression analysis shows the structure-property linkages do not yet accurately predict elastic properties from microstructure, and further improvement is needed. The goal is to extend the linkages to capture plastic material response.
NS-CUK Seminar: S.T.Nguyen, Review on "Are More Layers Beneficial to Graph Tr... (ssuser4b1f48)
This document summarizes a research paper on improving the depth of graph transformer models using local attention to graph substructures. The paper proposes adding substructure tokens to the graph and applying local attention between each substructure and its nodes. This addresses limitations in graph transformers' ability to learn substructure features as depth increases. The model achieves state-of-the-art results on graph benchmarks, with performance continuing to improve as depth increases up to 48 layers, demonstrating it alleviates problems of shrinking attention capacity with depth. Ablation studies show local attention and substructure encoding are important for the model's performance, especially on deeper models and datasets where specific substructures are key features.
This document presents a novel protocol for developing structure-property linkages for polycrystalline materials. It generates synthetic microstructure datasets and uses finite element simulations to calculate their mechanical properties. Principal component analysis is used to reduce the dimensionality of the microstructure representations. Initial regression analyses show promising results in establishing structure-property linkages for elastic response, though further improvement is needed. The approach provides a compact and continuous representation of crystal orientations using generalized spherical harmonics.
240325_JW_labseminar[node2vec: Scalable Feature Learning for Networks].pptx (thanhdowork)
This document describes the node2vec algorithm for feature learning in networks. Node2vec uses random walks to sample the neighborhood of nodes in a network. It learns feature representations that maximize the likelihood of preserving network neighborhoods in a low-dimensional space. The algorithm introduces two parameters, p and q, that allow it to flexibly explore node neighborhoods. Experiments on real-world networks show node2vec produces high quality feature representations that achieve strong performance on tasks like multi-label classification and link prediction.
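The role of p and q is easiest to see in a single step of the biased walk; a minimal sketch under illustrative names (the reference implementation precomputes alias tables for efficiency):

```python
import random

def node2vec_step(prev, curr, adj, p=1.0, q=1.0):
    """One step of the node2vec biased random walk (sketch).
    adj: dict mapping each node to a list of its neighbors.
    A candidate x is weighted 1/p if it returns to the previous node,
    1 if x is also a neighbor of prev (distance 1 from prev), and 1/q
    otherwise (moving outward); q < 1 biases the walk outward (DFS-like),
    while q > 1 keeps it local (BFS-like)."""
    candidates = adj[curr]
    weights = []
    for x in candidates:
        if x == prev:
            weights.append(1.0 / p)    # return step
        elif x in adj[prev]:
            weights.append(1.0)        # stays close to prev
        else:
            weights.append(1.0 / q)    # explores outward
    return random.choices(candidates, weights=weights, k=1)[0]
```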
InternImage: Exploring Large-Scale Vision Foundation Models with Deformable C... (taeseon ryu)
Compared with the recent progress of large vision transformers (ViTs), large-scale models based on convolutional neural networks (CNNs) are still at an early stage. This work proposes InternImage, a new large-scale CNN-based model that can obtain the same kind of gains from scaling up parameters and training data as ViTs. Unlike recent CNNs that focus on large dense kernels, InternImage takes deformable convolution as its core operator, so the model not only has the large effective receptive field required for downstream tasks such as detection and segmentation, but also performs adaptive spatial aggregation conditioned on the input and task information. As a result, InternImage reduces the strict inductive bias of traditional CNNs and makes it possible to learn stronger, more robust patterns from massive data with large-scale parameters, as ViTs do. The model's effectiveness is demonstrated on challenging benchmarks including ImageNet, COCO, and ADE20K. InternImage-H sets new records of 65.4 mAP on COCO test-dev and 62.9 mIoU on ADE20K, surpassing the current best CNNs and ViTs.
Similar to NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for Graph Classification", ICDM 2021
NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca... (ssuser4b1f48)
This document presents GOAT, a scalable global transformer model for graph-structured data. GOAT uses a novel local attention module to absorb rich local information from node neighborhoods, in addition to a global attention mechanism that allows each node to attend to all other nodes. The document reports that GOAT achieves strong performance on large-scale homophilous and heterophilous node classification benchmarks, demonstrating its ability to leverage both local and global graph information for prediction tasks. Ablation studies on codebook size further indicate GOAT's effectiveness at modeling long-range interactions through its global attention.
NS-CUK Seminar: H.B.Kim, Review on "Cluster-GCN: An Efficient Algorithm for ... (ssuser4b1f48)
This document summarizes the Cluster-GCN method for training graph convolutional networks (GCNs) in a memory-efficient and scalable way. The key contributions of Cluster-GCN are that it achieves the best memory usage for training GCNs on large graphs, especially deep GCNs, while maintaining training speed comparable to or faster than existing methods. Experimental results demonstrate that Cluster-GCN can efficiently train very deep GCNs on large graphs and achieve state-of-the-art performance.
This document summarizes a research paper on Gated Graph Sequence Neural Networks (GGSNN). GGSNN is a model that incorporates time dependencies and higher-order relationships in graphs using GRU-based methods. It generates an output sequence to allow for graph-level analysis. The model can be used for a wide range of tasks, including reasoning over logical formulas. It uses GRUs trained with backpropagation through time to compute gradients, allowing it to capture long-term dependencies between output time steps. Node representations in GGSNN can be updated over time using label data, unlike previous graph neural networks.
NS-CUK Journal club: H.E.Lee, Review on "A biomedical knowledge graph-based ... (ssuser4b1f48)
1) The document proposes a deep learning framework called DeepLGF to predict drug-drug interactions by combining local and global feature extraction from biomedical knowledge graphs.
2) DeepLGF uses graph neural networks and knowledge graph embedding methods to extract local drug features from chemical structures and biological functions, and global features from the relationships between drugs and other biological entities.
3) Experimental results on prediction tasks using several drug interaction datasets demonstrate that DeepLGF outperforms other state-of-the-art models and has promising applications in drug development and clinical use.
NS-CUK Seminar: H.B.Kim, Review on "Inductive Representation Learning on Lar... (ssuser4b1f48)
1. The document summarizes the GraphSAGE framework for inductive node embedding proposed by Hamilton et al.
2. GraphSAGE leverages node features to learn an embedding function that generalizes to unseen nodes using a sample-and-aggregate approach (sketched after this list).
3. Across citation, Reddit, and other datasets, GraphSAGE improves classification F1-scores by 51% on average compared to using node features alone and outperforms strong baselines.
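A minimal sketch of the sample-and-aggregate step with a mean aggregator (names and shapes are illustrative; the paper also proposes LSTM and pooling aggregators):

```python
import numpy as np
import random

def sage_layer(nodes, feats, neighbors, W, num_samples=5):
    """One GraphSAGE layer with a mean aggregator (sketch).
    For each node: sample a fixed-size set of neighbors, average their
    features, concatenate with the node's own features, then apply a
    learned transform W ([out_dim, 2 * in_dim]) and a ReLU. Because it
    relies only on sampled local neighborhoods and node features, the
    learned function applies to nodes unseen during training."""
    out = {}
    for v in nodes:
        nbrs = random.sample(neighbors[v], min(num_samples, len(neighbors[v])))
        agg = (np.mean([feats[u] for u in nbrs], axis=0)
               if nbrs else np.zeros_like(feats[v]))        # isolated-node guard
        out[v] = np.maximum(W @ np.concatenate([feats[v], agg]), 0.0)
    return out
```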
NS-CUK Seminar: J.H.Lee, Review on "Relational Self-Supervised Learning on Gr... (ssuser4b1f48)
This document proposes a new self-supervised learning framework called Relational Graph Representation Learning (RGRL). RGRL aims to learn node representations that preserve relationships between nodes even after augmentation. It does this by focusing training on low-degree nodes and using both global and local contexts to sample anchor nodes. Experiments on 14 real-world datasets show RGRL outperforms previous methods on tasks like node classification and link prediction.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Taking AI to the Next Level in Manufacturing.pdf (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdf (Techgropse Pvt.Ltd.)
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
NS-CUK Seminar: J.H.Lee, Review on "Learnable Structural Semantic Readout for Graph Classification", ICDM 2021
1. Joo-Ho Lee
School of Computer Science and Information Engineering,
The Catholic University of Korea
E-mail: jooho414@gmail.com
2023-08-21
2. Introduction
Problem Statement
• Graph classification refers to the task of predicting class labels of input graphs; it has been applied to a wide range of graphs, including molecular structures, biological networks, and social networks
• The key challenge is to extract informative (or discriminative) graph features from topological structures (i.e., nodes and edges) and auxiliary node features
3. Introduction
Problem Statement
• A recent challenge for improving the expressivity of GNNs is leveraging the global (or graph-level) structural information
• Most studies have developed GNN modules that capture such global position information in the representations at the node level or at the subgraph level, e.g.:
- structure-aware message passing, which further considers the structural information of neighbor nodes
- graph pooling based on spectral clustering
4. Introduction
Problem Statement
• Despite these efforts, the global structural information has not yet been carefully considered in graph-level representations
• Note that every GNN classifier uses a global readout operation, which simply aggregates all remaining node (or subgraph) representations, to obtain a permutation-invariant graph-level representation (a sketch follows)
• In this work, we point out that the global readout does not consider the structural information of each node, which incurs information loss on the global structure of an input graph
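For concreteness, a minimal sketch of the standard global readouts referred to here; whichever aggregator is chosen, every node is treated identically, so the information about a node's structural position is discarded:

```python
import numpy as np

def global_readout(H, mode="sum"):
    """Standard global readout (sketch): aggregate all node representations
    H ([num_nodes, dim]) into a single permutation-invariant graph vector.
    Sum, mean, and max all ignore which structural position each node
    occupies in the graph."""
    if mode == "sum":
        return H.sum(axis=0)
    if mode == "mean":
        return H.mean(axis=0)
    if mode == "max":
        return H.max(axis=0)
    raise ValueError(f"unknown readout: {mode}")
```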
5. Introduction
Problem Statement
• To tackle this challenge, we propose a novel graph readout technique, Structural Semantic Readout (SSRead), that outputs the graph-level representation while explicitly keeping the global structural information
• Motivated by the observation that graphs are consistently morphed in the latent space according to their structural semantics, SSRead takes advantage of consistent positions in the latent space, which eventually correspond to structurally meaningful positions in each graph
6. Introduction
Contribution
• Compatibility: SSRead can be easily embedded into any GNN architecture (i.e., it is compatible with a variety of message passing and graph pooling layers) and can make use of any aggregation function for its position-level readout
• Performance: SSRead improves the classification accuracy by learning position-level graph representations with a position-aware classification layer, which exploits the global structural information
• Interpretability: With the structural prototypes optimized from training data, SSRead can segment a graph according to its structural semantics and localize discriminative regions for the target class
10. SSRead
Position-level readout based on node-position alignment
• A summarized vector is computed for each structural position k (see the sketch below)
• “SSRead is a permutation-invariant function”
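A minimal sketch of the position-level readout described on this slide, assuming cosine similarity as the alignment score and mean aggregation per position (the paper's exact alignment, prototype training, and handling of empty positions may differ):

```python
import numpy as np

def ssread(H, P, agg=lambda X: X.mean(axis=0)):
    """Sketch of position-level readout in the spirit of SSRead.
    H: [num_nodes, dim] node representations; P: [K, dim] learned
    structural prototypes. Each node is aligned to its most similar
    prototype; nodes sharing a position k are aggregated into a
    summarized vector s_k, and the graph-level representation
    concatenates s_1..s_K in a fixed position order."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)   # normalize nodes
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)   # normalize prototypes
    pos = (Hn @ Pn.T).argmax(axis=1)                    # structural position per node
    s = [agg(H[pos == k]) if (pos == k).any() else np.zeros(H.shape[1])
         for k in range(P.shape[0])]                    # s_k: summary for position k
    return np.concatenate(s)                            # [K * dim] graph-level vector
```

Grouping nodes by their aligned prototype rather than by their index is what makes the concatenated output invariant to node permutations, consistent with the quote above.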
21. Conclusion
• This paper proposes a novel graph readout technique, named SSRead, which outputs structured (or position-level) representations in order to explicitly leverage the global structural information for graph classification
• To this end, SSRead first identifies the structural positions of nodes by using the semantic alignment between node representations and structural prototypes, which are optimized to best summarize the K structural semantics observed in the training graphs
• Their experiments show that SSRead consistently enhances the classification performance and interpretability of GNN classifiers while providing great compatibility with various aggregation functions, GNN architectures, and learning frameworks