240401_Thuy_Labseminar[Train Once and Explain Everywhere: Pre-training Interp... (thanhdowork)
The document describes a graph convolutional network (GCN) model that aims to be interpretable and generalizable across different graph datasets. It uses a pre-training process on synthetic graphs to learn universal structural patterns. The model features a structural pattern learning module to capture these patterns and a hypergraph refining module that identifies explanations incorporating local structural interactions. It is shown to outperform comparable methods on graph interpretation tasks without requiring task-specific retraining.
- Tsuyoshi Murata from the Tokyo Institute of Technology discusses using deep learning approaches for complex networks and graph neural networks.
- He summarizes recent work on network embedding, including a paper on learning community structure with variational autoencoders and another on embedding multiplex networks.
- Murata then discusses applications of graph neural networks, challenges in training deep GCNs, the representational power and limitations of GNNs, and open problems in the field like handling shallow structures, dynamic graphs, and scalability issues.
This document introduces graph attention networks (GATs) for node classification of graph-structured data. GATs use self-attention mechanisms over a node's neighbors to compute hidden representations. The proposed approach achieves state-of-the-art results on four benchmarks, demonstrating the potential of attention models on graphs. GATs are computationally efficient and do not require upfront knowledge of global graph structure.
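The neighbor-attention mechanism summarized above can be sketched roughly as follows. This is a minimal single-head NumPy sketch; the shapes, random initialization, and LeakyReLU slope are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def gat_layer(h, adj, W, a, slope=0.2):
    """Minimal single-head GAT-style layer.

    h: (N, F) node features; adj: (N, N) adjacency with self-loops;
    W: (F, F2) shared weights; a: (2*F2,) attention vector.
    """
    z = h @ W                                    # shared linear transform
    n = z.shape[0]
    e = np.full((n, n), -1e9)                    # effectively -inf for non-edges
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                s = a @ np.concatenate([z[i], z[j]])
                e[i, j] = s if s > 0 else slope * s   # LeakyReLU
    alpha = np.exp(e - e.max(axis=1, keepdims=True))  # softmax per neighborhood
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ z                             # attention-weighted aggregation

rng = np.random.default_rng(0)
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
h = rng.normal(size=(3, 4))
out = gat_layer(h, adj, rng.normal(size=(4, 2)), rng.normal(size=(4,)))
print(out.shape)  # (3, 2)
```

Because the softmax is masked to each node's neighborhood, no global graph structure is needed up front, which is the efficiency point made above.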
ExaLearn Overview - ECP Co-Design Center for Machine Learning (inside-BigData.com)
In this deck from the HPC User Forum, Frank Alexander, from Brookhaven National Laboratory presents: ExaLearn Overview - ECP Co-Design Center for Machine Learning.
"ExaLearn is a co-design center for Exascale Machine Learning (ML) Technologies and is a collaboration initially consisting of experts from eight multipurpose DOE labs. Rapid growth in the amount of data and computational power is driving a revolution in machine learning (ML) and artificial intelligence (AI). Beyond the highly visible successes in machine-based natural language translation, these new ML technologies have profound implications for computational and experimental science and engineering and the exascale computing systems that DOE is deploying to support those disciplines.
To address these challenges, the ExaLearn co-design center will provide exascale ML software for use by ECP Applications projects, other ECP Co-Design Centers and DOE experimental facilities and leadership class computing facilities. The ExaLearn Co-Design Center will also collaborate with ECP PathForward vendors on the development of exascale ML software."
Watch the video: https://wp.me/p3RLHQ-kdJ
Learn more: https://www.exascaleproject.org/ecp-announces-new-co-design-center-to-focus-on-exascale-machine-learning-technologies/
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/letter
nnU-Net: a self-configuring method for deep learning-based biomedical image s... (ivaderivader)
nnU-Net is a self-configuring method for biomedical image segmentation that automatically adapts to new datasets without manual intervention or expertise. It formulates the pipeline optimization problem based on a data fingerprint capturing key dataset properties and a pipeline fingerprint describing design choices. nnU-Net uses heuristic rules derived from domain knowledge to select pipeline configurations according to the data fingerprint. It was tested on 10 challenges and 19 diverse datasets, outperforming specialized methods and demonstrating that pipeline configuration is more important than architectural variations. However, nnU-Net may require modifications for some state-of-the-art tasks.
This document proposes GraphTrans, a framework that uses graph neural networks (GNNs) to learn local structure and a modified Transformer to learn global structure from graphs. It motivates this design by the difficulty of capturing long-range dependencies with GNNs alone. GraphTrans leverages a GNN backbone to learn local structure and adds a Transformer to pool local embeddings and extract global structure. The document evaluates GraphTrans on biomolecular and computer-programming benchmark datasets and analyzes the impact of adding a CLS token.
Graph neural networks (GNNs) are neural network architectures that operate on graph-structured data. GNNs iteratively update node representations by aggregating neighbor representations and can be used for tasks like node classification. There are many frontiers for GNN research, including graph generation/transformation, dynamic/heterogeneous graphs, and applications in domains that can be modeled with graphs like social networks and drug discovery. Automated machine learning techniques are also being applied to GNNs.
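The iterative neighbor-aggregation update mentioned above can be sketched as a single generic message-passing step. The mean aggregator, tanh nonlinearity, and weight shapes here are illustrative assumptions:

```python
import numpy as np

def gnn_step(h, adj, w_self, w_neigh):
    """One message-passing update: combine each node's own state
    with the mean of its neighbors' states."""
    deg = adj.sum(axis=1, keepdims=True)
    deg = np.maximum(deg, 1.0)               # guard isolated nodes
    neigh_mean = (adj @ h) / deg             # aggregate neighbor representations
    return np.tanh(h @ w_self + neigh_mean @ w_neigh)

rng = np.random.default_rng(1)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
h = rng.normal(size=(3, 5))
h2 = gnn_step(h, adj, rng.normal(size=(5, 5)), rng.normal(size=(5, 5)))
print(h2.shape)  # (3, 5)
```

Stacking several such steps lets information propagate multiple hops, which is the basis of the node classification use case described above.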
NS-CUK Seminar: H.E.Lee, Review on "Structural Deep Embedding for Hyper-Net... (ssuser4b1f48)
This document summarizes a research paper on learning representations of heterogeneous hypernetworks. It introduces the challenges of indecomposability and structure preservation when applying existing network embedding methods to heterogeneous hypernetworks. It then proposes a new method called DHNE that uses deep autoencoders to learn embeddings that can preserve both local and global structure while addressing the non-decomposability of hyperedges. The document outlines the key contributions of DHNE and experiments showing it achieves good performance on network reconstruction and link prediction tasks.
NS-CUK Seminar: S.T.Nguyen, Review on "Are More Layers Beneficial to Graph Tr... (ssuser4b1f48)
This document summarizes a research paper on improving the depth of graph transformer models using local attention to graph substructures. The paper proposes adding substructure tokens to the graph and applying local attention between each substructure and its nodes. This addresses limitations in graph transformers' ability to learn substructure features as depth increases. The model achieves state-of-the-art results on graph benchmarks, with performance continuing to improve as depth increases up to 48 layers, demonstrating it alleviates problems of shrinking attention capacity with depth. Ablation studies show local attention and substructure encoding are important for the model's performance, especially on deeper models and datasets where specific substructures are key features.
The document summarizes two seminar presentations. The first presentation discusses exploiting social networks for internet search by comparing how information is published and located on the web vs. social networks. It describes a study where a social network-based search engine called PeerSpective was able to find URLs not indexed by Google. The second presentation discusses an experimental study of the graph coloring problem on human subject networks, where people collaborated over various network topologies to solve the problem with different information levels. Key results included subjects successfully solving the problem and structural properties influencing behavior and dynamics.
Image Segmentation Using Deep Learning: A survey (NUPUR YADAV)
1. The document discusses various deep learning models for image segmentation, including fully convolutional networks, encoder-decoder models, multi-scale pyramid networks, and dilated convolutional models.
2. It provides details on popular architectures like U-Net, SegNet, and models from the DeepLab family.
3. The document also reviews datasets commonly used to evaluate image segmentation methods and reports accuracies of different models on the Cityscapes dataset.
This document provides a summary of large scale machine learning frameworks. It discusses out-of-core learning, data parallelism using MapReduce, graph parallel frameworks like Pregel, and model parallelism using parameter servers. Spark is described as easy to use with a well-designed API, while GraphLab is designed for ML researchers with vertex programming. Parameter servers are presented as aiming to support very large learning but still being in early development.
Towards Deep Attention in Graph Neural Networks: Problems and Remedies.pptx (ssuser2624f71)
This document discusses graph convolutional networks (GCNs) and graph attention networks (GATs). It proposes a new method called AERO-GNN that uses cumulative attention across layers to allow GATs to remain expressive in deep layers. The method assigns different importance weights to nodes at different hop distances using hop attention. Experiments on node classification benchmarks show AERO-GNN outperforms other GAT baselines.
NS-CUK Seminar: S.T.Nguyen, Review on "Do We Really Need Complicated Model Ar... (ssuser4b1f48)
Nguyen Thanh Sang presented a paper on developing a simpler model called GraphMixer for temporal link prediction. GraphMixer achieves better performance than more complex baseline models like RNNs and self-attention networks. It uses a fixed time encoding function and MLP mixer to summarize link information, avoiding complex neural architectures. Experiments on five datasets show GraphMixer converges faster and generalizes better. The success of the simpler GraphMixer model suggests complex neural architectures and data processing may not always be needed for temporal network tasks.
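The "fixed time encoding function" mentioned above maps a time gap to cosine features with non-learned frequencies, so no parameters need training for the temporal signal. A rough sketch (the geometric frequency spacing and dimension are assumptions, not GraphMixer's exact constants):

```python
import math

def time_encode(t, dim=8, base=2.0):
    """Fixed (non-trainable) time encoding: cosines at geometrically
    spaced frequencies, so nearby timestamps get similar codes."""
    return [math.cos(t / base ** i) for i in range(dim)]

print(time_encode(0.0))  # all entries are cos(0) = 1.0
```

These fixed features can then be summarized by a simple MLP-mixer over recent links, avoiding RNN or self-attention machinery entirely.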
Memory Efficient Graph Convolutional Network based Distributed Link Prediction (miyurud)
Graph Convolutional Networks (GCNs) have found multiple applications in graph-based machine learning. However, training GCNs on large graphs with billions of nodes and edges and rich node attributes consumes significant time and memory, making it impossible to train such GCNs on general-purpose commodity hardware; such use cases demand high-end servers with accelerators and ample memory. In this paper we implement memory-efficient GCN-based link prediction on top of a distributed graph database server called JasmineGraph. Our approach is based on federated training on partitioned graphs with multiple parallel workers. We conduct experiments with three real-world graph datasets: DBLP-V11, Reddit, and Twitter. We demonstrate that our approach produces optimal performance for a given hardware setting. JasmineGraph was able to train a GCN on the largest dataset, DBLP-V11 (>10 GB), in 20 hours and 24 minutes for 5 training rounds and 3 epochs by partitioning it into 16 partitions with 2 workers on a single server, while the conventional training method could not process it at all due to lack of memory. The second-largest dataset, Reddit, took 9 hours 8 minutes with conventional training, while JasmineGraph took only 3 hours and 11 minutes with 8 partitions and 4 workers on the same hardware, a 3x improvement. For the Twitter dataset, JasmineGraph delivered a 5x improvement (10 hours 31 minutes vs. 2 hours 6 minutes; 16 partitions, 16 workers).
Shibo Hou is a graduate student seeking job opportunities with strong technical skills including programming languages like C/C++, Java, and MATLAB. He has a Master's degree in Computer Engineering from North Carolina State University and relevant project experience designing databases and websites. His research focused on wireless communication systems and green communication techniques.
J. Park, J. Song, ICLR 2022, MLILAB, KAIST AI (MLILAB)
This paper proposes GraphENS, a method to synthesize ego networks to address the neighbor memorization problem that causes GNN models to overfit to minor classes in class-imbalanced node classification tasks. GraphENS samples ego networks from minor and target classes, assigns neighbors through sampling, mixes node features based on saliency, and attaches the synthesized ego network to the original graph to construct a balanced graph. Experiments show GraphENS mitigates both node and neighbor memorization, outperforming baselines on citation and co-purchase networks.
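The saliency-based feature mixing described above can be sketched as a masked interpolation. This is a simplified illustration of the idea, not the paper's exact formulation; the mixing coefficient and normalization are assumptions:

```python
import numpy as np

def mix_features(x_minor, x_target, saliency, lam=0.7):
    """Blend a minor-class node's features with a target node's,
    down-weighting the target's most salient (class-discriminative)
    dimensions so minor-class semantics dominate the synthetic node."""
    w = 1.0 - saliency / saliency.max()      # low weight where target is salient
    return lam * x_minor + (1 - lam) * w * x_target

x_minor = np.array([1.0, 0.0, 2.0])
x_target = np.array([0.0, 4.0, 1.0])
sal = np.array([0.1, 0.9, 0.5])   # hypothetical per-dimension saliency
m = mix_features(x_minor, x_target, sal)
print(m.shape)  # (3,)
```

The synthesized node would then be attached to the graph with sampled neighbors, yielding the balanced graph the summary describes.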
NS-CUK Seminar: V.T.Hoang, Review on "GOAT: A Global Transformer on Large-sca... (ssuser4b1f48)
This document presents GOAT, a scalable global transformer model for graph-structured data. GOAT uses a novel local attention module to absorb rich local information from node neighborhoods, in addition to a global attention mechanism that allows each node to attend to all other nodes. The document reports that GOAT achieves strong performance on large-scale homophilous and heterophilous node classification benchmarks, demonstrating its ability to leverage both local and global graph information for prediction tasks. Ablation studies on codebook size further indicate GOAT's effectiveness at modeling long-range interactions through its global attention.
NS-CUK Seminar: J.H.Lee, Review on "Graph Propagation Transformer for Graph Representation Learning", IJCAI 2023 (ssuser4b1f48)
More Related Content
Similar to NS-CUK Joint Journal Club: V.T.Hoang, Review on "NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs", ICLR 2023
NS-CUK Seminar: H.B.Kim, Review on "Cluster-GCN: An Efficient Algorithm for ... (ssuser4b1f48)
This document summarizes the Cluster-GCN method for training graph convolutional networks (GCNs) in a memory-efficient and scalable way. The key contributions of Cluster-GCN are that it achieves the best memory usage for training GCNs on large graphs, especially deep GCNs, while maintaining training speed comparable to or faster than existing methods. Experimental results demonstrate that Cluster-GCN can efficiently train very deep GCNs on large graphs and achieve state-of-the-art performance.
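The memory saving comes from restricting each training step to the induced subgraph of a single node cluster. A rough sketch of that batching idea follows; the clustering itself (METIS in the original method) is replaced here by a given assignment, which is an assumption for illustration:

```python
import numpy as np

def cluster_batches(adj, features, assignment):
    """Yield per-cluster induced subgraphs: each batch holds only the
    adjacency block and feature rows of one cluster's nodes, so a full
    N x N adjacency never needs to be resident during a training step."""
    for c in sorted(set(assignment)):
        idx = [i for i, a in enumerate(assignment) if a == c]
        yield idx, adj[np.ix_(idx, idx)], features[idx]

adj = np.eye(6)
feats = np.arange(12.0).reshape(6, 2)
batches = list(cluster_batches(adj, feats, [0, 0, 1, 1, 1, 0]))
print([len(b[0]) for b in batches])  # cluster sizes: [3, 3]
```

Edges crossing cluster boundaries are dropped in this simple form, which is the approximation Cluster-GCN trades for its memory efficiency.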
This document summarizes a research paper on Gated Graph Sequence Neural Networks (GGS-NNs). GGS-NNs incorporate time dependencies and higher-order relationships in graphs using GRU-based updates, and generate an output sequence to allow for graph-level analysis. The model can be used for a wide range of tasks involving logical formulas. Gradients are computed via backpropagation through time, allowing the model to capture long-term dependencies between output time steps. Unlike previous graph neural networks, node representations in GGS-NNs can be updated over time using label information.
NS-CUK Journal club: H.E.Lee, Review on "A biomedical knowledge graph-based ... (ssuser4b1f48)
1) The document proposes a deep learning framework called DeepLGF to predict drug-drug interactions by combining local and global feature extraction from biomedical knowledge graphs.
2) DeepLGF uses graph neural networks and knowledge graph embedding methods to extract local drug features from chemical structures and biological functions, and global features from the relationships between drugs and other biological entities.
3) Experimental results on prediction tasks using several drug interaction datasets demonstrate that DeepLGF outperforms other state-of-the-art models and has promising applications in drug development and clinical use.
NS-CUK Seminar: H.B.Kim, Review on "Inductive Representation Learning on Lar... (ssuser4b1f48)
1. The document summarizes the GraphSAGE framework for inductive node embedding proposed by Hamilton et al.
2. GraphSAGE leverages node features to learn an embedding function that generalizes to unseen nodes using a sample and aggregate approach.
3. Across citation, Reddit, and other datasets, GraphSAGE improves classification F1-scores by 51% on average compared to using node features alone and outperforms strong baselines.
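The sample-and-aggregate step in point 2 can be sketched as follows. This is a minimal mean-aggregator sketch; the sample size, seeding, and concatenation rule are illustrative assumptions rather than the framework's exact configuration:

```python
import random
import numpy as np

def sage_embed(node, feats, neighbors, num_samples=2, seed=0):
    """GraphSAGE-style step: sample a fixed-size neighbor set,
    mean-aggregate their features, and concatenate with the node's own.
    Fixed-size sampling keeps per-node cost bounded on large graphs."""
    rng = random.Random(seed)
    nbrs = neighbors[node]
    sampled = rng.sample(nbrs, min(num_samples, len(nbrs)))
    agg = np.mean([feats[n] for n in sampled], axis=0)
    return np.concatenate([feats[node], agg])

feats = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]),
         2: np.array([1.0, 1.0]), 3: np.array([2.0, 0.0])}
neighbors = {0: [1, 2, 3]}
emb = sage_embed(0, feats, neighbors)
print(emb.shape)  # (4,)
```

Because the aggregation is a learned function of features rather than a per-node lookup table, the same procedure applies to nodes never seen during training, which is what makes the method inductive.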
NS-CUK Seminar: J.H.Lee, Review on "Relational Self-Supervised Learning on Gr... (ssuser4b1f48)
This document proposes a new self-supervised learning framework called Relational Graph Representation Learning (RGRL). RGRL aims to learn node representations that preserve relationships between nodes even after augmentation. It does this by focusing training on low-degree nodes and using both global and local contexts to sample anchor nodes. Experiments on 14 real-world datasets show RGRL outperforms previous methods on tasks like node classification and link prediction.
NS-CUK Seminar: H.E.Lee, Review on "Structural Deep Embedding for Hyper-Netw... (ssuser4b1f48)
This document presents a Deep Hyper-Network Embedding (DHNE) model to learn low-dimensional representations of hypernetworks. The DHNE model uses an autoencoder and fully connected layers to preserve both local and global proximity in the embedding space. Tested on four datasets, DHNE outperforms other network embedding methods on tasks like network reconstruction, link prediction, and classification. DHNE handles the indecomposability of hyperedges using a nonlinear tuple-wise similarity function.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
A Comprehensive Guide to DeFi Development Services in 2024 (Intelisync)
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
3. 3
Graph embedding learning
• The goal: mapping individual nodes to vector points in a latent space.
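As a concrete illustration of this goal, the toy sketch below learns one vector per node by factorizing the adjacency matrix, so connected nodes end up with similar embeddings. The graph, function name, and hyperparameters are illustrative choices, not taken from the slides.

```python
# Minimal sketch of graph embedding: learn Z so that adj[i, j] ≈ Z[i] · Z[j].
import numpy as np

def embed_nodes(adj, dim=2, lr=0.05, epochs=500, seed=0):
    """Gradient descent on ||Z Z^T - A||^2, a toy stand-in for embedding methods."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    Z = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(epochs):
        grad = (Z @ Z.T - adj) @ Z  # gradient of the reconstruction loss (up to a factor)
        Z -= lr * grad
    return Z

# Two triangles joined by one edge: nodes 0-2 form one cluster, 3-5 the other.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

Z = embed_nodes(A)
err = np.linalg.norm(Z @ Z.T - A)  # reconstruction error after training
print(Z.shape, err)
```

Nodes in the same triangle tend to land near each other in the latent space, which is exactly the property downstream tasks exploit.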
4. 4
GNN-based models
• GNN-based approaches provide fast, practical training and state-of-the-art results on benchmark datasets for downstream tasks such as node classification.
6. 6
Problems
Most existing GNN architectures have two fundamental weaknesses that restrict their learning ability on general graph-structured data:
• Over-smoothing: GNNs cannot be made deep without node representations converging, forcing a trade-off between feature and structure information.
• Noise from neighbours: GNNs seem tailor-made for homophilic (assortative) graphs and absorb noise from dissimilar neighbours otherwise.
7. 7
Problems
The existing graph Transformers:
• Treat the nodes as independent tokens
• Construct a single sequence of all node tokens to train the model
8. 8
Contributions
Training such models on large graphs costs huge GPU resources. NAGphormer addresses this with:
• Hop2Token: turns each node's multi-hop neighbourhoods into a short token sequence
• An attention-based readout function over the hop tokens
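The Hop2Token idea can be sketched as follows: for each node, build a sequence of K + 1 tokens, where token k aggregates information from up to k hops away (here via powers of the symmetrically normalized adjacency matrix). The function name, normalization choice, and toy graph are our assumptions; the paper's exact formulation may differ.

```python
# Hedged sketch of Hop2Token: per-node token sequences from multi-hop neighbourhoods.
import numpy as np

def hop2token(adj, X, K):
    """Return tokens of shape (n_nodes, K + 1, n_features); token 0 is the raw feature."""
    A_hat = adj + np.eye(adj.shape[0])
    d = A_hat.sum(axis=1)
    P = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization D^-1/2 (A + I) D^-1/2
    tokens = [X]
    H = X
    for _ in range(K):
        H = P @ H  # token k mixes information from up to k hops away
        tokens.append(H)
    return np.stack(tokens, axis=1)

# Toy graph: a 4-node cycle with 2-dimensional node features.
A = np.zeros((4, 4))
for i in range(4):
    j = (i + 1) % 4
    A[i, j] = A[j, i] = 1.0
X = np.arange(8, dtype=float).reshape(4, 2)

T = hop2token(A, X, K=3)
print(T.shape)  # each node now owns a short sequence a Transformer can attend over
```

Because each node gets its own fixed-length sequence, the Transformer no longer needs one giant sequence over all nodes, which is what makes mini-batch training on large graphs feasible.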
12. 12
IMPLEMENTATION DETAILS
Structural encoding
• Besides the attribute information of nodes, structural information is also a crucial feature for graph mining tasks.
• NAGphormer uses the eigenvectors of the Laplacian matrix of the graph to capture the structural information of nodes.
18. 18
Pros & Cons
Pros
• What Hop2Token can do:
• Capture global information (up to K hops)
• Help distinguish isomorphic subgraphs
• Attention layer: learns more informative node representations from the multi-hop neighbourhoods
Cons
• Expressiveness is at most that of the 1-dimensional Weisfeiler-Lehman test
• Fails to capture fine-grained graph structure
• Noise from the neighbourhood
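The 1-dimensional Weisfeiler-Lehman bound mentioned above refers to colour refinement: nodes are iteratively recoloured by hashing their own colour together with the multiset of neighbour colours. A classic failure case, sketched below, is that a 6-cycle and two disjoint triangles produce identical colour histograms, so no model bounded by 1-WL can tell them apart. The helper name and round count are our choices.

```python
# Sketch of the 1-WL (colour refinement) test and a pair of graphs it cannot distinguish.
from collections import Counter

def wl_histogram(edges, n, rounds=3):
    """Run colour refinement for `rounds` iterations and return the colour histogram."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colors = [0] * n
    for _ in range(rounds):
        # New colour = own colour plus the sorted multiset of neighbour colours.
        signatures = [(colors[u], tuple(sorted(colors[v] for v in adj[u])))
                      for u in range(n)]
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        colors = [relabel[sig] for sig in signatures]
    return Counter(colors)

cycle6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
two_triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]

# Both graphs are 2-regular, so every node keeps the same colour forever.
print(wl_histogram(cycle6, 6) == wl_histogram(two_triangles, 6))  # prints True
```

Any architecture whose expressiveness is capped by this test inherits the same blind spot, which is why the bound is listed as a limitation.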