A framework for low communication approaches for large scale 3D convolution - Carlos Reaño González
Paper presented at the 2nd International Workshop on Deployment and Use of Accelerators (DUAC), co-located with the 51st International Conference on Parallel Processing (ICPP), August 29, 2022 (virtual event). More information at: https://duac2022.wordpress.com/
From Hours to Minutes: The Journey of Optimizing Mask-RCNN and BERT Using MXNet - Eric Haibin Lin
Training large deep learning models like Mask R-CNN and BERT takes lots of time and compute resources. Using MXNet, the Amazon Web Services deep learning framework team has been working with NVIDIA to optimize many different areas to cut the training time from hours to minutes.
LLMs for the “GPU-Poor” - Franck Nijimbere (GDG Bujumbura)
Struggling with limited GPU resources but want to leverage large language models (LLMs)? This session provides a deep dive into cutting-edge LLM compression methods like quantization, pruning, and knowledge distillation. Learn how to efficiently run LLMs without sacrificing performance. Ideal for data scientists, machine learning engineers, and AI enthusiasts keen on cost-effective solutions. Includes a 5-minute Q&A.
Hint-based configuration of co-simulations - mehmor
Simulation-based analyses of Cyber-Physical Systems are fundamental in industrial design and testing approaches. The utility of analyses relies on the correct configuration of the simulation tools, which can be highly complicated. System engineers can normally judge the results, and either evaluate multiple simulation algorithms, or change the models. However, this is not possible in a co-simulation approach. Co-simulation is a technique to perform full-system simulation by combining multiple black-box simulators, each responsible for a part of the system. In this paper, we demonstrate the difficulty of correctly configuring a co-simulation scenario using an industrial case study. We propose an approach to tackle this challenge by allowing multiple engineers, specialized in different domains, to encode some of their experience in the form of hints. These hints, together with state-of-the-art best practices, are then used to semi-automatically guide the configuration process of the co-simulation. We report the application of this approach to a use case proposed by our industrial partners, and discuss some of the lessons learned.
The Knowledge Graph Conference 2022 - Bo Wu's Presentation - Katana Graph
Real-world applications may involve different kinds of graph computing workloads, which can be categorized into graph querying, graph analytics, graph mining, and graph machine learning. To address such needs, data scientists have to manually integrate multiple systems, such as graph databases and deep learning systems. The process is tedious and error-prone while the integrated pipeline is often inefficient and complicated to use. In this talk, I will use applications in life science to show how data scientists benefit from a high-performance and unified graph computing platform.
Visual Prompt Tuning (VPT), Parameter-efficient fine-tuning
Papers presented so far: https://github.com/Lilcob/-DL_PaperReadingMeeting
Slides: https://www.slideshare.net/taeseonryu/mplug
Hello, this is the Deep Learning Paper Reading Meetup! The paper we introduce today is 'Visual Prompt Tuning for Transformers with Frozen Weights'.
The paper presents Visual Prompt Tuning (VPT), an efficient and effective alternative for fine-tuning large-scale Transformer models in vision. VPT introduces a small number of trainable parameters in the input space while keeping the model backbone frozen.
With this approach, the authors show experimentally that VPT achieves significant performance gains over other parameter-efficient tuning protocols, in many cases even surpassing full fine-tuning while reducing per-task storage cost.
The paper addresses the challenge of adapting large pre-trained Transformers to downstream tasks in terms of both effectiveness and efficiency, showing that performance on various vision-language tasks can be improved in a more efficient way.
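As a rough illustration of the core VPT idea, a handful of trainable prompt vectors are prepended to the input sequence of a frozen backbone. The sketch below is a minimal numpy version; the shapes and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy sketch of VPT's input-space prompting, assuming ViT-style patch
# embeddings of shape (num_patches, d). Only the prompt vectors would be
# trained; the Transformer backbone consuming `x` stays frozen.

num_patches, d, num_prompts = 196, 768, 5
patches = np.zeros((num_patches, d))               # frozen backbone's input embeddings
prompts = np.random.randn(num_prompts, d) * 0.02   # the only trainable parameters

x = np.vstack([prompts, patches])                  # prompts prepended to the sequence
print(x.shape)                                     # -> (201, 768)
```

The per-task storage saving follows directly: only the small `prompts` array (plus a task head) needs to be saved per task, rather than a full copy of the backbone.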
Jo Kyung-jin from image processing kindly prepared today's detailed paper review. Thank you in advance for your interest!
https://youtu.be/bVOk-hSYyZw
The Indian economy is classified into different sectors to simplify the analysis and understanding of economic activities. For Class 10, it's essential to grasp the sectors of the Indian economy, understand their characteristics, and recognize their importance. This guide will provide detailed notes on the Sectors of the Indian Economy Class 10, using specific long-tail keywords to enhance comprehension.
For more information, visit www.vavaclasses.com
Welcome to TechSoup New Member Orientation and Q&A (May 2024) - TechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
The Art Pastor's Guide to Sabbath - Steve Thomason
What is the purpose of the Sabbath Law in the Torah? It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
Model Attribute Check Company Auto Property - Celine George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
How to Create Map Views in the Odoo 17 ERP - Celine George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
240513_Thuy_Labseminar[Universal Prompt Tuning for Graph Neural Networks].pptx
1. Van Thuy Hoang
Network Science Lab
Dept. of Artificial Intelligence
The Catholic University of Korea
E-mail: hoangvanthuy90@gmail.com
2024-05-13
2. BACKGROUND: Graph Convolutional Networks (GCNs)
• Generate node embeddings based on local network neighborhoods
• Nodes have embeddings at each layer, built by repeatedly combining messages from their neighbors using neural networks
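The neighborhood aggregation above can be sketched in a few lines of numpy. This is a minimal single-layer illustration with a toy 4-node graph and a fixed stand-in weight matrix (in a real GCN the weights are learned):

```python
import numpy as np

# One GCN-style layer on a toy 4-node graph: each node's embedding is the
# mean of its neighborhood's features, transformed and passed through ReLU.

A = np.array([[0, 1, 1, 0],      # adjacency matrix of the toy graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                    # initial node features (one-hot)

A_hat = A + np.eye(4)            # add self-loops so a node keeps its own message
deg = A_hat.sum(axis=1)
A_norm = A_hat / deg[:, None]    # row-normalize: mean over the local neighborhood

W = np.ones((4, 2)) * 0.5        # stand-in for a learned weight matrix
H = np.maximum(A_norm @ X @ W, 0)  # aggregate neighbors, transform, ReLU

print(H.shape)                   # each node now has a 2-dim embedding -> (4, 2)
```

Stacking several such layers (each with its own `W`) lets information propagate from multi-hop neighborhoods, which is the "repeating" part of the bullet above.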
3. Background
• Prompt tuning has achieved great success in adapting large language models (LLMs).
• e.g. GPT-4, Llama 2, ChatGLM …
• This technique leads the way for adapting pre-trained models in a new direction.
4. Background
• Prompt tuning a pre-trained LLM
• Step 1: Pre-train an LLM using the Masked Language Modeling (MLM) objective.
• Step 2: Reformulate the downstream task as a prompt on the input sentence.
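Step 2 can be made concrete with a cloze-style reformulation: a classification input is wrapped in a template containing a [MASK] slot, and class labels are mapped to candidate words. The sketch below uses a hypothetical verbalizer and a trivial keyword heuristic standing in for a real MLM's scores; no pre-trained model is involved:

```python
# Toy illustration of reformulating sentiment classification as an MLM
# cloze task. `toy_mlm_score` is a stand-in for the probability a real
# masked language model would assign to a token at the [MASK] position.

def make_prompt(sentence: str) -> str:
    # Wrap the input in a template matching the MLM pre-training format.
    return f"{sentence} Overall, it was [MASK]."

# Verbalizer: map each class label to a candidate word for the mask slot.
VERBALIZER = {"positive": "great", "negative": "terrible"}

def toy_mlm_score(prompt: str, token: str) -> float:
    # Purely illustrative heuristic in place of a pre-trained MLM.
    positive_hint = any(w in prompt.lower() for w in ("love", "wonderful"))
    if token == "great":
        return 0.9 if positive_hint else 0.1
    return 0.1 if positive_hint else 0.9

def classify(sentence: str) -> str:
    prompt = make_prompt(sentence)
    scores = {label: toy_mlm_score(prompt, tok) for label, tok in VERBALIZER.items()}
    return max(scores, key=scores.get)

print(classify("I love this movie."))   # -> positive
```

The point is structural: the downstream task is recast into the same fill-in-the-blank form the model was pre-trained on, so the frozen model can be reused with only the prompt (and verbalizer) adapted.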
6. Background
• Existing graph prompt tuning methods for GNNs
• Some pioneering works, GPPT and GraphPrompt, apply graph prompt tuning by reformulating the downstream task as link prediction, which is consistent with the pre-training strategy they use.
7. Background
• Limitations
• There is no unified pre-training task for GNNs, making it challenging to design general prompting functions.
• Existing methods have limited applicability and are only compatible with models pre-trained by link prediction.
8. Methodology
• Graph prompt tuning
• Step 1: Template design: generate the graph template, which includes learnable components in its adjacency matrix and feature matrix.
• Step 2: Prompt optimization: we search for the optimal prompt parameters according to the downstream task.
9. Methodology
• Specialized graph prompt tuning
• Following the motivation of prompt tuning, the graph prompt design is closely related to the pre-training task involved.
• However, there are many pre-training strategies in the graph field. Can we design a universal graph prompt tuning method for all these strategies?
10. Methodology
• Universal graph prompt tuning
• GPF focuses on incorporating additional learnable parameters into the feature space of the input graph.
• The learnable vector p is added to the graph features X to generate the prompted features X∗.
• Graph Prompt Feature-Plus (GPF-plus)
• GPF-plus sets a different feature vector for each node in the graph.
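The GPF and GPF-plus updates are simple enough to sketch directly. The numpy version below is illustrative: shapes and values are made up, and in the actual method the prompt vectors are learned by gradient descent while the GNN backbone stays frozen:

```python
import numpy as np

# Minimal sketch of GPF vs. GPF-plus on node features X of shape
# (num_nodes, d). Only the prompt parameters (p or P) would be trainable.

num_nodes, d = 4, 3
X = np.arange(num_nodes * d, dtype=float).reshape(num_nodes, d)

# GPF: one shared learnable vector p, broadcast to every node.
p = np.full(d, 0.1)              # shape (d,)
X_star_gpf = X + p               # prompted features X* = X + p

# GPF-plus: a distinct learnable vector per node.
P = np.full((num_nodes, d), 0.1)
P[0] += 0.5                      # e.g. node 0 gets its own prompt
X_star_plus = X + P

print(X_star_gpf.shape, X_star_plus.shape)   # both (4, 3)
```

Because the prompt lives purely in the feature space of the input graph, it does not depend on how the backbone was pre-trained, which is what makes the approach universal across pre-training strategies.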
11. Methodology
• Rethinking the process of graph prompt tuning
• Complex template design and prompt optimization can be divided into several simple steps.
12. Methodology
• Rethinking the process of graph prompt tuning
• We assume the pre-trained GNN model is a single-layer GIN with sum pooling.
• Isolated component transformation
13. Empirical Analysis
• GPF and GPF-plus achieved better results than fine-tuning in 80% of the experiments.
14. Empirical Analysis
• Comparison with existing graph prompt-based methods
• GPF and GPF-plus achieved better results than specialized graph prompt methods by a large margin.
15. Empirical Analysis
• Full-shot (50-shot) experiments
• GPF and GPF-plus have a greater advantage over fine-tuning in few-shot scenarios.
16. Conclusions
• A universal prompt tuning method for graph neural networks, which can be applied to models pre-trained by any strategy.
• Theoretical guarantees and design principles for graph prompt tuning, offering valuable insights for future investigations in this field.