For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/intel/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-park
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Minje Park, Software Engineering Manager at Intel, presents the "Designing Deep Neural Network Algorithms for Embedded Devices" tutorial at the May 2017 Embedded Vision Summit.
Deep neural networks have shown state-of-the-art results in a variety of vision tasks. Although accurate, most of these deep neural networks are computationally intensive, creating challenges for embedded devices. In this talk, Park provides several ideas and insights on how to design deep neural network architectures small enough for embedded deployment. He also explores how to further reduce the processing load by adopting simple but effective compression and quantization techniques. He shows a set of practical applications, such as face recognition, facial attribute classification, and person detection, which can be run in near real-time without any heavy GPU or dedicated DSP and without losing accuracy.
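The talk itself contains no code, but as a rough illustration of the kind of quantization it refers to, here is a minimal sketch of symmetric post-training 8-bit weight quantization in Python/NumPy. This is a generic technique, not Park's specific pipeline:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(weights).max() / 127.0   # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy example: weights of a 3x3 convolution kernel.
w = np.random.randn(3, 3).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())  # bounded by ~scale/2
```

Storing int8 instead of float32 cuts weight memory by 4x, which is the kind of saving that makes embedded deployment feasible.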
Here, we have implemented a CNN on an FPGA using a novel convolution technique that combines pipelining with parallelism, optimizing the balance between the two.
Restricting the Flow: Information Bottlenecks for Attribution, by taeseon ryu
This is our 101st video: a review by Junho Kim of the Fundamental Team of the paper
Restricting the Flow: Information Bottlenecks for Attribution.
This paper concerns explainable AI (XAI); we hope it is helpful to everyone interested in the topic! The method uses an attribution map to produce visual explanations, directly tracing the network gradients that influenced the output. Junho Kim of the Fundamental Team walks through a detailed review from the ground up!
As always, thank you for your interest and support!
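As a loose illustration only: the paper's actual IBA method injects noise through a learned bottleneck at an intermediate layer, which this sketch does not implement. The snippet below shows merely the simpler gradient-based attribution idea the summary alludes to, in PyTorch:

```python
import torch
import torchvision.models as models

# Minimal gradient-based saliency map: attribute the top-class score to input
# pixels. Note: the IBA paper's method is different (a learned information
# bottleneck on intermediate features); this is only the generic baseline idea.
model = models.resnet18(weights=None).eval()  # untrained here, for illustration
x = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(x)[0].max()              # score of the top class
score.backward()                       # gradients of the score w.r.t. the input
saliency = x.grad.abs().max(dim=1)[0]  # per-pixel attribution map, shape (1, 224, 224)
```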
Lightweight DNN Processor Design (based on NVDLA), by Shien-Chun Luo
https://sites.google.com/view/itri-icl-dla/
(Public Information Share) This is our lightweight DNN inference processor presentation, covering a complete system solution (from Caffe prototxt to HW control files), the hardware features, and RTL simulation results for an object detection example (Tiny YOLO). We modified the open-source NVDLA (small configuration) and developed a RISC-V MCU for this acceleration system.
A Unified Framework for Computer Vision Tasks: (Conditional) Generative Model..., by Sangwoo Mo
This lab seminar introduces three recent works by Ting Chen (a minimal sketch of the pix2seq-style box-to-token interface follows the list):
- Pix2seq: A Language Modeling Framework for Object Detection (ICLR’22)
- A Unified Sequence Interface for Vision Tasks (NeurIPS’22)
- A Generalist Framework for Panoptic Segmentation of Images and Videos (submitted to ICLR’23)
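As a hedged sketch of the pix2seq idea: bounding boxes are quantized into discrete bins and serialized as tokens, so a sequence model can emit detections the way a language model emits words. The bin count and vocabulary layout below are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch of the pix2seq interface: a box becomes a short token sequence
# [ymin, xmin, ymax, xmax, class]. Bin count and token layout are illustrative.
NUM_BINS = 1000  # normalized coordinates are quantized into discrete bins

def box_to_tokens(box, class_id):
    """box = (ymin, xmin, ymax, xmax), each normalized to [0, 1]."""
    coord_tokens = [min(int(c * NUM_BINS), NUM_BINS - 1) for c in box]
    return coord_tokens + [NUM_BINS + class_id]  # class ids follow the coord bins

def tokens_to_box(tokens):
    *coords, cls = tokens
    return [t / NUM_BINS for t in coords], cls - NUM_BINS

tokens = box_to_tokens((0.1, 0.2, 0.5, 0.8), class_id=3)
print(tokens)                 # [100, 200, 500, 800, 1003]
print(tokens_to_box(tokens))  # recovers the box up to quantization error
```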
Close encounters in MDD: when Models meet Code, by lbergmans
Model-Driven Development (MDD) promises a number of advantages, which include the ability to work at higher abstraction levels, static reasoning about models, and generation of platform-specific code. To achieve this, a transformation-based approach is generally adopted, which generates code from models. In this presentation we discuss, in addition to the potential advantages, a number of possible misunderstandings and risks of MDD.
In particular, we address the risks of transformation-based software development, such as:
• It is rarely possible to generate the full functionality of a (sub-)system from models; as a result, it is necessary either to do additional ‘manual coding’ (a challenge to integrate with the generated code) or to annotate the model with smaller or larger fragments of executable code, which has several restrictions and practical consequences: for instance, it mingles abstraction levels and reduces the maintainability of both code and models.
• MDD is particularly effective when various models can be used, each optimized for a specific domain. However, when using transformation techniques, the combination of multiple models into an integrated application is far from trivial.
In this talk we propose, as a low-threshold approach, ‘bottom-up’ model-driven development. This means that the focus on domain-specific abstractions remains, as does the separation of platform-specific and platform-independent software. This approach, which is related to Domain-Driven Design and domain-specific languages (DSLs), aims to exploit the advantages of modeling in terms of abstractions while reducing the gap between models and code. This can be achieved by specifying the models in code, while separating platform-specific code from the model code (a minimal sketch of this separation follows below). An important issue is the capability to combine several different models without running into technical difficulties: we discuss existing approaches as well as a novel one, entitled Co-op, which aims to address this problem.
Finally, we discuss how the presented approach fits with the ‘scalable design’ approach for developing software that is scalable with respect to evolving requirements.
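To make the ‘models in code’ idea concrete, here is a minimal, hypothetical Python sketch (all names are invented, not taken from the talk) in which the domain model is platform-independent and the platform-specific persistence code is kept separate and swappable:

```python
# Hypothetical illustration of "models as code": the domain model is declared
# in plain Python (platform-independent), while persistence is a separate,
# swappable platform-specific concern. All names here are invented.
from dataclasses import dataclass

# --- platform-independent domain model ---
@dataclass
class Order:
    order_id: int
    total: float

# --- platform-specific code, kept separate from the model ---
class SqlOrderStore:
    def save(self, order: Order) -> str:
        return f"INSERT INTO orders VALUES ({order.order_id}, {order.total})"

class InMemoryOrderStore:
    def __init__(self):
        self.orders = {}
    def save(self, order: Order) -> None:
        self.orders[order.order_id] = order

store = InMemoryOrderStore()  # swap in SqlOrderStore without touching the model
store.save(Order(order_id=1, total=9.99))
```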
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/mathworks/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-venkataramani
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Avinash Nehemiah, Product Marketing Manager for Computer Vision, and Girish Venkataramani, Product Development Manager, both of MathWorks, present the "Deep Learning and Vision Algorithm Development in MATLAB Targeting Embedded GPUs" tutorial at the May 2017 Embedded Vision Summit.
In this presentation, you'll learn how to adopt a MATLAB-centric workflow to design, verify and deploy your computer vision and deep learning applications onto embedded NVIDIA Tegra-based platforms including Jetson TK1/TX1 and DrivePX boards. The workflow starts with algorithm design in MATLAB, which enjoys universal appeal among engineers and scientists because of its expressive power and ease-of-use. The algorithm may employ deep learning networks augmented with traditional computer vision techniques and can be tested and verified within MATLAB.
Next, a compiler auto-generates portable and optimized CUDA code from the MATLAB algorithm, which is then cross-compiled and deployed to the Tegra board. The workflow affords on-board real-time prototyping and verification controlled through MATLAB. Examples of common computer vision algorithms and deep learning networks are used to describe this workflow, and their performance benchmarks are presented.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/08/understanding-dnn-based-object-detectors-a-presentation-from-au-zone-technologies/
Azhar Quddus, Senior Computer Vision Engineer at Au-Zone Technologies, presents the “Understanding DNN-Based Object Detectors” tutorial at the May 2022 Embedded Vision Summit.
Unlike image classifiers, which merely report on the most important objects within or attributes of an image, object detectors determine where objects of interest are located within an image. Consequently, object detectors are central to many computer vision applications including (but not limited to) autonomous vehicles and virtual reality.
In this presentation, Quddus provides a technical introduction to deep-neural-network-based object detectors. He explains how these algorithms work and how they have evolved in recent years, using popular object detectors as examples. Quddus examines some of the trade-offs to consider when selecting an object detector for an application and touches on accuracy measurement. He also compares the performance of the models discussed in this presentation.
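As background for the accuracy-measurement discussion: the standard overlap measure underlying detector metrics such as mAP is intersection over union (IoU). A minimal sketch, not taken from the talk:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection typically counts as correct only above a threshold, e.g. IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143, so this would be a miss
```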
Explaining the decisions of image/video classifiers, by Vasileios Mezaris
Presentation delivered by Vasileios Mezaris at the 1st Nice Workshop on Interpretability, November 2022, Nice, France.
This presentation starts by discussing the motivation of explainability approaches for image and video classifiers. Then, we focus on three distinct problems: learning how to derive explanations for the decisions of a legacy (trained) image classifier; designing a classifier for video event recognition that can also deliver explanations for its decisions; and, taking a first look at possible explanation signals of a video summarizer. Technical details of our proposed solutions to these three problems are presented. Besides quantitative results concerning the goodness of the derived explanations, qualitative examples are also discussed in order to provide insight on the reasons behind classification errors, including possible dataset biases affecting the trained classifiers.
#6 PyData Warsaw: Deep learning for image segmentation, by Matthew Opala
Deep learning techniques have ignited great progress in many computer vision tasks such as image classification, object detection, and segmentation. Almost every month, a new method is published that achieves state-of-the-art results on some common benchmark dataset. In addition, DL is being applied to new problems in CV.
In this talk we focus on the application of DL to the image segmentation task. We want to show the practical importance of this task for the fashion industry by presenting our case study and the results achieved with various approaches and methods.
by Vikram Madan, Sr. Product Manager, AWS Deep Learning
In this workshop, we will cover deep learning fundamentals and focus on the powerful and scalable Apache MXNet open-source deep learning framework. At the end of this tutorial you'll be able to train your own deep neural network and fine-tune existing state-of-the-art models for image and object recognition. We'll also take a deep dive into setting up your deep learning infrastructure on AWS and deploying models on AWS Lambda.
Building a TensorFlow-based model that extracts the "best" frames from a video, which are then used as auto-generated thumbnails and thumbstrips. We used transfer learning on Google's Inception v3 model, which was pretrained on ImageNet data and retrained on JW Player's thumbnail library.
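JW Player's actual pipeline is not spelled out in this summary, but the transfer-learning recipe it describes maps onto a standard Keras pattern. A minimal sketch under that assumption (the dataset and labels are placeholders):

```python
import tensorflow as tf

# Sketch of the transfer-learning recipe described above: reuse ImageNet
# features from Inception v3 and retrain only a small scoring head.
# The dataset is a placeholder, not JW Player's actual thumbnail library.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         pooling="avg", input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # "good thumbnail" score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(thumbnail_dataset, epochs=5)  # thumbnail_dataset: labeled frames
```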
Similar to 社内勉強会資料_Object Recognition as Next Token Prediction
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23..., by John Andrews
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details, visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/