For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/01/mlperf-an-industry-standard-performance-benchmark-suite-for-machine-learning-a-presentation-from-facebook-and-arizona-state-university/
Carole-Jean Wu, Research Scientist at Facebook AI Research and an Associate Professor at Arizona State University, presents the “MLPerf: An Industry Standard Performance Benchmark Suite for Machine Learning” tutorial at the September 2020 Embedded Vision Summit.
The rapid growth in the use of DNNs has spurred the development of numerous specialized processor architectures and software frameworks. System and application developers need reliable performance metrics to help them select processors and frameworks. Processor and framework developers need reliable performance metrics so that they can improve their products.
This talk presents MLPerf, an industry-standard performance benchmark suite, and the design philosophies behind the benchmark suite and associated benchmarking methodologies.
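The core of an inference benchmark like MLPerf's is measuring latency or throughput under a defined load pattern. As a rough illustration only (not MLPerf's actual LoadGen harness), a single-stream-style measurement issues one query at a time and reports a tail-latency percentile; `run_model` below is a hypothetical stand-in for a real inference call:

```python
import time

def run_model(sample):
    # Hypothetical stand-in for an actual inference call.
    time.sleep(0.001)
    return sample

def benchmark_single_stream(samples, percentile=90):
    """Single-stream-style measurement: one query at a time,
    report a tail-latency percentile rather than the mean."""
    latencies = []
    for s in samples:
        start = time.perf_counter()
        run_model(s)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    idx = min(len(latencies) - 1, int(len(latencies) * percentile / 100))
    return latencies[idx]

lat = benchmark_single_stream(range(50))
print(f"p90 latency: {lat * 1000:.2f} ms")
```

Reporting a percentile instead of an average is what makes such a benchmark sensitive to the latency outliers that matter in interactive applications.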
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/08/once-for-all-dnns-simplifying-design-of-efficient-models-for-diverse-hardware-a-presentation-from-mit/
For more information about edge AI and vision, please visit:
http://www.edge-ai-vision.com
Christine Cheng, co-chair of the inference benchmark working group at MLPerf and a senior machine learning optimization engineer at Intel, delivers the presentation “MLPerf: An Industry Standard Performance Benchmark Suite for Machine Learning” at the Edge AI and Vision Alliance’s July 2020 Edge AI and Vision Innovation Forum. Cheng explains how MLPerf’s inference benchmark suite for evaluating processor performance works and how it is evolving.
Platform Requirements for CI/CD Success—and the Enterprises Leading the WayVMware Tanzu
All enterprises want to increase the speed of software delivery to get new products to market faster. The means for achieving this is often through the practice of continuous integration/continuous delivery. But speed alone isn’t enough—teams also require the ability to pivot when conditions change. They must ensure their software is stable and reliable, and be able to roll out patches and other security measures quickly and at scale.
A cloud-native platform coupled with test-driven development and CI/CD practices can help make this a reality. In this webinar, 451 Research’s Jay Lyman presents the results of his research into cloud-native platform requirements for enterprise CI/CD and DevOps success. Pivotal’s James Ma joins Lyman to discuss best practices from DevOps teams charged with running and managing cloud-native platforms, including applying CI/CD to the platform itself.
Speakers: James Ma, Pivotal and Jay Lyman, 451 Research
Critical steps in Determining Your Value Stream Management SolutionDevOps.com
In order to increase your delivery velocity, you must identify and resolve the bottlenecks in your delivery process. Value stream management (VSM) solutions capture metrics and processes that help guide your digital transformation journey.
Join Marc Hornbeek, Principal Consultant, and Jeff Keyes from Plutora as they discuss a methodology for determining a value stream management solution for your organization. It consists of critical steps including a Review of VSM Assessments, Future-State Value Stream Mapping, Road-Mapping the VSM Transformation, and more. Following these steps provides a logical and comprehensive approach to determining a value stream management solution that fits your organization’s requirements.
What will be learned:
WHY is it important to follow defined steps when determining a VSM solution?
HOW are VSM solutions determined?
WHAT is the expected outcome of a Value Stream Management solution recommendation?
Observability is the most important capability needed to manage the development, deployment, and operation of modern systems.
These slides, based on the webinar with EMA Research and LightStep, explore the importance of observability and how to address this capability for complex systems.
Extending the Visual Studio 2010 Code Editor to Visualize Runtime Intelligenc...Joe Kuemerle
Come see how PreEmptive Solutions built an editor extension for Visual Studio 2010 that provides in-line visualizations of usage and stability data collected from applications in production via Runtime Intelligence Services. Learn about the new code editor’s extensibility model, how to write editor extensions using the Managed Extensibility Framework, how to interact with the text buffer, how to create custom margins and Windows Presentation Foundation-based adornments, and how to distribute the extension.
DevOpsGuys FutureDecoded 2016 - is DevOps the AnswerDevOpsGroup
DevOps is the Answer, what was the question again?
Everyone is talking about DevOps. In this presentation we put DevOps in the context of digital transformation, show how you can move to a DevOps model and how Microsoft solutions might form a part of that, with a bit of tech demo thrown in for fun!
Understand the concept of DevOps by employing the DevOps Strategy Roadmap Lifecycle PowerPoint Presentation Slides Complete Deck. Describe how DevOps is different from traditional IT with these content-ready PPT themes. The slides also help to discuss DevOps use cases in the business, its roadmap, and its lifecycle. Explain the roles, responsibilities, and skills of DevOps engineers by utilizing this visually appealing slide deck. Demonstrate the DevOps roadmap for implementation in the organization with the help of a thoroughly researched PPT slideshow. Describe the characteristics of cloud computing, its benefits, and risks with the aid of this PPT layout. Utilize this easy-to-use DevOps transformation strategy PowerPoint slide deck to showcase the difference between cloud and traditional data centers. This ready-to-use PowerPoint layout also discusses the roadmap to integrate cloud computing in business. Highlight the uses of cloud computing and deployment models with the help of attention-grabbing DevOps implementation roadmap PowerPoint slides. https://bit.ly/3eFxYYr
Comparative Recommender System Evaluation: Benchmarking Recommendation Frame...Alan Said
Video available here http://www.youtube.com/watch?v=1jHxGCl8RXc
Recommender systems research is often based on comparisons of predictive accuracy: the better the evaluation scores, the better the recommender.
However, it is difficult to compare results from different recommender systems due to the many options in design and implementation of an evaluation strategy.
Additionally, algorithmic implementations can diverge from the standard formulation due to manual tuning and modifications that work better in some situations.
In this work we compare common recommendation algorithms as implemented in three popular recommendation frameworks.
To provide a fair comparison, we have complete control of the evaluation dimensions being benchmarked: dataset, data splitting, evaluation strategies, and metrics.
We also include results using the internal evaluation mechanisms of these frameworks.
Our analysis points to large differences in recommendation accuracy across frameworks and strategies, i.e., the same baselines may perform orders of magnitude better or worse across frameworks.
Our results show the necessity of clear guidelines when reporting evaluation of recommender systems to ensure reproducibility and comparison of results.
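The controlled-evaluation idea described above can be sketched in a few lines: fix one dataset split and one metric, then score every algorithm through the same harness. The toy popularity baseline and precision@k below are illustrative stand-ins, not any framework's actual implementation:

```python
import random

random.seed(0)

# Toy interaction data: (user, item) pairs for 10 users over 20 items.
interactions = [(u, random.randrange(20)) for u in range(10) for _ in range(8)]
random.shuffle(interactions)
cut = int(0.8 * len(interactions))
train_set, test_set = interactions[:cut], interactions[cut:]

def popularity_recommender(train, k=5):
    # Baseline: recommend the globally most popular items to every user.
    counts = {}
    for _, item in train:
        counts[item] = counts.get(item, 0) + 1
    top = sorted(counts, key=counts.get, reverse=True)[:k]
    return lambda user: top

def precision_at_k(recommend, heldout, k=5):
    # Same data split, same metric, for every recommender under test.
    relevant = {}
    for user, item in heldout:
        relevant.setdefault(user, set()).add(item)
    scores = [len(set(recommend(u)[:k]) & items) / k
              for u, items in relevant.items()]
    return sum(scores) / len(scores)

rec = popularity_recommender(train_set)
p = precision_at_k(rec, test_set)
print(f"precision@5: {p:.3f}")
```

Because the split and the metric are held constant, any score difference between two recommenders fed through this harness is attributable to the algorithms, not the evaluation setup.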
Cloud and Network Transformation using DevOps methodology : Cisco Live 2015Vimal Suba
Content presented as part of Cisco Live 2015 in San Diego
Why DevOps, and what does it mean to be a DevOps-enabled organization?
Recommendations on toolchains, a metrics framework, best practices and tips to help your IT organization embark on its DevOps journey
In this talk, I discuss various approaches to accelerating deep learning solutions from notebooks or research environments to production, and how these solutions can be transformed into an enterprise-level, end-to-end deep learning solution that can be consumed as a service by any software application, illustrated with a practical use-case example.
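A minimal sketch of the "consumed as a service" idea: wrap a loaded model artifact behind a small request/response interface so any application can call it. The linear scorer here is a hypothetical stand-in for a real deep learning model, and a production service would sit behind HTTP or gRPC rather than a direct method call:

```python
import json

class ModelService:
    """Minimal sketch of exposing a trained model as a service
    (hypothetical; real deployments add transport, batching, auth)."""

    def __init__(self, weights):
        self.weights = weights  # stand-in for a loaded model artifact

    def predict(self, request_json):
        # Accept and return JSON so any client language can consume it.
        features = json.loads(request_json)["features"]
        # Stand-in inference: a linear score instead of a real DNN forward pass.
        score = sum(w * x for w, x in zip(self.weights, features))
        return json.dumps({"score": score})

service = ModelService(weights=[0.5, -0.2, 1.0])
response = service.predict('{"features": [1.0, 2.0, 3.0]}')
print(response)
```

The JSON boundary is the design point: it decouples the consuming application from the model's framework, so the notebook-era implementation can be swapped out without touching clients.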
Network and IT Ops Series: Build Production Solutions Neo4j
Jeff Morris, Director, Neo4j: Are you building a breakthrough product or extending an existing one? Do you need to introduce new capabilities based on insights from data relationships? If so, you should consider embedding a graph database.
For software providers building products to assure quality network operations or security, using an embedded graph database may open new customer opportunities. Watch this webinar to learn how you can easily differentiate your applications and take your solutions to market faster with a native graph database like Neo4j.
These slides, based on the webinar, provide insights into DevOps research, including:
- How fast is DevOps being adopted and what is the sequencing and actual/planned penetration of DevOps capabilities across 20 markets?
- Deep insight into the highest growth areas of DevOps which include value stream management (VSM), operational analytics (OA), and test environment management (TEM).
- What strategies and approaches are enterprises leveraging to address their VSM, OA, and TEM needs?
- Where is the DevOps market headed, and what are likely to be the next important growth areas?
.NET Fest 2019. Olya Gavrysh. Machine Learning for .NET DevelopersNETFest
Did you know that machine learning can be applied to almost any project? And you no longer need to learn a new programming language (such as Python or R) or master numerical methods to do it. In this talk, I cover the basics of machine learning and how easy it is to start using it in your .NET projects with ML.NET and other solutions from Microsoft.
Experimentation to Industrialization: Implementing MLOpsDatabricks
In this presentation, drawing upon Thorogood’s experience with a customer’s global Data & Analytics division as their MLOps delivery partner, we share important learnings and takeaways from delivering productionized ML solutions and shaping MLOps best practices and organizational standards needed to be successful.
We open by providing high-level context and answering key questions such as “What is MLOps, exactly?” and “What are the benefits of establishing MLOps standards?”
The subsequent presentation focuses on our learnings and best practices. We start by discussing common challenges when refactoring experimentation use cases and how best to get ahead of these issues in a global organization. We then outline an Engagement Model for MLOps addressing people, processes, and tools. ‘Processes’ highlights how to manage the often siloed data science use-case demand pipeline for MLOps and the documentation needed to facilitate seamless integration with an MLOps framework. ‘People’ provides context around the appropriate team structures and roles to be involved in an MLOps initiative. ‘Tools’ addresses key requirements of tools used for MLOps, considering the match of services to use cases.
This presentation was made on June 9th, 2020.
Video recording of the session can be viewed here: https://youtu.be/OCB9sTUnUug
In this meetup, Sanyam Bhutani, Machine Learning Engineer at H2O.ai, gives a recap of the eighth annual ICLR (International Conference on Learning Representations) 2020, a niche deep learning conference focused on studying how to learn representations of data, which is essentially what deep learning does.
Sanyam goes through a few of his favorite papers from this year’s ICLR; note that this session may not capture the richness of all papers or allow a detailed discussion.
You can find Sanyam in our community Slack (https://www.h2o.ai/slack-community/); feel free to start a discussion with him there.
Following are the papers we will look into:
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
Your classifier is secretly an energy based model and you should treat it like one
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Reformer: The Efficient Transformer
Generative Models for Effective ML on Private, Decentralized Datasets
Once for All: Train One Network and Specialize it for Efficient Deployment
Thieves on Sesame Street! Model Extraction of BERT-based APIs
Plug and Play Language Models: A Simple Approach to Controlled Text Generation
BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning
Real or Not Real, that is the Question
MSR 2022 Foundational Contribution Award Talk: Software Analytics: Reflection...Tao Xie
MSR 2022 Foundational Contribution Award Talk on "Software Analytics: Reflection and Path Forward" by Dongmei Zhang and Tao Xie
https://conf.researchr.org/info/msr-2022/awards
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/deploying-large-models-on-the-edge-success-stories-and-challenges-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal models on the edge for a variety of use cases in consumer and enterprise markets. He examines key challenges that must be overcome before large models at the edge can reach their full commercial potential. He also highlights how Qualcomm is addressing these challenges through upgraded processor hardware, improved developer tools and a comprehensive library of fully optimized AI models in the Qualcomm AI Hub.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/scaling-vision-based-edge-ai-solutions-from-prototype-to-global-deployment-a-presentation-from-network-optix/
Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at the University of Eindhoven, presents the “Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment” tutorial at the May 2024 Embedded Vision Summit.
The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers them to bring computer vision and perceptual AI into reliable and scalable products. However, integrating recent hardware, software and algorithm innovations into prime-time-ready products is quite challenging. Scaling from a proof of concept—for example, a novel neural network architecture performing a valuable task efficiently on a new piece of silicon—to an AI vision system installed in hundreds of sites requires surmounting myriad hurdles.
First, building on Network Optix’s 14 years of experience, Professor Kaptein details how to overcome the networking, fleet management, visualization and monetization challenges that come with scaling a global vision solution. Second, Kaptein discusses the complexities of making vision AI solutions device-agnostic and remotely manageable, proposing an open standard for AI model deployment to edge devices. The proposed standard aims to simplify market entry for silicon manufacturers and enhance scalability for solution developers. Kaptein outlines the standard’s core components and invites collaborative contributions to drive market expansion.
Similar to “MLPerf: An Industry Standard Performance Benchmark Suite for Machine Learning,” a Presentation from Facebook and Arizona State University
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/whats-next-in-on-device-generative-ai-a-presentation-from-qualcomm/
Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit.
The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to allow machines to understand using multiple types of sensors. This new wave of approaches is poised to revolutionize user experiences, disrupt industries and enable powerful new capabilities. For generative AI to reach its full potential, however, we must deploy it on edge devices, providing improved latency, pervasive interaction and enhanced privacy.
In this talk, Hou shares Qualcomm’s vision of the compelling opportunities enabled by efficient generative AI at the edge. He also identifies the key challenges that the industry must overcome to realize the massive potential of these technologies. And he highlights research and product development work that Qualcomm is doing to lead the way via an end-to-end system approach—including techniques for efficient on-device execution of LLMs, LVMs and LMMs, methods for orchestration of large models at the edge and approaches for adaptation and personalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/learning-compact-dnn-models-for-embedded-vision-a-presentation-from-the-university-of-maryland-at-college-park/
Shuvra Bhattacharyya, Professor at the University of Maryland at College Park, presents the “Learning Compact DNN Models for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit.
In this talk, Bhattacharyya explores methods to transform large deep neural network (DNN) models into effective compact models. The transformation process that he focuses on—from large to compact DNN form—is referred to as pruning. Pruning involves the removal of neurons or parameters from a neural network. When performed strategically, pruning can lead to significant reductions in computational complexity without significant degradation in accuracy. It is sometimes even possible to increase accuracy through pruning.
Pruning provides a general approach for facilitating real-time inference in resource-constrained embedded computer vision systems. Bhattacharyya provides an overview of important aspects to consider when applying or developing a DNN pruning method and presents details on a recently introduced pruning method called NeuroGRS. NeuroGRS considers structures and trained weights jointly throughout the pruning process and can result in significantly more compact models compared to other pruning methods.
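The core mechanism is easy to sketch. As an illustration of the general idea only (simple magnitude pruning, not the NeuroGRS method the talk describes, which considers structures and trained weights jointly), one can zero out the smallest-magnitude weights:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude.

    A minimal sketch of unstructured magnitude pruning; structured methods
    such as NeuroGRS remove whole neurons or substructures instead.
    """
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)  # number of weights to remove
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Prune half the weights of a toy layer: the three smallest survive as zeros.
pruned = magnitude_prune([0.05, -0.8, 0.01, 1.2, -0.3, 0.02], 0.5)
```

In a real network the surviving weights would then be fine-tuned to recover any lost accuracy.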
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/introduction-to-computer-vision-with-cnns-a-presentation-from-mohammad-haghighat/
Independent consultant Mohammad Haghighat presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques, then transitions to explaining the basics of machine learning and convolutional neural networks (CNNs) and showing how CNNs are used in visual perception.
Haghighat illustrates the building blocks and computational elements of neural networks through examples. This session provides an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
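To make the core building block concrete, the following is a minimal pure-Python sketch of the convolution operation underlying CNNs (valid mode, stride 1); real frameworks add channels, strides, padding and learned kernels:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as in most DNN
    frameworks): slide the kernel over the image, summing the products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds where intensity changes left to right.
edge = conv2d([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1]],
              [[-1, 1],
               [-1, 1]])
```

The output is strongest exactly at the 0-to-1 boundary, which is the intuition behind CNN feature maps.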
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/selecting-tools-for-developing-monitoring-and-maintaining-ml-models-a-presentation-from-yummly/
Parshad Patel, Data Scientist at Yummly, presents the “Selecting Tools for Developing, Monitoring and Maintaining ML Models” tutorial at the May 2023 Embedded Vision Summit.
With the boom in tools for developing, monitoring and maintaining ML models, data science teams have many options to choose from. Proprietary tools provided by cloud service providers are enticing, but teams may fear being locked in—and may worry that these tools are too costly or missing important features when compared with alternatives from specialized providers.
Fortunately, most proprietary, fee-based tools have an open-source component that can be integrated into a home-grown solution at low cost. This can be a good starting point, enabling teams to get started with modern tools without making big investments and leaving the door open to evolve tool selection over time. In this talk, Patel presents a step-by-step process for creating an MLOps tool set that enables you to deliver maximum value as a data scientist. He shares how Yummly built pipelines for model development and put them into production using open-source projects.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/building-accelerated-gstreamer-applications-for-video-and-audio-ai-a-presentation-from-wave-spectrum/
Abdo Babukr, Accelerated Computing Consultant at Wave Spectrum, presents the “Building Accelerated GStreamer Applications for Video and Audio AI” tutorial at the May 2023 Embedded Vision Summit.
GStreamer is a popular open-source framework for creating streaming media applications. Developers often use GStreamer to streamline the development of computer vision and audio perception applications. Since perceptual algorithms are often quite demanding in terms of processing performance, in many cases developers need to find ways to accelerate key GStreamer building blocks, taking advantage of specialized features of their target processor or co-processor.
In this talk, Babukr introduces GStreamer and shows how to use it to build computer vision and audio perception applications. He also shows how to create efficient, high-performance GStreamer applications that utilize specialized hardware features.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/understanding-selecting-and-optimizing-object-detectors-for-edge-applications-a-presentation-from-walmart-global-tech/
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit.
Object detectors locate the objects in a scene, determining their precise positions while also labeling them. Object detection plays a crucial role in many vision applications, from autonomous driving to smart appliances. In many of these applications, it’s necessary or desirable to implement object detection at the edge.
In this presentation, Laskar explores the evolution of object detection algorithms, from traditional approaches to deep learning-based methods and transformer-based architectures. He delves into widely used approaches for object detection, such as two-stage R-CNNs and one-stage YOLO algorithms, and examines their strengths and weaknesses. And he provides guidance on how to evaluate and select an object detector for an edge application.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/introduction-to-modern-lidar-for-machine-perception-a-presentation-from-the-university-of-ottawa/
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern LiDAR for Machine Perception” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Laganière provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work and their main advantages and disadvantages. He also introduces different approaches to LiDAR, including scanning and flash LiDAR.
Laganière explores the types of data produced by LiDAR sensors and explains how this data can be processed using deep neural networks. He also examines the synergy between LiDAR and cameras, and the concept of pseudo-LiDAR for detection.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/vision-language-representations-for-robotics-a-presentation-from-the-university-of-pennsylvania/
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit.
In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing challenge for computer scientists and robotics engineers. In this presentation, Jayaraman provides insights into cutting-edge techniques being used to help robots better understand their surroundings, learn new skills with minimal guidance and become more capable of performing complex tasks.
Jayaraman discusses recent advances in unsupervised representation learning and explains how these approaches can be used to build visual representations that are appropriate for a controller that decides how the robot should act. In particular, he presents insights from his research group’s recent work on how to represent the constituent objects and entities in a visual scene, and how to combine vision and language in a way that permits effectively translating language-based task descriptions into images depicting the robot’s goals.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/adas-and-av-sensors-whats-winning-and-why-a-presentation-from-techinsights/
Ian Riches, Vice President of the Global Automotive Practice at TechInsights, presents the “ADAS and AV Sensors: What’s Winning and Why?” tutorial at the May 2023 Embedded Vision Summit.
It’s clear that the number of sensors per vehicle—and the sophistication of these sensors—is growing rapidly, largely thanks to increased adoption of advanced safety and driver assistance features. In this presentation, Riches explores likely future demand for automotive radars, cameras and LiDARs.
Riches examines which vehicle features will drive demand out to 2030, how vehicle architecture change is impacting the market and what sorts of compute platforms these sensors will be connected to. Finally, he shares his firm’s vision of what the landscape could look like far beyond 2030, considering scenarios out to 2050 for automated driving and the resulting sensor demand.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/computer-vision-in-sports-scalable-solutions-for-downmarkets-a-presentation-from-sportlogiq/
Mehrsan Javan, Co-founder and CTO of Sportlogiq, presents the “Computer Vision in Sports: Scalable Solutions for Downmarket Leagues” tutorial at the May 2023 Embedded Vision Summit.
Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this requires a fully automated, robust end-to-end pipeline, spanning from visual input, to player and group activities, to player and team evaluation, to planning. Despite major advancements in computer vision and machine learning, today’s sports analytics solutions are limited to top leagues and are not widely available for downmarket leagues and youth sports.
In this talk, Javan explains how his company has developed scalable and robust computer vision solutions to democratize sports analytics and offer pro-league-level insights to leagues with modest resources, including youth leagues. He highlights key challenges—such as the requirement for low-cost, low-latency processing and the need for robustness despite variations in venues. He discusses the approaches Sportlogiq tried and how it ultimately overcame these challenges, including the use of transformers and fusion of multiple types of data streams to maximize accuracy.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/detecting-data-drift-in-image-classification-neural-networks-a-presentation-from-southern-illinois-university/
Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Detecting Data Drift in Image Classification Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
An unforeseen change in the input data is called “drift,” and may impact the accuracy of machine learning models. In this talk, Tragoudas presents a novel scheme for diagnosing data drift in the input streams of image classification neural networks. His proposed method for drift detection and quantification uses a threshold dictionary for the prediction probabilities of each class in the neural network model.
The method is applicable to any drift type in images such as noise and weather effects, among others. Tragoudas shares experimental results on various data sets, drift types and neural network models to show that his proposed method estimates the drift magnitude with high accuracy, especially when the level of drift significantly impacts the model’s performance.
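In the spirit of that threshold-dictionary idea (the exact scheme in the talk may differ), a rough sketch of flagging predictions whose confidence falls below a learned per-class floor:

```python
def drift_fraction(predictions, thresholds):
    """Fraction of inputs whose top-class probability falls below that
    class's threshold -- a rough proxy for input drift.

    `predictions` is a list of (predicted_class, probability) pairs;
    `thresholds` maps each class to its expected-confidence floor.
    Both are illustrative stand-ins, not the paper's exact scheme.
    """
    low = sum(1 for cls, p in predictions if p < thresholds[cls])
    return low / len(predictions)

# Half of these toy predictions fall below their class threshold.
frac = drift_fraction(
    [("cat", 0.95), ("dog", 0.60), ("cat", 0.40), ("dog", 0.90)],
    {"cat": 0.8, "dog": 0.7},
)
```

A rising fraction over a window of inputs would signal that the input distribution has shifted away from the training data.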
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/deep-neural-network-training-diagnosing-problems-and-implementing-solutions-a-presentation-from-sensor-cortek/
Fahed Hassanat, Chief Operating Officer and Head of Engineering at Sensor Cortek, presents the “Deep Neural Network Training: Diagnosing Problems and Implementing Solutions” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Hassanat delves into some of the most common problems that arise when training deep neural networks. He provides a brief overview of essential training metrics, including accuracy, precision, false positives, false negatives and F1 score.
Hassanat then explores training challenges that arise from problems with hyperparameters, inappropriately sized models, inadequate models, poor-quality datasets, imbalances within training datasets and mismatches between training and testing datasets. To help detect and diagnose training problems, he also covers techniques such as understanding performance curves, recognizing overfitting and underfitting, analyzing confusion matrices and identifying class interaction issues.
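The metrics Hassanat reviews all derive from the four confusion-matrix counts; a minimal sketch:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from raw confusion counts:
    true/false positives (tp, fp) and false/true negatives (fn, tn)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)       # of predicted positives, how many real
    recall = tp / (tp + fn)          # of real positives, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=80, fp=20, fn=10, tn=90)
```

Note how a model can score high on accuracy yet poorly on recall when classes are imbalanced, which is why the curves and confusion matrices the talk covers matter.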
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/ai-start-ups-the-perils-of-fishing-for-whales-war-stories-from-the-entrepreneurial-front-lines-a-presentation-from-seechange-technologies/
Tim Hartley, Vice President of Product for SeeChange Technologies, presents the “AI Start-ups: The Perils of Fishing for Whales (War Stories from the Entrepreneurial Front Lines)” tutorial at the May 2023 Embedded Vision Summit.
You have a killer idea that will change the world. You’ve thought through product-market fit and differentiation. You have seed funding and a world-beating team. Best of all, you’ve caught the attention of major players in your industry. You’ve reached peak “start-up”—that point of limitless possibility—when you go to bed with the same level of energy and enthusiasm you had when you woke. And then the first proof of concept starts…
In this talk, Hartley lays out some of the pitfalls that await those building the next big thing. Using real examples, he shares some of the dos and don’ts, particularly when dealing with that big potential first customer. Hartley discusses the importance of end-to-end design, ensuring your product solves real-world problems. He explores how far the big companies will tell you to jump—and then jump again—for free. And, most importantly, how to build long-term partnerships with major corporations without relying on over-promising sales pitches.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/a-computer-vision-system-for-autonomous-satellite-maneuvering-a-presentation-from-scout-space/
Andrew Harris, Spacecraft Systems Engineer at SCOUT Space, presents the “Developing a Computer Vision System for Autonomous Satellite Maneuvering” tutorial at the May 2023 Embedded Vision Summit.
Computer vision systems for mobile autonomous machines experience a wide variety of real-world conditions and inputs that can be challenging to capture accurately in training datasets. Few autonomous systems experience more challenging conditions than those in orbit. In this talk, Harris describes how SCOUT Space has designed and trained satellite vision systems using dynamic and physically informed synthetic image datasets.
Harris describes how his company generates synthetic data for this challenging environment and how it leverages new real-world data to improve its datasets. In particular, he explains how these synthetic datasets account for and can replicate real sources of noise and error in the orbital environment, and how his company supplements them with in-space data from the first SCOUT-Vision system, which has been in orbit since 2021.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/bias-in-computer-vision-its-bigger-than-facial-recognition-a-presentation-from-santa-clara-university/
Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, presents the “Bias in Computer Vision—It’s Bigger Than Facial Recognition!” tutorial at the May 2023 Embedded Vision Summit.
As AI is increasingly integrated into various industries, concerns about its potential to reproduce or exacerbate bias have become widespread. While the use of AI holds the promise of reducing bias, it can also have unintended consequences, particularly in high-stakes computer vision applications such as facial recognition. However, even seemingly low-stakes computer vision applications such as identifying potholes and damaged roads can also present ethical challenges related to bias.
This talk explores how bias in computer vision often poses an ethical challenge, regardless of the stakes involved. Kennedy discusses the limitations of technical solutions aimed at mitigating bias, and why “bias-free” AI may not be achievable. Instead, she focuses on the importance of adopting a “bias-aware” approach to responsible AI design and explores strategies that can be employed to achieve this.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/sensor-fusion-techniques-for-accurate-perception-of-objects-in-the-environment-a-presentation-from-sanborn-map-company/
Baharak Soltanian, Vice President of Research and Development for the Sanborn Map Company, presents the “Sensor Fusion Techniques for Accurate Perception of Objects in the Environment” tutorial at the May 2023 Embedded Vision Summit.
Increasingly, perceptual AI is being used to enable devices and systems to obtain accurate estimates of object locations, speeds and trajectories. In demanding applications, this is often best done using a heterogeneous combination of sensors (e.g., vision, radar, LiDAR). In this talk, Soltanian introduces techniques for combining data from multiple sensors to obtain accurate information about objects in the environment.
Soltanian briefly introduces the roles played by Kalman filters, particle filters, Bayesian networks and neural networks in this type of fusion. She then examines alternative fusion architectures, such as centralized and decentralized approaches, to better understand the trade-offs associated with different approaches to sensor fusion as used to enhance the ability of machines to understand their environment.
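As a simplified illustration of the Kalman filter’s role in such fusion (a single scalar measurement update, far simpler than a real multi-sensor pipeline), blending two noisy range readings weighted by their variances:

```python
def fuse(est, var, meas, meas_var):
    """One scalar Kalman measurement update: blend the current estimate
    (mean `est`, variance `var`) with a sensor reading of variance
    `meas_var`. Lower-variance inputs get proportionally more weight."""
    gain = var / (var + meas_var)          # Kalman gain in [0, 1]
    new_est = est + gain * (meas - est)    # pull estimate toward measurement
    new_var = (1 - gain) * var             # fused estimate is more certain
    return new_est, new_var

# Hypothetical example: fuse a coarse radar range with a sharper LiDAR one.
est, var = 10.0, 4.0                  # radar: 10 m, variance 4
est, var = fuse(est, var, 12.0, 1.0)  # LiDAR: 12 m, variance 1
```

The fused estimate lands closer to the more trustworthy LiDAR reading, and its variance drops below either sensor’s alone, which is the essential payoff of probabilistic fusion.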
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/updating-the-edge-ml-development-process-a-presentation-from-samsara/
Jim Steele, Vice President of Embedded Software at Samsara, presents the “Updating the Edge ML Development Process” tutorial at the May 2023 Embedded Vision Summit.
Samsara (NYSE:IOT) is focused on digitizing the world of operations. The company helps customers across many industries—including food and beverage, utilities and energy, field services and government—get information about their physical operations into the cloud, so they can operate more safely, efficiently and sustainably. Samsara’s sensors collect billions of data points per day and on-device processing is instrumental to its success. The company is constantly developing, improving and deploying ML models at the edge.
Samsara has found that the traditional development process—where ML scientists create models and hand them off to firmware engineers for embedded implementation—is slow and often produces difficult-to-resolve differences between the original model and the embedded implementation. In this talk, Steele presents an alternative development process that his company has adopted with good results. In this process, firmware engineers develop a general framework that ML scientists use to design, develop and deploy their models. This enables quick iterations and fewer confounding bugs.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/combating-bias-in-production-computer-vision-systems-a-presentation-from-red-cell-partners/
Alex Thaman, Chief Architect at Red Cell Partners, presents the “Combating Bias in Production Computer Vision Systems” tutorial at the May 2023 Embedded Vision Summit.
Bias is a critical challenge in predictive and generative AI that involves images of humans. People have a variety of body shapes, skin tones and other features that can be challenging to represent completely in training data. Without attention to bias risks, ML systems have the potential to treat people unfairly, and even to make humans more likely to do so.
In this talk, Thaman examines the ways in which bias can arise in visual AI systems. He shares techniques for detecting bias and strategies for minimizing it in production AI systems.
PHP Frameworks: I want to break free (IPC Berlin 2024), by Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...), by Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. I have also often seen developers implement front-end features by simply following a framework’s standard rules, believing that this is enough to launch the project successfully, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and in this talk we will analyze which approaches have worked for me and which have not.
Transcript: Selling digital books in 2024: Insights from industry leaders - T..., by BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do..., by UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
• See how to accelerate model training and optimize model performance with active learning
• Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
• Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024, by Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy,” how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Taking these questions as a starting point, I provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud or on-premises strategy may be needed to make AI work on our own infrastructure from an enterprise perspective. I give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment, and an interactive demo offers some insights into the approaches I have already gotten working in practice.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova..., by Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Smart TV Buyer Insights Survey 2024, by 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Search and Society: Reimagining Information Access for Radical Futures, by Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality, by Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.