For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/06/the-future-of-ai-is-here-today-deep-dive-into-qualcomms-on-device-ai-offerings-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director and Head of AI/ML Product Management at Qualcomm, presents the “Future of AI is Here Today: Deep Dive into Qualcomm’s On-Device AI Offerings” tutorial at the May 2022 Embedded Vision Summit.
As a leader in on-device AI, Qualcomm is in a unique position to deliver optimized and now personalized AI experiences to consumers, made possible via innovation in hardware technology and investment across the entire software stack. This investment is now deeply rooted in all of our product offerings, spread across multiple verticals from mobile to automotive.
In this talk, Sukumar explores the high-performance, low-power Hexagon processor — the core of his company’s latest 7th Generation AI Engine — and shows how the company scales it across the range of products that Qualcomm offers. He also highlights Qualcomm’s investment in advanced techniques such as the latest quantization approaches and neural architecture search to accelerate AI deployment. Finally, he shares details on how his company incorporates these technologies into AI solutions that power Qualcomm’s vision of on-device AI — and shows how these solutions are employed in real-world use cases across many verticals.
As generative AI adoption grows at record-setting speeds and computing demands increase, hybrid processing is more important than ever. But just like traditional computing evolved from mainframes and thin clients to today’s mix of cloud and edge devices, AI processing must be distributed between the cloud and devices for AI to scale and reach its full potential. In this talk you’ll learn:
• Why on-device AI is key
• Which generative AI models can run on device
• Why the future of AI is hybrid
• Qualcomm Technologies’ role in making hybrid AI a reality
GPT-4 can pass the US bar exam, but before you go expecting robot lawyers to take over the courtroom, hold your horses, cowboys – we're not quite there yet. That being said, AI is becoming increasingly human-like, and as a VC we need to start thinking about how this new wave of technology will affect the way we build and run businesses. What do we need to do differently? How can we make sure our investment strategies reflect these changes? It's a brave new world out there, and we’ve got to keep the big picture in mind!
Sharing here with you what we at Cavalry Ventures found out during our Generative AI deep dive.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/06/powering-the-connected-intelligent-edge-and-the-future-of-on-device-ai-a-presentation-from-qualcomm/
Ziad Asghar, Vice President of Product Management at Qualcomm, presents the “Powering the Connected Intelligent Edge and the Future of On-Device AI” tutorial at the May 2022 Embedded Vision Summit.
Qualcomm is leading the realization of the “connected intelligent edge,” where the convergence of wireless connectivity, efficient computing and distributed AI will power the devices and experiences that you deserve. In this talk, Asghar explores some of the key challenges in deploying AI across diverse edge products in markets including mobile, automotive, XR, IoT, robotics and PCs — and some of the important differences in the AI requirements of these applications.
Asghar identifies unique AI features that will be needed as physical and digital spaces converge in what is now called the “metaverse”. He highlights key AI technologies offered within Qualcomm products, and how the company connects them to enable the connected intelligent edge. Finally, he shares his vision of the future of on-device AI — including on-device learning, efficient models, state-of-the-art quantization, and how Qualcomm plans to make this vision a reality.
Research presentation on the impact of AI on the advertising and CX landscape.
We start with a short introduction to AI, the causes of the recent focus and hype, and a simplified model for compartmentalising different AI models.
The presentation constructs a framework to assess the potential impact of AI against:
- the complexity of the work
- the type of work being done - analysis, decision-making, and execution.
Based on the framework, the presentation argues for four possible futures:
- Creativity at the centre
- Digitalization of marketing
- Efficiency of marketing
- Impact of marketing
Furthermore, the presentation lists dangers and limitations inherent in the technology, as well as how agencies or individuals can get started to navigate the unknown future.
At its conclusion, it's argued that AI will likely have a substantial impact on the advertising and marketing industry. The agency business model is already under strain and will need to quickly adapt in light of significant threats posed by continued advancements in automation and generative AI models.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/06/accelerating-newer-ml-models-using-the-qualcomm-ai-stack-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director and Head of AI/ML Product Management at Qualcomm Technologies, presents the “Accelerating Newer ML Models Using the Qualcomm AI Stack” tutorial at the May 2023 Embedded Vision Summit.
The Qualcomm AI Stack revolutionizes how Qualcomm thinks about AI software and provides the ultimate tool and user interface to enable ecosystem partners to create faster and smarter AI applications for all embedded form factors. Focusing on real user experience challenges centered around model deployment, Sukumar explains how the Snapdragon developer community leverages data types, quantization and neural architecture search—among others—to optimize complex AI architectures for emerging use cases.
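To make the quantization technique concrete, here is a minimal, hedged sketch of post-training dynamic quantization in PyTorch. It illustrates the general idea only; the Qualcomm AI Stack provides its own quantization tooling for Snapdragon targets, and the model below is an arbitrary example.

```python
# Minimal sketch of post-training dynamic quantization in PyTorch.
# This shows the general technique only; the Qualcomm AI Stack has its
# own quantization workflow for Snapdragon targets.
import torch
import torch.nn as nn

# A small example model (assumed purely for illustration).
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Convert Linear layer weights to int8; activations stay in float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```

The quantized model runs with smaller weights and faster integer kernels on supported backends, which is the same trade-off the abstract describes for embedded deployment.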
Unlocking the Power of Generative AI: An Executive's Guide (PremNaraindas1)
Generative AI is here, and it can revolutionize your business. With its powerful capabilities, this technology can help companies create more efficient processes, unlock new insights from data, and drive innovation. But how do you make the most of these opportunities?
This guide will provide you with the information and resources needed to understand the ins and outs of Generative AI, so you can make informed decisions and capitalize on the potential. It covers important topics such as strategies for leveraging large language models, optimizing MLOps processes, and best practices for building with Generative AI.
Understanding generative AI models: A comprehensive overview (StephenAmell4)
Generative AI refers to a branch of artificial intelligence that focuses on enabling machines to generate new and original content. Unlike traditional AI systems that follow predefined rules and patterns, generative AI leverages advanced algorithms and neural networks to autonomously produce outputs that mimic human creativity and decision-making.
How do OpenAI GPT Models Work - Misconceptions and Tips for Developers (Ivo Andreev)
Have you ever wondered why GPT models work? Do you ask questions like:
◉ How does GPT work? Why does the same problem receive different answers for different users? Is there a way to improve explainability?
◉ Can a GPT model provide its sources? Why does Bing Chat work differently? What are my options for improving performance and completions?
◉ How can I work with data in my enterprise? What practical business cases could a generative AI model help solve?
If you are tired of sessions just scratching the surface of OpenAI GPT, this one will go deeper and answer questions like why, why not and how.
Key Terms; ChatGPT Enterprise; Top Questions; Enterprise Data; Azure Search; Functions; Embeddings; Context Encoding; General Intelligence; Emerging Abilities; Chain of Thought; Plugins; Multimodal with DALL-E; Project Florence
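To make the "Embeddings" and "Enterprise Data" items above concrete, here is a minimal sketch of embedding-based retrieval: documents are embedded once, and at query time the closest chunk is retrieved and placed into the prompt as context. The vectors below are tiny placeholders; in practice they would come from an embedding model.

```python
# Minimal sketch of embedding-based retrieval for grounding GPT answers
# in enterprise data. The vectors here are tiny placeholders; a real
# system would obtain them from an embedding model.
import numpy as np

documents = {
    "vacation policy": np.array([0.9, 0.1, 0.0]),
    "expense reporting": np.array([0.1, 0.8, 0.1]),
    "security guidelines": np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_embedding = np.array([0.85, 0.15, 0.05])  # placeholder query vector

# Pick the document whose embedding is closest to the query,
# then prepend it to the prompt as grounding context.
best = max(documents, key=lambda name: cosine(documents[name], query_embedding))
prompt = f"Answer using this context:\n{best}\n\nQuestion: What is our vacation policy?"
print(best)  # "vacation policy"
```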
🔹How will AI-based content-generating tools change your mission and products?
🔹This complimentary webinar [ON-DEMAND] explores multiple use cases that drive adoption among early-adopter customers, giving product leaders insights into the future of generative AI-powered businesses and the potential of generative AI for driving innovation and improving business processes.
GENERATIVE AI, THE FUTURE OF PRODUCTIVITY (Andre Muscat)
Discuss the impact and opportunity of using Generative AI to support your development and creative teams
* Explore business challenges in content creation
* Cost-per-unit of different types of content
* Use AI to reduce cost-per-unit
* New partnerships being formed that will have a material impact on the way we search and engage with content
Part 4 of a 9 Part Research Series named "What matters in AI" published on www.andremuscat.com
This talk overviews my background as a female data scientist, introduces many types of generative AI, discusses potential use cases, highlights the need for representation in generative AI, and showcases a few tools that currently exist.
Presenting the landscape of AI/ML in 2023: a quick summary of the last 10 years of progress, the current situation, and a look at what is happening behind the scenes.
In light of ChatGPT and the growing generative AI technologies, this presentation revisits my 2020 article on the need for Imagination Performance through a 2023 lens. Much has changed in three years on the technology front, but nothing has changed on the human front. We still need meaning and to feel connected. This presentation surfaces the real human value proposition beyond generative AI tools.
Lydia Kostopoulos, PhD
LKCYBER.COM
Generative AI Use-cases for Enterprise - First Session (Gene Leybzon)
In this presentation, we will delve into the exciting applications of Generative AI across various business domains. Leveraging the capabilities of artificial intelligence and machine learning, Generative AI allows for dynamic, context-aware user interfaces that adapt in real-time to provide personalized user experiences. We will explore how this transformative technology can streamline design processes, facilitate user engagement, and open the doors to new forms of interactivity.
Global Governance of Generative AI: The Right Way Forward (Lilian Edwards)
AI regulation has been a hot topic since the rise of machine learning (ML) in the “big data” era, but generative AI or “foundation model” tools like ChatGPT, DALL-E 2 (now 3) and Copilot, like ML before them, may create serious societal risks, including embedding and outputting bias; generating fake news, illegal or harmful content and inadvertent “hallucinations”; infringing existing laws relating, e.g., to copyright and privacy; as well as environmental, competition and workplace concerns.
Many nations are now considering regulation to address these worries, and can draw on a number of basic and hybrid models of governance. This paper canvasses models of mandatory comprehensive legislation (where the EU AI Act hopes to place itself as a gold-standard model); vertical mandatory legislation (where China has quietly taken a lead); adapting existing law (see the many copyright lawsuits underway); and voluntary “soft law” such as codes of ethics, “blueprints” or industry guidelines. Both the domestic and international regulatory scenes for AI are also increasingly politicised, as the rise of "AI safety" hype shows. Against this backdrop, what choices should smaller countries such as the UK and Australia make? Will international harmonisation lead to a race to the top, as with the GDPR, or to the bottom - rule by tech, for tech?
The numbers tell the story: 84% of C-suite executives believe they must leverage artificial intelligence (AI) to achieve their growth objectives, yet 76% report they struggle with how to scale. With the stakes higher than ever, what can we learn from companies that are successfully scaling AI, achieving nearly 3X the return on investments and an average 32% premium on key financial valuation metrics?
To answer that question, Accenture conducted a landmark global study involving 1,500 C-suite executives from organizations across 16 industries. The aim: Help companies progress on their AI journey, from one-off AI experimentation to gaining a robust organization-wide capability that acts as a source of competitive agility and growth.
Read the full report:
http://www.accenture.com/AI-Built-to-Scale-Slideshare
In this session, you'll get all the answers about how ChatGPT and other GPT-X models can be applied to your current or future project. First, we'll put in order all the terms – OpenAI, GPT-3, ChatGPT, Codex, DALL-E, etc. – and explain why Microsoft and Azure are often mentioned in this context. Then, we'll go through the main capabilities of Azure OpenAI and the respective use cases that might inspire you to either optimize your product or build a completely new one.
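As one hedged starting point for working with Azure OpenAI from code, the sketch below sends a chat completion request with the openai Python SDK (version 1.x). The endpoint, API version and deployment name are placeholders for your own Azure resource and are not taken from the session.

```python
# Minimal sketch of calling an Azure OpenAI chat deployment with the
# openai Python SDK (>= 1.0). Endpoint, key and deployment name are
# placeholders for your own Azure resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # example API version
)

response = client.chat.completions.create(
    model="YOUR-GPT-DEPLOYMENT",  # your deployment name, not the model family
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize our Q3 support tickets."},
    ],
)
print(response.choices[0].message.content)
```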
AUGMENTING CREATIVITY USING GEN AI FOR DESIGN & INNOVATION | TOJIN T. EAPEN (Tojin Eapen, PhD)
Presentation slides from my September 2023 guest lecture on Generative AI and its impact on creativity. The lecture also highlights the key themes of my recent July/August 2023 Harvard Business Review (HBR) cover article, exploring the potential of Generative AI to enhance human creativity. Additionally, the presentation engages in a discussion regarding the emerging opportunities and challenges within this domain.
Generative AI (GAI) refers to a type of artificial intelligence that is able to generate new data or content, such as text, images, or music. This is typically done by training a model on a large dataset of existing data, and then using the model to generate new, similar data.
- Promote Divergent Thinking
- Challenge Expertise Bias
- Assist in Idea Evaluation
- Support Idea Refinement
- Facilitate Collaboration
https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity
One of the biggest opportunities generative AI offers to businesses and governments is to augment human creativity and overcome the challenges of democratizing innovation.
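To ground the definition of generative AI above, here is a minimal text-generation sketch: a model pretrained on a large text corpus produces new, similar text from a prompt. The Hugging Face transformers library and GPT-2 are assumptions chosen purely for illustration; the presentation does not prescribe a toolchain.

```python
# Minimal sketch of generative AI in the text domain: a model pretrained
# on a large corpus produces new, similar content from a prompt.
# transformers/GPT-2 are assumed here purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
ideas = generator(
    "Three unconventional product ideas for a coffee shop:",
    max_new_tokens=60,
    num_return_sequences=1,
)
print(ideas[0]["generated_text"])
```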
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/07/accelerate-tomorrows-models-with-lattice-fpgas-a-presentation-from-lattice-semiconductor/
Hussein Osman, Segment Marketing Director at Lattice Semiconductor, presents the “Accelerate Tomorrow’s Models with Lattice FPGAs” tutorial at the May 2022 Embedded Vision Summit.
Deep learning models are advancing at a dizzying pace, creating difficult dilemmas for system developers. When you begin developing an edge AI system, you select the best available model for your needs. But by the time you’re ready to deploy your product, your original model is obsolete. You’d like to upgrade your model, but your neural network accelerator was designed with previous-generation models in mind and struggles to deliver top performance and efficiency on state-of-the-art models. The solution is hardware that adapts to the needs of whatever algorithms you choose.
Hardware programmability enables Lattice FPGAs to support the latest models and techniques with astounding efficiency, typically consuming less than 200 mW when running visual AI workloads at 30+ frames per second. In this talk, Osman shows how Lattice FPGAs, coupled with our production-proven sensAI solution stack, are being used to quickly develop super-efficient AI implementations that enable groundbreaking features in smart edge devices.
The number of internet-connected devices is growing exponentially, enabling an increasing number of edge applications in environments such as smart cities, retail, and Industry 4.0. These intelligent solutions often require processing large amounts of data, running models to enable image recognition, predictive analytics, autonomous systems, and more. Increasing system workloads and data processing capacity at the edge is essential to minimize latency, improve responsiveness, and reduce network traffic back to data centers. Purpose-built systems such as Supermicro’s short-depth, multi-node SuperEdge, powered by 3rd Gen Intel® Xeon® Scalable processors, increase compute and I/O density at the edge and enable businesses to further accelerate innovation.
Join this webinar to discover new insights in edge-to-cloud infrastructures and learn how Supermicro SuperEdge multi-node solutions leverage data center scale, performance, and efficiency for 5G, IoT, and Edge applications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/07/5g-and-ai-transforming-the-next-generation-of-robotics-a-presentation-from-qualcomm/
Kishore Chakravadhanula, Staff Product Manager at Qualcomm, presents the “5G and AI Transforming the Next Generation of Robotics” tutorial at the May 2021 Embedded Vision Summit.
Bringing together the transformative power of 5G and AI technologies is essential to driving the next generation of high-compute, low-power robots and drones for consumer, enterprise and industrial sectors. In this session, Chakravadhanula discusses how scaling 5G and AI will help solve a diverse set of robotics challenges—from enabling high-accuracy AI inferencing and superior power efficiency to enhanced security and connectivity.
Chakravadhanula explains why these advances are key to enabling the robotics ecosystem and accelerating growth in segments from automated guided vehicles, autonomous mobile robots, delivery robots and drones to inventory, industrial, and collaborative robots. Additionally, he highlights recent use cases, including how Qualcomm’s AI and computer vision technologies are enabling autonomous flight on Mars and helping home vacuum robots map rooms and avoid obstacles.
Qualcomm is an at-scale company. It powered the smartphone revolution and connected billions of people. It pioneered 3G and 4G, and now it is leading the way to 5G and a new era of intelligent, connected devices. Mobile is going to be the largest machine learning platform on the planet. Come learn how Qualcomm is making efficient on-device machine learning possible, how Qualcomm and Facebook worked closely to support machine learning in Facebook applications, and what’s next for Qualcomm and AI.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/06/autonomous-driving-ai-workloads-technology-trends-and-optimization-strategies-a-presentation-from-qualcomm/
Ahmed Sadek, Senior Director of Engineering at Qualcomm, presents the “Autonomous Driving AI Workloads: Technology Trends and Optimization Strategies” tutorial at the May 2022 Embedded Vision Summit.
Enabling safe, comfortable and affordable autonomous driving requires solving some of the most demanding and challenging technological problems. From centimeter-level localization to multimodal sensor perception, sensor fusion, behavior prediction, maneuver planning and trajectory planning and control, each one of these functions introduces its own unique challenges that must be solved, verified, tested and deployed on the road.
In this talk, Sadek reviews recent trends in AI workloads for autonomous driving as well as promising future directions. He covers AI workloads in camera, radar and lidar perception, AI workloads in environmental modeling, behavior prediction and drive policy. To enable optimized network performance at the edge, quantization and neural architecture optimization are typically performed either during training or post-training. Sadek also covers the importance of hardware-aware quantization and network architecture optimization, and introduces the innovation done by Qualcomm in these areas.
Application developers are key to the success of an edge compute strategy. They are the backbone of any digital ecosystem, and their requirements drive the platform architecture. Edge computing is no different. In this talk, we will focus on some key requirements, challenges and possible solutions for a developer-centric architecture for multi-access edge computing, including abstraction of the service provider’s network complexity, low-footprint cloud-native builder models, micro-services, hardware abstractions, intelligence layers and massive monitoring of application instances.
About the speaker: Shamik Mishra is currently Assistant Vice President (AVP), Technology and Innovation at Aricent. He is a practice leader for new product architectures. He has extensive experience and contributions in software development in cloud, wireless technologies, edge computing and platform software. His research interests are Network Function Virtualization (NFV), Cloud and edge computing and Machine Learning (ML). He has spoken in several conferences and his work is regularly covered in the media. Shamik has a bachelor’s and a master’s degree from Indian Institute of Technology (IIT) Kharagpur, India.
Edge-optimized architecture for fabric defect detection in real-time (Shuquan Huang)
In the textile industry, fabric defect detection has traditionally relied on human inspection, which is inaccurate, inconsistent, inefficient and expensive. Automated systems have been developed to detect defects by identifying faults in the fabric surface using image and video processing techniques. However, existing solutions fall short in defect data sharing, backhaul interconnect, maintenance and more. By evolving to an edge-optimized architecture, we can help the textile industry improve fabric quality, reduce operating costs and increase production efficiency. In this session, I’ll share:
- What edge computing is and why it is important to intelligent manufacturing
- The characteristics, strengths and weaknesses of traditional fabric defect detection methods
- Why the textile industry can benefit from edge computing infrastructure
- How to design and implement an edge-enabled application for real-time fabric defect detection (a minimal sketch follows this list)
- Insights, synergy and future research directions
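As a minimal, hedged sketch of the traditional image-processing approach referred to above (not the production pipeline from the session), the snippet below flags candidate defects by thresholding deviations from the locally smoothed fabric texture with OpenCV. The file path, threshold and kernel size are illustrative assumptions.

```python
# Minimal sketch of classical fabric defect detection with OpenCV:
# highlight regions that deviate from the locally smoothed background.
# Path, thresholds and kernel sizes are illustrative assumptions.
import cv2

image = cv2.imread("fabric_sample.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Estimate the background texture with a heavy blur, then look at deviations.
background = cv2.GaussianBlur(image, (51, 51), 0)
deviation = cv2.absdiff(image, background)

# Threshold the deviation map and extract candidate defect regions.
_, mask = cv2.threshold(deviation, 25, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

defects = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
print(f"{len(defects)} candidate defect regions found")
```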
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/07/software-defined-cameras-for-edge-computing-of-the-future-a-presentation-from-arm/
Parag Beeraka, Head of the Smart Camera and Vision Business at Arm, presents the “Software-Defined Cameras for Edge Computing of the Future,” tutorial at the May 2021 Embedded Vision Summit.
Computer-vision-enabled cameras have demonstrated the potential to bring compelling functionality to numerous applications. But to realize the full potential and assist with the growth of AI-enabled cameras, it’s necessary to drastically simplify the work of developing, deploying and maintaining these cameras. Hardware and firmware standards are key to accomplishing this. In this talk, Beeraka introduces Arm’s vision for a set of hardware and software standards addressing four key elements of software-defined cameras: security, machine learning, cloud enablement and software portability.
For example, smart camera machine learning workloads can run on a variety of processing engines. Common frameworks are needed so that these workloads can be seamlessly and efficiently mapped onto the available processing engines without requiring that the camera developer delve into processing engine details. Similarly, most smart cameras are starting to rely on cloud services for storage as well as model and software updates. Standardizing interfaces to these key elements will give smart camera developers the ability to quickly integrate the cloud services best matched to their needs, without having to master the details of those elements.
IoT and the Oil & Gas industry at M2M Oil & Gas 2014 in London (Eurotech)
How the Internet of Things is catching up with the Oil & Gas industry.
How Eurotech's IoT architecture had its roots in the oil & gas industry, and why it is still relevant today.
AWS re:Invent 2016: Powering the Next Generation of Virtual Reality with Veri... (Amazon Web Services)
In six months, Verizon has built a best-in-class Augmented Reality and Virtual Reality (AR/VR) platform that streams HD video and game experiences using Amazon EC2 GPU-accelerated instances and CloudFront. Verizon will share their reference architecture and configuration best practices that enabled them to develop a massively scalable VR architecture supporting 100K simultaneous HD video streams to customers around the globe.
Vertex Perspectives | AI-optimized Chipsets | Part I (Vertex Holdings)
Businesses are increasingly adopting AI to create new applications and transform existing operations, as the growth of IoT and 5G networks drives big data and increases process complexity for human operators. In this new environment, AI will be needed to write algorithms dynamically and automate the entire programming process. Fortunately, deep learning algorithms, unlike most other machine learning algorithms, achieve better performance as data grows. To date, deep learning technology has primarily been a software play. Existing processors were not originally designed for these new applications, hence the need to develop AI-optimized hardware.
Similar to “The Future of AI is Here Today: Deep Dive into Qualcomm’s On-Device AI Offerings,” a Presentation from Qualcomm
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/deploying-large-models-on-the-edge-success-stories-and-challenges-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal models on the edge for a variety of use cases in consumer and enterprise markets. He examines key challenges that must be overcome before large models at the edge can reach their full commercial potential. He also highlights how Qualcomm is addressing these challenges through upgraded processor hardware, improved developer tools and a comprehensive library of fully optimized AI models in the Qualcomm AI Hub.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/scaling-vision-based-edge-ai-solutions-from-prototype-to-global-deployment-a-presentation-from-network-optix/
Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at the University of Eindhoven, presents the “Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment” tutorial at the May 2024 Embedded Vision Summit.
The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers them to bring computer vision and perceptual AI into reliable and scalable products. However, integrating recent hardware, software and algorithm innovations into prime-time-ready products is quite challenging. Scaling from a proof of concept—for example, a novel neural network architecture performing a valuable task efficiently on a new piece of silicon—to an AI vision system installed in hundreds of sites requires surmounting myriad hurdles.
First, building on Network Optix’s 14 years of experience, Professor Kaptein details how to overcome the networking, fleet management, visualization and monetization challenges that come with scaling a global vision solution. Second, Kaptein discusses the complexities of making vision AI solutions device-agnostic and remotely manageable, proposing an open standard for AI model deployment to edge devices. The proposed standard aims to simplify market entry for silicon manufacturers and enhance scalability for solution developers. Kaptein outlines the standard’s core components and invites collaborative contributions to drive market expansion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/whats-next-in-on-device-generative-ai-a-presentation-from-qualcomm/
Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit.
The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to allow machines to understand using multiple types of sensors. This new wave of approaches is poised to revolutionize user experiences, disrupt industries and enable powerful new capabilities. For generative AI to reach its full potential, however, we must deploy it on edge devices, providing improved latency, pervasive interaction and enhanced privacy.
In this talk, Hou shares Qualcomm’s vision of the compelling opportunities enabled by efficient generative AI at the edge. He also identifies the key challenges that the industry must overcome to realize the massive potential of these technologies. And he highlights research and product development work that Qualcomm is doing to lead the way via an end-to-end system approach—including techniques for efficient on-device execution of LLMs, LVMs and LMMs, methods for orchestration of large models at the edge and approaches for adaptation and personalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/learning-compact-dnn-models-for-embedded-vision-a-presentation-from-the-university-of-maryland-at-college-park/
Shuvra Bhattacharyya, Professor at the University of Maryland at College Park, presents the “Learning Compact DNN Models for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit.
In this talk, Bhattacharyya explores methods to transform large deep neural network (DNN) models into effective compact models. The transformation process that he focuses on—from large to compact DNN form—is referred to as pruning. Pruning involves the removal of neurons or parameters from a neural network. When performed strategically, pruning can lead to significant reductions in computational complexity without significant degradation in accuracy. It is sometimes even possible to increase accuracy through pruning.
Pruning provides a general approach for facilitating real-time inference in resource-constrained embedded computer vision systems. Bhattacharyya provides an overview of important aspects to consider when applying or developing a DNN pruning method and presents details on a recently introduced pruning method called NeuroGRS. NeuroGRS considers structures and trained weights jointly throughout the pruning process and can result in significantly more compact models compared to other pruning methods.
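For readers who want to experiment with the basic idea, the sketch below applies ordinary magnitude-based pruning with PyTorch's pruning utilities. This is generic L1 unstructured pruning, not the NeuroGRS method described in the talk.

```python
# Minimal sketch of magnitude-based pruning with PyTorch utilities.
# This is generic L1 unstructured pruning, not NeuroGRS.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)

# Zero out the 30% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.3)

sparsity = (layer.weight == 0).float().mean().item()
print(f"Layer sparsity after pruning: {sparsity:.0%}")

# Make the pruning permanent (remove the reparameterization mask).
prune.remove(layer, "weight")
```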
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/introduction-to-computer-vision-with-cnns-a-presentation-from-mohammad-haghighat/
Independent consultant Mohammad Haghighat presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques and then transitions to explaining the basics of machine learning and convolutional neural networks (CNNs), showing how CNNs are used in visual perception.
Haghighat illustrates the building blocks and computational elements of neural networks through examples. This session provides an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
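As a companion to this overview, here is a minimal CNN classifier in PyTorch showing the building blocks discussed (convolution, nonlinearity, pooling and a fully connected classification head); the layer sizes are arbitrary illustrations, not taken from the presentation.

```python
# Minimal CNN showing the standard building blocks: convolution,
# nonlinearity, pooling and a fully connected classifier head.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)          # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```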
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/selecting-tools-for-developing-monitoring-and-maintaining-ml-models-a-presentation-from-yummly/
Parshad Patel, Data Scientist at Yummly, presents the “Selecting Tools for Developing, Monitoring and Maintaining ML Models” tutorial at the May 2023 Embedded Vision Summit.
With the boom in tools for developing, monitoring and maintaining ML models, data science teams have many options to choose from. Proprietary tools provided by cloud service providers are enticing, but teams may fear being locked in—and may worry that these tools are too costly or missing important features when compared with alternatives from specialized providers.
Fortunately, most proprietary, fee-based tools have an open-source component that can be integrated into a home-grown solution at low cost. This can be a good starting point, enabling teams to get started with modern tools without making big investments and leaving the door open to evolve tool selection over time. In this talk, Patel presents a step-by-step process for creating an MLOps tool set that enables you to deliver maximum value as a data scientist. He shares how Yummly built pipelines for model development and put them into production using open-source projects.
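The talk does not prescribe specific tools, but as one hedged example of the open-source route, MLflow's tracking API can log parameters, metrics and model artifacts from a training run:

```python
# Minimal sketch of experiment tracking with MLflow, one common
# open-source building block for an MLOps stack (assumed example;
# the talk does not mandate a specific tool).
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # store the artifact for deployment
```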
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/building-accelerated-gstreamer-applications-for-video-and-audio-ai-a-presentation-from-wave-spectrum/
Abdo Babukr, Accelerated Computing Consultant at Wave Spectrum, presents the “Building Accelerated GStreamer Applications for Video and Audio AI,” tutorial at the May 2023 Embedded Vision Summit.
GStreamer is a popular open-source framework for creating streaming media applications. Developers often use GStreamer to streamline the development of computer vision and audio perception applications. Since perceptual algorithms are often quite demanding in terms of processing performance, in many cases developers need to find ways to accelerate key GStreamer building blocks, taking advantage of specialized features of their target processor or co-processor.
In this talk, Babukr introduces GStreamer and shows how to use it to build computer vision and audio perception applications. He also shows how to create efficient, high-performance GStreamer applications that utilize specialized hardware features.
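As a minimal starting point (not code from the talk), the sketch below builds and runs a simple GStreamer pipeline from Python via PyGObject; a real vision application would replace the test source and sink with camera capture, inference and encoding elements.

```python
# Minimal GStreamer pipeline driven from Python via PyGObject.
# A perception app would swap in camera, inference and encoder elements.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Test video source -> colorspace conversion -> on-screen sink.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=100 ! videoconvert ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error is posted on the bus.
bus = pipeline.get_bus()
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
pipeline.set_state(Gst.State.NULL)
```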
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/understanding-selecting-and-optimizing-object-detectors-for-edge-applications-a-presentation-from-walmart-global-tech/
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit.
Object detectors count objects in a scene and determine their precise locations, while also labeling them. Object detection plays a crucial role in many vision applications, from autonomous driving to smart appliances. In many of these applications, it’s necessary or desirable to implement object detection at the edge.
In this presentation, Laskar explores the evolution of object detection algorithms, from traditional approaches to deep learning-based methods and transformer-based architectures. He delves into widely used approaches for object detection, such as two-stage R-CNNs and one-stage YOLO algorithms, and examines their strengths and weaknesses. And he provides guidance on how to evaluate and select an object detector for an edge application.
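As a framework-level illustration (the presentation itself is framework-agnostic), the sketch below runs a pretrained two-stage detector from torchvision on a single image; an edge deployment would typically add export, quantization and a lighter backbone.

```python
# Minimal sketch: run a pretrained Faster R-CNN (two-stage detector)
# from torchvision on one image. Edge deployment would usually add
# export, quantization and a lighter backbone.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)  # placeholder image tensor in [0, 1]
with torch.no_grad():
    detections = model([image])[0]

# Keep confident detections only.
keep = detections["scores"] > 0.5
print(detections["boxes"][keep], detections["labels"][keep])
```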
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/introduction-to-modern-lidar-for-machine-perception-a-presentation-from-the-university-of-ottawa/
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern LiDAR for Machine Perception” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Laganière provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work and their main advantages and disadvantages. He also introduces different approaches to LiDAR, including scanning and flash LiDAR.
Laganière explores the types of data produced by LiDAR sensors and explains how this data can be processed using deep neural networks. He also examines the synergy between LiDAR and cameras, and the concept of pseudo-LiDAR for detection.
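To make the data format concrete, the snippet below voxelizes a LiDAR-style point cloud into an occupancy grid with NumPy, a common preprocessing step before feeding points to a detection network; the point cloud is synthetic and the grid resolution is an arbitrary assumption.

```python
# Minimal sketch of voxelizing a LiDAR point cloud (N x 3 array of
# x, y, z in meters) into an occupancy grid, a common preprocessing
# step for LiDAR detection networks. The data here is synthetic.
import numpy as np

points = np.random.uniform(low=[-40, -40, -2], high=[40, 40, 4], size=(10000, 3))

voxel_size = 0.5  # meters per voxel (illustrative choice)
origin = points.min(axis=0)
indices = np.floor((points - origin) / voxel_size).astype(int)

grid_shape = indices.max(axis=0) + 1
occupancy = np.zeros(grid_shape, dtype=bool)
occupancy[tuple(indices.T)] = True

print(f"{occupancy.sum()} occupied voxels out of {occupancy.size}")
```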
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/vision-language-representations-for-robotics-a-presentation-from-the-university-of-pennsylvania/
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit.
In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing challenge for computer scientists and robotics engineers. In this presentation, Jayaraman provides insights into cutting-edge techniques being used to help robots better understand their surroundings, learn new skills with minimal guidance and become more capable of performing complex tasks.
Jayaraman discusses recent advances in unsupervised representation learning and explains how these approaches can be used to build visual representations that are appropriate for a controller that decides how the robot should act. In particular, he presents insights from his research group’s recent work on how to represent the constituent objects and entities in a visual scene, and how to combine vision and language in a way that permits effectively translating language-based task descriptions into images depicting the robot’s goals.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/adas-and-av-sensors-whats-winning-and-why-a-presentation-from-techinsights/
Ian Riches, Vice President of the Global Automotive Practice at TechInsights, presents the “ADAS and AV Sensors: What’s Winning and Why?” tutorial at the May 2023 Embedded Vision Summit.
It’s clear that the number of sensors per vehicle—and the sophistication of these sensors—is growing rapidly, largely thanks to increased adoption of advanced safety and driver assistance features. In this presentation, Riches explores likely future demand for automotive radars, cameras and LiDARs.
Riches examines which vehicle features will drive demand out to 2030, how vehicle architecture change is impacting the market and what sorts of compute platforms these sensors will be connected to. Finally, he shares his firm’s vision of what the landscape could look like far beyond 2030, considering scenarios out to 2050 for automated driving and the resulting sensor demand.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/computer-vision-in-sports-scalable-solutions-for-downmarkets-a-presentation-from-sportlogiq/
Mehrsan Javan, Co-founder and CTO of Sportlogiq, presents the “Computer Vision in Sports: Scalable Solutions for Downmarket Leagues” tutorial at the May 2023 Embedded Vision Summit.
Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this requires a fully automated, robust end-to-end pipeline, spanning from visual input, to player and group activities, to player and team evaluation to planning. Despite major advancements in computer vision and machine learning, today sports analytics solutions are limited to top leagues and are not widely available for downmarket leagues and youth sports.
In this talk, Javan explains how his company has developed scalable and robust computer vision solutions to democratize sports analytics and offer pro-league-level insights to leagues with modest resources, including youth leagues. He highlights key challenges—such as the requirement for low-cost, low-latency processing and the need for robustness despite variations in venues. He discusses the approaches Sportlogiq tried and how it ultimately overcame these challenges, including the use of transformers and the fusion of multiple types of data streams to maximize accuracy.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/detecting-data-drift-in-image-classification-neural-networks-a-presentation-from-southern-illinois-university/
Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Detecting Data Drift in Image Classification Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
An unforeseen change in the input data is called “drift,” and may impact the accuracy of machine learning models. In this talk, Tragoudas presents a novel scheme for diagnosing data drift in the input streams of image classification neural networks. His proposed method for drift detection and quantification uses a threshold dictionary for the prediction probabilities of each class in the neural network model.
The method is applicable to any drift type in images such as noise and weather effects, among others. Tragoudas shares experimental results on various data sets, drift types and neural network models to show that his proposed method estimates the drift magnitude with high accuracy, especially when the level of drift significantly impacts the model’s performance.
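The following is a simplified sketch of the general idea rather than the exact algorithm from the talk: per-class thresholds on prediction probabilities are estimated on clean data, and the fraction of incoming predictions falling below their class threshold serves as a drift signal.

```python
# Simplified sketch of probability-threshold drift detection: estimate a
# per-class confidence threshold on clean data, then flag drift when many
# incoming predictions fall below the threshold of their predicted class.
# This illustrates the general idea only, not the talk's exact method.
import numpy as np

def class_thresholds(clean_probs, clean_labels, percentile=5):
    """Per-class threshold = a low percentile of correct-class confidence."""
    return {
        int(c): np.percentile(clean_probs[clean_labels == c, c], percentile)
        for c in np.unique(clean_labels)
    }

def drift_score(probs, thresholds):
    """Fraction of predictions whose confidence falls below its class threshold."""
    preds = probs.argmax(axis=1)
    conf = probs[np.arange(len(probs)), preds]
    below = [conf[i] < thresholds[int(preds[i])] for i in range(len(probs))]
    return float(np.mean(below))

# Synthetic example: confident clean predictions vs. a low-confidence drifted batch.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=600)
clean = np.full((600, 3), 0.05)
clean[np.arange(600), labels] = 0.90                 # peaked on the true class
drifted = rng.dirichlet([2.0, 2.0, 2.0], size=600)   # flat, low-confidence outputs

thresholds = class_thresholds(clean, labels)
print(drift_score(clean, thresholds))    # near 0.0: no drift
print(drift_score(drifted, thresholds))  # near 1.0: strong drift signal
```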
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/deep-neural-network-training-diagnosing-problems-and-implementing-solutions-a-presentation-from-sensor-cortek/
Fahed Hassanat, Chief Operating Officer and Head of Engineering at Sensor Cortek, presents the “Deep Neural Network Training: Diagnosing Problems and Implementing Solutions” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Hassanat delves into some of the most common problems that arise when training deep neural networks. He provides a brief overview of essential training metrics, including accuracy, precision, false positives, false negatives and F1 score.
Hassanat then explores training challenges that arise from problems with hyperparameters, inappropriately sized models, inadequate models, poor-quality datasets, imbalances within training datasets and mismatches between training and testing datasets. To help detect and diagnose training problems, he also covers techniques such as understanding performance curves, recognizing overfitting and underfitting, analyzing confusion matrices and identifying class interaction issues.
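For readers who want to connect these metrics to code, the short sketch below derives accuracy, per-class precision, recall and F1 from a confusion matrix; the function and variable names are illustrative and are not taken from the talk.

import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    """Confusion matrix (rows = true class, columns = predicted class)
    plus accuracy and per-class precision, recall and F1."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class c but actually another class
    fn = cm.sum(axis=1) - tp          # class c samples the model missed
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return cm, accuracy, precision, recall, f1

Inspecting the off-diagonal entries of the returned confusion matrix is also the starting point for the class-interaction analysis mentioned above.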
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/ai-start-ups-the-perils-of-fishing-for-whales-war-stories-from-the-entrepreneurial-front-lines-a-presentation-from-seechange-technologies/
Tim Hartley, Vice President of Product for SeeChange Technologies, presents the “AI Start-ups: The Perils of Fishing for Whales (War Stories from the Entrepreneurial Front Lines)” tutorial at the May 2023 Embedded Vision Summit.
You have a killer idea that will change the world. You’ve thought through product-market fit and differentiation. You have seed funding and a world-beating team. Best of all, you’ve caught the attention of major players in your industry. You’ve reached peak “start-up”—that point of limitless possibility—when you go to bed with the same level of energy and enthusiasm you had when you woke. And then the first proof of concept starts…
In this talk, Hartley lays out some of the pitfalls that await those building the next big thing. Using real examples, he shares some of the dos and don’ts, particularly when dealing with that big potential first customer. Hartley discusses the importance of end-to-end design, ensuring your product solves real-world problems. He explores how far the big companies will tell you to jump—and then jump again—for free. And, most importantly, how to build long-term partnerships with major corporations without relying on over-promising sales pitches.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/a-computer-vision-system-for-autonomous-satellite-maneuvering-a-presentation-from-scout-space/
Andrew Harris, Spacecraft Systems Engineer at SCOUT Space, presents the “Developing a Computer Vision System for Autonomous Satellite Maneuvering” tutorial at the May 2023 Embedded Vision Summit.
Computer vision systems for mobile autonomous machines experience a wide variety of real-world conditions and inputs that can be challenging to capture accurately in training datasets. Few autonomous systems experience more challenging conditions than those in orbit. In this talk, Harris describes how SCOUT Space has designed and trained satellite vision systems using dynamic and physically informed synthetic image datasets.
Harris details how his company generates synthetic data for this challenging environment and how it leverages new real-world data to improve its datasets. In particular, he explains how these synthetic datasets account for and can replicate real sources of noise and error in the orbital environment, and how his company supplements them with in-space data from the first SCOUT-Vision system, which has been in orbit since 2021.
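As a flavor of how sensor noise can be injected into rendered frames, the toy sketch below adds Poisson shot noise and Gaussian read noise to a synthetic image. It is purely illustrative and is not SCOUT Space's actual pipeline; the parameter values are arbitrary placeholders.

import numpy as np

def degrade_synthetic_frame(img, read_noise_std=0.01, shot_scale=50.0, seed=None):
    """Toy degradation of a rendered frame with values in [0, 1]:
    Poisson shot noise (photon counting) plus Gaussian read noise,
    two error sources also present in real space imagery."""
    rng = np.random.default_rng(seed)
    shot = rng.poisson(img * shot_scale) / shot_scale
    noisy = shot + rng.normal(0.0, read_noise_std, img.shape)
    return np.clip(noisy, 0.0, 1.0)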
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/bias-in-computer-vision-its-bigger-than-facial-recognition-a-presentation-from-santa-clara-university/
Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, presents the “Bias in Computer Vision—It’s Bigger Than Facial Recognition!” tutorial at the May 2023 Embedded Vision Summit.
As AI is increasingly integrated into various industries, concerns about its potential to reproduce or exacerbate bias have become widespread. While the use of AI holds the promise of reducing bias, it can also have unintended consequences, particularly in high-stakes computer vision applications such as facial recognition. However, even seemingly low-stakes computer vision applications such as identifying potholes and damaged roads can also present ethical challenges related to bias.
This talk explores how bias in computer vision often poses an ethical challenge, regardless of the stakes involved. Kennedy discusses the limitations of technical solutions aimed at mitigating bias, and why “bias-free” AI may not be achievable. Instead, she focuses on the importance of adopting a “bias-aware” approach to responsible AI design and explores strategies that can be employed to achieve this.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/sensor-fusion-techniques-for-accurate-perception-of-objects-in-the-environment-a-presentation-from-sanborn-map-company/
Baharak Soltanian, Vice President of Research and Development for the Sanborn Map Company, presents the “Sensor Fusion Techniques for Accurate Perception of Objects in the Environment” tutorial at the May 2023 Embedded Vision Summit.
Increasingly, perceptual AI is being used to enable devices and systems to obtain accurate estimates of object locations, speeds and trajectories. In demanding applications, this is often best done using a heterogeneous combination of sensors (e.g., vision, radar, LiDAR). In this talk, Soltanian introduces techniques for combining data from multiple sensors to obtain accurate information about objects in the environment.
Soltanian briefly introduces the roles played by Kalman filters, particle filters, Bayesian networks and neural networks in this type of fusion. She then examines alternative fusion architectures, such as centralized and decentralized approaches, to better understand the trade-offs associated with different approaches to sensor fusion as used to enhance the ability of machines to understand their environment.
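As a simple point of reference for the Kalman filtering Soltanian mentions, the sketch below implements a one-dimensional constant-velocity filter that can absorb position measurements from different sensors, each with its own noise variance. It is a generic textbook formulation, not the fusion architecture discussed in the talk.

import numpy as np

class ConstantVelocityKalman:
    """Minimal 1D constant-velocity Kalman filter; camera, radar or LiDAR
    range measurements can each call update() with their own variance."""
    def __init__(self, dt=0.1, process_var=1.0):
        self.x = np.zeros(2)                              # [position, velocity]
        self.P = np.eye(2) * 1e3                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])        # motion model
        self.Q = process_var * np.array([[dt**4 / 4, dt**3 / 2],
                                         [dt**3 / 2, dt**2]])
        self.H = np.array([[1.0, 0.0]])                   # we observe position only

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, meas_var):
        S = self.H @ self.P @ self.H.T + meas_var         # innovation covariance
        K = self.P @ self.H.T / S                         # Kalman gain
        self.x = self.x + (K * (z - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

A centralized fusion node would run predict() once per time step and then apply update() for every sensor measurement received in that step.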
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/updating-the-edge-ml-development-process-a-presentation-from-samsara/
Jim Steele, Vice President of Embedded Software at Samsara, presents the “Updating the Edge ML Development Process” tutorial at the May 2023 Embedded Vision Summit.
Samsara (NYSE:IOT) is focused on digitizing the world of operations. The company helps customers across many industries—including food and beverage, utilities and energy, field services and government—get information about their physical operations into the cloud, so they can operate more safely, efficiently and sustainably. Samsara’s sensors collect billions of data points per day and on-device processing is instrumental to its success. The company is constantly developing, improving and deploying ML models at the edge.
Samsara has found that the traditional development process—where ML scientists create models and hand them off to firmware engineers for embedded implementation—is slow and often produces difficult-to-resolve differences between the original model and the embedded implementation. In this talk, Steele presents an alternative development process that his company has adopted with good results. In this process, firmware engineers develop a general framework that ML scientists use to design, develop and deploy their models. This enables quick iterations and fewer confounding bugs.
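One way to picture the contract such a framework imposes is a fixed model interface that ML scientists implement and firmware simply drives. The Python sketch below is hypothetical and is not Samsara's actual framework; the class and method names are invented for illustration.

from abc import ABC, abstractmethod
import numpy as np

class EdgeModel(ABC):
    """Hypothetical firmware/ML contract: firmware calls these three methods
    in a fixed loop, while scientists are free to change what happens inside."""

    @abstractmethod
    def preprocess(self, frame: np.ndarray) -> np.ndarray: ...

    @abstractmethod
    def infer(self, inputs: np.ndarray) -> np.ndarray: ...

    @abstractmethod
    def postprocess(self, outputs: np.ndarray) -> dict: ...

def run_once(model: EdgeModel, frame: np.ndarray) -> dict:
    """Firmware-side driver, identical for every model that honors the contract."""
    return model.postprocess(model.infer(model.preprocess(frame)))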
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/combating-bias-in-production-computer-vision-systems-a-presentation-from-red-cell-partners/
Alex Thaman, Chief Architect at Red Cell Partners, presents the “Combating Bias in Production Computer Vision Systems” tutorial at the May 2023 Embedded Vision Summit.
Bias is a critical challenge in predictive and generative AI that involves images of humans. People have a variety of body shapes, skin tones and other features that can be challenging to represent completely in training data. Without attention to bias risks, ML systems have the potential to treat people unfairly, and even to make humans more likely to do so.
In this talk, Thaman examines the ways in which bias can arise in visual AI systems. He shares techniques for detecting bias and strategies for minimizing it in production AI systems.
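A very simple starting point on the detection side is comparing a metric across annotated subgroups; the snippet below, which is illustrative only and not Thaman's method, surfaces accuracy gaps between groups.

import numpy as np

def per_group_accuracy(y_true, y_pred, group_ids):
    """Accuracy broken down by an annotated attribute (e.g., skin tone bucket);
    large gaps between groups are one coarse signal of bias worth investigating."""
    y_true, y_pred, group_ids = map(np.asarray, (y_true, y_pred, group_ids))
    return {g: float((y_true[group_ids == g] == y_pred[group_ids == g]).mean())
            for g in np.unique(group_ids)}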
“The Future of AI is Here Today: Deep Dive into Qualcomm’s On-Device AI Offerings,” a Presentation from Qualcomm
1. The Future of AI is Here Today: Deep Dive into Qualcomm's On-Device AI Offering
Vinesh Sukumar, Sr. Director, Product Management (AI/ML), Qualcomm Technologies, Inc.
5. AI Applications: Across Various Segments
Expanding beyond modalities of computer vision to linguistics, communication, commerce and language understanding
Segments: Mobile | IoT | Compute | Cloud | Auto
AI Assisted Imaging
• AI 3A
• Scene-based camera selection
Image Understanding
• Face detection / tracking / features
• Object detection / tracking
• Body detection / tracking / pose
• Human segmentation
• Sky segmentation
• Multi-class segmentation
• Depth estimation
Beautify / Augment
• Scene-based image enhancement
Image Processing
• AI-based NR or image SR
• Scene-based camera selection
Audio
• Real-time language
• Natural language processing (NLP)
Modem
• Parameter optimization
• Robust sequence predictions
Robotics
• Autonomous navigation
• Obstacle avoidance
• Picking and sorting
Productivity
• Background-based noise cancellation on audio (inbound and outbound)
• Segmentation / blur / super resolution on video
• Voice activation without keywords
• Face tracking
• Smart photo categorization
Data Centers
• Natural language processing
• Computer vision
• Recommendation systems
IVI
• Occupancy monitoring system (OMS)
• Driver monitoring system (DMS)
• Surround perception
• Audio command and control
Retail
• Visitor / face / gesture recognition
• Object / people detection and counting
• Barcode decoding
• Empty shelf detection
• Dwell time
Privacy & Security
• Automatic screen unlock and login
• Privacy alert
• Guard mode
ADAS (up to L4)
• Highway driving assist
• Front collision warning
• Lane departure
• Traffic jam assist
• Auto lane change
• Auto lane merge
• Traffic light recognition
• Construction zones
• Urban autonomous driving
• Parking assist
• Person detection
• Perception
• Valet parking
• Driver monitoring
Transportation
• License plate recognition
• Face and facial landmark detection
• Drowsiness detection
Content Creation & Gaming
• Gaming with gesture control
• Gaming with voice commands
• Intelligent highlight videos
• Game play improvement
Edge Compute
• Theft detection
• Face / body / license plate detection and recognition
• Image classification and segmentation
Smart Devices
• Object / people detection
• Speaker detection
• Gunshot detection
Smart Buildings
• People tracking
• Access control
Performance & Efficiency
• Power and screen optimization
Manufacturing / Logistics
• Predictive maintenance
• Energy management with asset demand
Innovation themes across segments:
• Innovation centered around supporting high accuracy, latency and heterogeneity
• Innovation centered around energy efficiency and personalization
• Innovation centered around making AI relevant in the PC
Workload characteristics: battery life | latency focused | throughput heavy | concurrently enabled
(A few examples)
18. AIMET: Quantize and Compress AI Models
Flow: trained AI model (TensorFlow or PyTorch) → AI Model Efficiency Toolkit (AIMET) for compression and quantization → optimized AI model → deployed AI model
Why
• State-of-the-art network compression and quantization tools for various DL architectures (CNNs, BERT, GANs, ...)
• Automated way of reducing the precision of weights and activations while maintaining accuracy
Results
• <0.5% drop in accuracy across various DL models, using QAT and PTQ techniques from AIMET
• [Figure: original image, FP16 segmentation map and quantized segmentation map compared]
STEP: 2
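To illustrate the post-training quantization (PTQ) flow the slide refers to without assuming AIMET's exact API, here is a stand-in sketch using stock PyTorch eager-mode static quantization; AIMET provides its own PTQ and QAT interfaces, and the toy model and random calibration data below are placeholders.

import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()       # fp32 -> int8 entry point
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()   # int8 -> fp32 exit point

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = SmallNet().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)              # insert calibration observers
for _ in range(8):                                        # calibration passes (placeholder data)
    prepared(torch.randn(1, 3, 32, 32))
quantized = torch.quantization.convert(prepared)          # int8 weights and activations

The calibrate-then-convert pattern shown here corresponds to the "reduce precision while maintaining accuracy" step in the slide's flow.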
19. Run Time: Qualcomm AI Software Stack for Performance and Scalability Support (Application Deployment)
STEP: 3
• Training in your NN framework, with models exported as .onnx or .pb files
• Runtime frameworks: Qualcomm® Neural Processing SDK, ONNX RT, PyTorch Mobile, TF-Lite, NNAPI, TF-Lite Micro
• Low-level libraries: Qualcomm Neural Network Library, QML, NEON kernels, eAI, OpenCL
• Compute targets: general-purpose compute engines (CPU, GPU) and AI engines (Hexagon processor, Qualcomm Sensing Hub)
• Tools for performance and scalability: profiler, debugger, visualizer, compilers
• Goal: on-device execution/inference
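As a generic example of the "runtime framework" step, the snippet below runs an exported .onnx model with ONNX Runtime on the CPU. The file name and input shape are placeholders; a deployment targeting the Qualcomm AI Engine would instead use the Qualcomm Neural Processing SDK listed above.

import numpy as np
import onnxruntime as ort

# Placeholder model path; substitute your exported network.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example NCHW input
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])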