The document summarizes dimensional measurement methodologies and their applications. It categorizes common methods as either tactile or non-tactile, and describes examples of each including coordinate measuring machines, interferometry, laser scanning, and photogrammetry. Applications discussed include reverse engineering, quality assurance, medical, automotive, and user interfaces. The market for 3D metrology is projected to reach $10.9 billion by 2022. The document also discusses trends in the field and a vision for the future including more compact, mobile, and cloud-based solutions enabled by advances in components, processing, and artificial intelligence.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-gallagher
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Paul Gallagher, Senior Director of Technology and Product Planning for LG, presents the "Coming Shift from Image Sensors to Image Sensing" tutorial at the May 2017 Embedded Vision Summit.
The image sensor space is entering the fourth disruption in its evolution. The first three disruptions primarily focused on taking “pretty pictures” for human consumption, evaluation, and storage. The coming disruption will be driven by machine vision moving into the mainstream. Smart homes, offices, cars, devices – as well as AR/MR, biometrics and crowd monitoring – all need to run image data through a processor to activate responses without human viewing. The opportunity this presents is massive, but as the growth efficiencies come into play the solutions will become specialized.
This talk highlights the opportunities that the emerging shift to image-based sensing will bring throughout the imaging and vision industry. It explores the ingredients that industry participants will need in order to capitalize on these opportunities, and why the entrenched players may not be at as great an advantage as might be expected.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-jain
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Divya Jain, Technical Director at Tyco Innovation, presents the "End to End Fire Detection Deep Neural Network Platform" tutorial at the May 2017 Embedded Vision Summit.
This presentation dives deep into a real-world problem of fire detection to see what it takes to build a complete solution using CNNs. Fire is especially challenging because it doesn’t have a fixed shape or size like other objects. The presentation begins with a discussion of the technology stack, moves on to the algorithm, and concludes with a review of the end-to-end architecture. Jain discusses the challenges her company encountered while training this algorithm and how they worked through them by building a scalable and reusable platform.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-zeller
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Sadie Zeller, Manager of Global Product Management and the Clinical Vertical Market at Microscan Systems, presents the "Another Set of Eyes: Machine Vision Automation Solutions for In Vitro Diagnostics" tutorial at the May 2017 Embedded Vision Summit.
In vitro diagnostics (IVD) are tests that can detect diseases, conditions, or infections. The use of automation, including machine vision inspection, in IVD has increased steadily, and is now a standard practice. Vision-based laboratory automation enables greater throughput efficiency and minimizes the risk of human error. But IVD is a challenging application: the healthcare industry requires systems that are, at a minimum, fail-safe, and ideally, error-proof.
Machine vision systems for IVD (and related life sciences) therefore require a robust development phase including an iterative design-validate process to ensure that the system is safe for use. This presentation addresses some of the key requirements and constraints of healthcare vision applications, and highlights approaches for application design and testing to meet tough industry demands.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-leontiev
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Anton Leontiev, Embedded Software Architect at ELVEES, JSC, presents the "Designing a Stereo IP Camera From Scratch" tutorial at the May 2017 Embedded Vision Summit.
As the number of cameras in an intelligent video surveillance system increases, server processing of the video quickly becomes a bottleneck. On the other hand, when computer vision algorithms are moved to a resource-limited camera platform, their output quality is often unsatisfactory.
The effectiveness of vision algorithms for surveillance can be greatly improved by using a depth map in addition to the regular image. Thus, using a stereo camera is a way to enable offloading of advanced algorithms from servers to IP cameras. This talk covers the main problems arising during the design of an embedded stereo IP camera, including capturing video streams from two sensors, frame synchronization between sensors, stereo calibration algorithms, and, finally, disparity map calculation.
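As a rough illustration of the last step mentioned above (disparity map calculation), the sketch below uses OpenCV's StereoSGBM matcher on an already-rectified stereo pair. It is not the ELVEES implementation; the file names and matcher parameters are placeholder values.

```python
# Minimal disparity-map sketch for a rectified stereo pair (illustrative only;
# not the ELVEES implementation). Assumes OpenCV is available and that the two
# input images are already rectified and row-aligned.
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# StereoSGBM parameters chosen for a modest embedded budget; tune per sensor.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # must be divisible by 16
    blockSize=7,
    P1=8 * 7 * 7,
    P2=32 * 7 * 7,
    uniquenessRatio=10,
    speckleWindowSize=50,
    speckleRange=2,
)

# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("disparity.png", vis)
```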
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/euresys/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Jean-Michel Wintgens, Vice President of Engineering at Euresys, presents the "Developing Real-time Video Applications with CoaXPress" tutorial at the May 2017 Embedded Vision Summit.
CoaXPress is a modern, high-performance video transport interface. Using a standard coaxial cable, it provides a point-to-point connection that is reliable, scalable and versatile. Wintgens shows, using real application cases and comparisons with other standards, that CoaXPress is the best choice for real-time embedded applications that require individual camera control with accurate timing and synchronization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/09/alternative-image-sensors-for-intelligent-in-cabin-monitoring-home-security-and-smart-devices-a-presentation-from-xperi/
Petronel Bigioi, CTO for Product Licensing at Xperi, presents the “Alternative Image Sensors for Intelligent In-Cabin Monitoring, Home Security and Smart Devices” tutorial at the May 2021 Embedded Vision Summit.
The traditional approach for in-cabin monitoring uses cameras that capture only visible or near-infrared (NIR) light and are designed to represent a scene, at a constant frame rate, as closely as possible to what a human expects to see. But visible or NIR light represents only a small fraction of the information available to us, and frames gather both wanted and unwanted information without regard to changes in the scene, wasting computation and missing important temporal details.
Alternative sensing paradigms such as event cameras and thermal cameras can be used to overcome some of these limits and enable features that would not be possible with a conventional camera. This presentation details the use of alternative image sensors for enabling new features and capabilities for in-cabin monitoring, home surveillance and smart cameras. Improved energy efficiency, better results in low light conditions and new safety features are some of the key benefits of these alternative sensing methods.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/09/when-2d-is-not-enough-an-overview-of-optical-depth-sensing-technologies-a-presentation-from-ambarella/
Dinesh Balasubramaniam, Senior Product Marketing Manager at Ambarella, presents the “When 2D Is Not Enough: An Overview of Optical Depth Sensing Technologies” tutorial at the May 2021 Embedded Vision Summit.
Camera systems used for computer vision at the edge are smarter than ever, but when they perceive the world in 2D, they remain limited for many applications because they lack information about the third dimension: depth. Sensing technologies that capture and integrate depth allow us to build smarter and safer systems across a wide variety of applications, including robotics, surveillance, AR/VR and gesture detection.
In this presentation, Balasubramaniam examines three common technologies used for optical depth sensing: stereo camera systems, time-of-flight (ToF) sensors and structured light systems. He reviews the core ideas behind each technology, compares and contrasts them, and identifies the tradeoffs to consider when selecting a depth sensing technology for your application, focusing on accuracy, sensing range, performance under difficult lighting conditions, optical hardware requirements and more.
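For reference, stereo systems (one of the three technologies compared in the talk) recover depth from disparity via Z = f * B / d. A minimal sketch of that relationship follows; the focal length, baseline and disparity values are invented for illustration.

```python
# Depth from disparity for an ideal rectified stereo pair: Z = f * B / d.
# The focal length, baseline and disparity below are made-up example values.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in meters for one pixel's disparity (rectified cameras)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 20 px disparity -> 4.2 m.
print(depth_from_disparity(700.0, 0.12, 20.0))
```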
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/pathpartner/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Praveen Nayak, Tech Lead at PathPartner Technology, presents the "Using Deep Learning for Video Event Detection on a Compute Budget" tutorial at the May 2019 Embedded Vision Summit.
Convolutional neural networks (CNNs) have made tremendous strides in object detection and recognition in recent years. However, extending the CNN approach to understanding of video or volumetric data poses tough challenges, including trade-offs between representation quality and computational complexity, which is of particular concern on embedded platforms with tight computational budgets. This presentation explores the use of CNNs for video understanding.
Nayak reviews the evolution of deep representation learning methods involving spatio-temporal fusion from C3D to Conv-LSTMs for vision-based human activity detection. He proposes a decoupled alternative to this fusion, describing an approach that combines a low-complexity predictive temporal segment proposal model and a fine-grained (perhaps high-complexity) inference model. PathPartner Technology finds that this hybrid approach, in addition to reducing computational load with minimal loss of accuracy, enables effective solutions to these high-complexity inference tasks.
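The decoupled idea can be pictured as a two-stage loop in which a cheap temporal proposal model gates an expensive activity classifier. The sketch below is a hand-written illustration of that concept, not PathPartner's code; the proposal_model and inference_model interfaces are assumptions.

```python
# Conceptual sketch of a decoupled video-event pipeline (not PathPartner's code):
# a low-complexity proposal model gates a high-complexity inference model so the
# expensive network runs only on promising temporal segments.

def detect_events(frames, proposal_model, inference_model,
                  window=16, stride=8, threshold=0.5):
    """Yield (start_frame, label, score) for windows the cheap model flags."""
    for start in range(0, max(len(frames) - window + 1, 0), stride):
        clip = frames[start:start + window]
        # Stage 1: cheap temporal segment proposal (e.g., motion or tiny CNN score).
        if proposal_model.score(clip) < threshold:
            continue
        # Stage 2: fine-grained (possibly 3D-CNN / Conv-LSTM) activity inference.
        label, score = inference_model.classify(clip)
        yield start, label, score
```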
For the full video of this presentation, please visit:
https://www.embedded-vision.com/industry-analysis/video-interviews-demos/path-adas-autonomy-presentation-strategy-analytics
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Roger Lanctot, Director of Automotive Connected Mobility at Strategy Analytics, delivers the presentation "The Path from ADAS to Autonomy" at the Embedded Vision Alliance's December 2017 Vision Industry and Technology Forum. Lanctot shares his unique perspective on what the industry can realistically expect to achieve with ADAS and autonomous vehicles, using computer vision and other technologies.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/pathpartner/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the "Approaches for Vision-based Driver Monitoring" tutorial at the May 2017 Embedded Vision Summit.
Since many road accidents are caused by driver inattention, assessing driver attention is important for preventing accidents. Distraction caused by other activities and sleepiness due to fatigue are the main causes of driver inattention. Vision-based assessment of driver distraction and fatigue must estimate face pose, sleepiness, expression and other cues. Estimating these aspects under real driving conditions, including day-to-night transitions and drivers wearing sunglasses, is a challenging task.
A solution using deep learning to handle tasks from searching for a driver’s face in a given image to estimating attention would potentially be difficult to realize in an embedded system. In this talk, Dakala looks at the pros and cons of various machine learning approaches like multi-task deep networks, boosted cascades, etc. for this application, and then describes a hybrid approach that provides the required insights while being realizable in an embedded system.
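As a rough sketch of the kind of hybrid pipeline described, a single fast face detector can feed small, task-specific estimators rather than one large multi-task network. The components and thresholds below are placeholders, not PathPartner's implementation.

```python
# Illustrative driver-monitoring pipeline skeleton (all components are placeholders):
# one face detector feeds lightweight, task-specific estimators instead of a single
# large multi-task network, which is easier to fit on an embedded compute budget.

def assess_driver(frame, face_detector, pose_estimator, eye_state_model):
    faces = face_detector.detect(frame)          # assumed API
    if not faces:
        return {"attention": "no_face"}
    face = max(faces, key=lambda f: f.area)      # assume the largest face is the driver
    yaw, pitch = pose_estimator.head_pose(face)  # assumed API
    eyes_closed = eye_state_model.closed_prob(face) > 0.8
    distracted = abs(yaw) > 30 or abs(pitch) > 20
    state = "drowsy" if eyes_closed else "distracted" if distracted else "attentive"
    return {"attention": state, "yaw": yaw, "pitch": pitch}
```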
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/10/building-an-autonomous-detect-and-avoid-system-for-commercial-drones-a-presentation-from-iris-automation/
Alejandro Galindo, Head of Research and Development at Iris Automation, presents the “Building an Autonomous Detect-and-Avoid System for Commercial Drones” tutorial at the May 2021 Embedded Vision Summit.
Commercial and industrial drones have the potential to completely disrupt industries and create new ones. Used in applications such as infrastructure inspection, search and rescue, package delivery, and many others, they can save time, money, and lives. Most of these applications require a real-time understanding of the environment and the risks of collision.
At the same time, commercial drones are limited in the size, weight, and power they can carry, narrowing the options for sensors and computing architectures. In this presentation, Galindo dives into what it takes to build an autonomous detect-and-avoid system for commercial drones and, in particular, focuses on computer vision issues such as predictability and reduction of false positives. Why are they important and what does it take to drive them in the right direction?
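One generic way to reduce false positives in a detect-and-avoid setting is to require temporal consistency before a detection is reported. The sketch below illustrates that common technique; it is not Iris Automation's algorithm, and the confirmation thresholds are assumptions.

```python
# Sketch of temporal-consistency filtering to suppress spurious detections
# (a generic technique, not Iris Automation's algorithm). A detection is only
# reported once it has been confirmed in k of the last n frames.
from collections import deque

class ConfirmedDetector:
    def __init__(self, k: int = 3, n: int = 5):
        self.history = deque(maxlen=n)
        self.k = k

    def update(self, detected_this_frame: bool) -> bool:
        """Return True only when enough recent frames agree."""
        self.history.append(detected_this_frame)
        return sum(self.history) >= self.k

# Usage: report an intruder aircraft only after 3 hits in the last 5 frames.
gate = ConfirmedDetector(k=3, n=5)
for hit in [True, False, True, True, False, True]:
    print(gate.update(hit))
```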
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/dec-2017-alliance-vitf-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group, delivers the presentation "Update on Khronos Standards for Vision and Machine Learning" at the Embedded Vision Alliance's December 2017 Vision Industry and Technology Forum. Trevett shares updates on recent, current and planned Khronos standardization activities aimed at streamlining the deployment of embedded vision and AI.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-kim
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Minyoung Kim, Senior Research Engineer at Panasonic Silicon Valley Laboratory, presents the "A Fast Object Detector for ADAS using Deep Learning" tutorial at the May 2017 Embedded Vision Summit.
Object detection has been one of the most important research areas in computer vision for decades. Recently, deep neural networks (DNNs) have led to significant improvements in several machine learning domains, including computer vision, achieving state-of-the-art performance thanks to their strong modeling and generalization capabilities. However, it is still challenging to deploy such DNNs on embedded systems, for applications such as advanced driver assistance systems (ADAS), where computation power is limited.
Kim and her team focus on reducing the size of the network and required computations, and thus building a fast, real-time object detection system. They propose a fully convolutional neural network that can achieve at least 45 fps on 640x480 frames with competitive performance. With this network, there is no proposal generation step, which can cause a speed bottleneck; instead, a single forward propagation of the network approximates the locations of objects directly.
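The "no proposal step" idea can be illustrated with a toy decoder for a fully convolutional detector: every output grid cell directly predicts an objectness score and a box, so one forward pass yields all candidate detections. The tensor layout, stride and threshold below are invented for illustration and are not Panasonic's network.

```python
# Toy decoding of a proposal-free, fully convolutional detector output
# (illustrative only; not Panasonic's network). Each cell of an H x W output
# grid predicts [objectness, cx, cy, w, h] relative to the cell.
import numpy as np

def decode(output: np.ndarray, stride: int = 16, score_thresh: float = 0.5):
    """output has shape (H, W, 5); returns a list of (score, x1, y1, x2, y2)."""
    boxes = []
    H, W, _ = output.shape
    for gy in range(H):
        for gx in range(W):
            score, cx, cy, w, h = output[gy, gx]
            if score < score_thresh:
                continue
            # Map grid-relative center/size back to image pixels.
            px, py = (gx + cx) * stride, (gy + cy) * stride
            boxes.append((float(score), px - w / 2, py - h / 2,
                          px + w / 2, py + h / 2))
    return boxes

# Example on a random 30 x 40 grid (covers a 640 x 480 input at stride 16).
print(len(decode(np.random.rand(30, 40, 5))))
```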
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-talluri
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Raj Talluri, Senior Vice President of Product Management at Qualcomm Technologies, presents the "Is Vision the New Wireless?" tutorial at the May 2016 Embedded Vision Summit.
Over the past 20 years, digital wireless communications has become an essential technology for many industries, and a primary driver for the electronics industry. Today, computer vision is showing signs of following a similar trajectory. Once used only in low-volume applications such as manufacturing inspection, vision is now becoming an essential technology for a wide range of mass-market devices, from cars to drones to mobile phones. In this presentation, Talluri examines the motivations for incorporating vision into diverse products, presents case studies that illuminate the current state of vision technology in high-volume products, and explores critical challenges to ubiquitous deployment of visual intelligence.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-osterwood-tue
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Chris Osterwood, Founder and CEO of Capable Robot Components, presents the "How to Choose a 3D Vision Sensor" tutorial at the May 2019 Embedded Vision Summit.
Designers of autonomous vehicles, robots and many other systems are faced with a critical challenge: Which 3D vision sensor technology to use? There are a wide variety of sensors on the market, employing modalities including passive stereo, active stereo, time of flight, 2D and 3D lasers and monocular approaches. This talk provides an overview of 3D vision sensor technologies and their capabilities and limitations, based on Osterwood's experience selecting the right 3D technology and sensor for a diverse range of autonomous robot designs.
There is no perfect sensor technology and no perfect sensor, but there is always a sensor which best aligns with the requirements of your application—you just need to find it. Osterwood describes a quantitative and qualitative evaluation process for 3D vision sensors, including testing processes using both controlled environments and field testing, and some surprising characteristics and limitations he's uncovered through that testing.
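One common quantitative test in this kind of evaluation is to image a flat target and measure how far the returned 3D points deviate from a fitted plane. The sketch below shows that generic test; it is not necessarily Osterwood's exact protocol, and the synthetic noise level is an assumption.

```python
# Plane-fit noise test for a depth sensor: image a flat wall, fit a plane to
# the returned 3D points, and report RMS deviation. A common evaluation
# technique; not necessarily the exact protocol from the presentation.
import numpy as np

def plane_fit_rms(points: np.ndarray) -> float:
    """points: (N, 3) array of x, y, z samples from a flat target."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    distances = centered @ normal
    return float(np.sqrt((distances ** 2).mean()))

# Example with synthetic data: a tilted plane plus ~2 mm of Gaussian noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(5000, 2))
z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + rng.normal(0, 0.002, 5000)
print(plane_fit_rms(np.column_stack([xy, z])))   # roughly 0.002 m
```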
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/09/the-five-rights-of-an-edge-ai-computer-vision-system-right-data-right-time-right-place-right-decision-right-action-a-presentation-from-adlink-technology/
Toby McClean, Vice President of AIoT Technology and Innovation at ADLINK Technology, presents the “Five Rights of an Edge AI Computer Vision System: Right Data, Right Time, Right Place, Right Decision, Right Action” tutorial at the May 2021 Embedded Vision Summit.
Solutions builders and business decision-makers designing edge AI computer vision systems should focus on five key factors to ensure outcomes that deliver ROI. The Five Rights of an edge AI computer vision system are streaming the right data, at the right time, to the right place, for the right decision, to drive the right action. And the best place to get started is the fifth and final right—the right action—defining exactly what outcome you want to achieve with your system.
What business problem does it solve? Once you identify this you then need to work backward from there, embracing the benefits and the challenges of AI at the edge. In this talk, McClean explains these key concepts and illustrates them via real-world use cases.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-guttmann
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Moses Guttmann, CTO and founder of Allegro, presents the "Optimizing SSD Object Detection for Low-power Devices" tutorial at the May 2019 Embedded Vision Summit.
Deep learning-based computer vision models have gained traction in applications requiring object detection, thanks to their accuracy and flexibility. For deployment on low-power hardware, single-shot detection (SSD) models are attractive due to their speed when operating on inputs with small spatial dimensions.
The key challenge in creating efficient embedded implementations of SSD is not in the feature extraction module, but rather is due to the non-linear bottleneck in the detection stage, which does not lend itself to parallelization. This hinders the ability to lower the processing time per frame, even with custom hardware.
Guttmann describes in detail a data-centric optimization approach to SSD. The approach drastically lowers the number of priors (“anchors”) needed for the detection, and thus linearly decreases time spent on this costly part of the computation. Thus, specialized processors and custom hardware may be better utilized, yielding higher performance and lower latency regardless of the specific hardware used.
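To make the anchor-count argument concrete, the sketch below counts SSD-style priors for a dense configuration versus a hypothetical pruned one; because the detection stage's cost scales roughly linearly with the number of priors, the reduction translates directly into saved computation. The layouts and counts are illustrative, not Allegro's figures.

```python
# Counting SSD-style priors ("anchors") for two configurations. The detection
# stage's cost grows roughly linearly with this count, which is why pruning
# priors matters on low-power hardware. All values are illustrative.

def num_priors(feature_map_sizes, priors_per_cell):
    return sum(h * w * p for (h, w), p in zip(feature_map_sizes, priors_per_cell))

dense = num_priors([(38, 38), (19, 19), (10, 10), (5, 5), (3, 3), (1, 1)],
                   [4, 6, 6, 6, 4, 4])          # a typical SSD300-like layout (8732 priors)
pruned = num_priors([(19, 19), (10, 10), (5, 5)],
                    [3, 3, 3])                   # hypothetical data-driven subset
print(dense, pruned, f"{dense / pruned:.1f}x fewer priors")
```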
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-mangan
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Michael Mangan, a member of the product management staff at Qualcomm Technologies, presents the "Computer Vision and Machine Learning at the Edge" tutorial at the May 2017 Embedded Vision Summit.
Computer vision and machine learning techniques are applied to myriad use cases in smartphones today. As mobile technology expands beyond the smartphone vertical, both technologies will continue to fuel innovation, individually and in concert. In this presentation, Mangan discusses Qualcomm Technologies, Inc.’s use of and vision for the future of computer vision and machine learning at the edge.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/amd/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Allen Rush, Fellow at AMD, presents the "How Computer Vision Is Accelerating the Future of Virtual Reality" tutorial at the May 2016 Embedded Vision Summit.
Virtual reality (VR) is the new focus for a wide variety of applications including entertainment, gaming, medical, science, and many others. The technology driving the VR user experience has advanced rapidly in the past few years, and it is now poised to proliferate into these applications with solid products that offer a range of cost, performance and capabilities. The next question is: how does computer vision intersect this emerging modality? Already we are seeing examples of the integration of computer vision and VR, for example for simple eye tracking and gesture recognition. This talk explores how we can expect more complex computer vision capabilities to become part of the VR landscape and the business and technical challenges that must be overcome to realize these compelling capabilities.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/mathworks/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-hiremath-chou
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Sandeep Hiremath, Product Manager, and Bill Chou, Senior Computer Vision Scientist, both of MathWorks, present the "Deploying Deep Learning Models on Embedded Processors for Autonomous Systems with MATLAB" tutorial at the May 2019 Embedded Vision Summit.
In this presentation, Hiremath and Chou explain how to bring the power of deep neural networks to memory- and power-constrained devices like those used in robotics and automated driving. The workflow starts with an algorithm design in MATLAB, which enjoys universal appeal among engineers and scientists because of its expressive power and ease of use. The algorithm may employ deep learning networks augmented with traditional computer vision techniques and can be tested and verified within MATLAB.
Next, the networks are trained using MATLAB’s GPU and parallel computing support, either on the desktop, on a local compute cluster, or in the cloud. In the deployment phase, code generation tools automatically generate optimized code that can target embedded GPUs such as NVIDIA Jetson and DRIVE AGX Xavier, Intel-based CPU platforms, or Arm-based embedded platforms. The generated code leverages target-specific libraries that are highly optimized for the target architecture and memory model.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/01/video-activity-recognition-with-limited-data-for-smart-home-applications-a-presentation-from-comcast/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Hongcheng Wang, Director of Technical Research at Comcast, presents the “Video Activity Recognition with Limited Data for Smart Home Applications” tutorial at the September 2020 Embedded Vision Summit.
Comcast’s Xfinity Home connects millions of smart home cameras and IoT devices to improve its customers’ safety and security. The company’s teams use computer vision and deep learning to understand video and sensor data from these devices and identify relevant events, with the goal of improving the user experience.
Specifically, Comcast has explored the spatial-temporal relationships among objects, places and actions. The company has also developed a semi-supervised learning approach for video classification (VideoSSL) to detect certain activities using limited training data. Using these techniques, and as described in this presentation, it has achieved very promising results on activity recognition with multiple datasets.
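A generic pseudo-labeling loop gives a flavor of how limited labels can be stretched for classification. The sketch below is a textbook semi-supervised pattern, not Comcast's VideoSSL method; the model interface and confidence threshold are assumptions.

```python
# Generic pseudo-labeling round for semi-supervised classification (a textbook
# pattern sketched for illustration; not Comcast's VideoSSL implementation).
# High-confidence predictions on unlabeled clips are folded into the training set.

def pseudo_label_round(model, labeled, unlabeled, confidence=0.95):
    features, labels = zip(*labeled)
    model.fit(list(features), list(labels))        # assumed scikit-learn-like API
    newly_labeled, still_unlabeled = list(labeled), []
    for clip in unlabeled:
        probs = model.predict_proba([clip])[0]
        if probs.max() >= confidence:
            newly_labeled.append((clip, int(probs.argmax())))
        else:
            still_unlabeled.append(clip)
    return newly_labeled, still_unlabeled
```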
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California May 30- June 1, 2018.
Mitchell Reifel (pmdtechnologies ag): pmd Time-of-Flight – the Swiss Army Knife of 3D depth sensing
pmd's Time-of-Flight technology is integrated into two AR smartphones on the market, and pmd ToF is in four AR headsets. This talk shows what pmd has achieved, what can be done with its 3D ToF technology, and why depth sensing is one secret sauce for AR, VR and MR.
http://AugmentedWorldExpo.com
Presentation HSA-4146, "Creating Smarter Applications and Systems Through Visual Intelligence," by Jeff Bier at the AMD Developer Summit (APU13), November 11-13, 2013.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/01/cmos-image-sensors-a-guide-to-building-the-eyes-of-a-vision-system-a-presentation-from-gopro/
Jon Stern, Director of Optical Systems at GoPro, presents the “CMOS Image Sensors: A Guide to Building the Eyes of a Vision System” tutorial at the September 2020 Embedded Vision Summit.
Improvements in CMOS image sensors have been instrumental in lowering barriers for embedding vision into a broad range of systems. For example, a high degree of system-on-chip integration allows photons to be converted into bits with minimal support circuitry. Low power consumption enables imaging in even small, battery-powered devices. Simple control protocols mean that companies can design camera-based systems without extensive in-house expertise. Meanwhile, the low cost of CMOS sensors is enabling visual perception to become ever more pervasive.
In this tutorial, Stern introduces the basic operation, types and characteristics of CMOS image sensors; explains how to select the right sensor for your application; and provides practical guidelines for building a camera module by pairing the sensor with suitable optics. He highlights areas demanding of special attention to equip you with an understanding of the common pitfalls in designing imaging systems.
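When pairing a sensor with optics, the horizontal field of view follows directly from the sensor width and lens focal length. A short sketch follows; the 1/2.3-inch sensor width and 3 mm focal length are illustrative assumptions, not figures from the talk.

```python
# Horizontal field of view from sensor width and lens focal length:
# FOV = 2 * atan(sensor_width / (2 * focal_length)). The sensor and lens
# values below are illustrative, not taken from the presentation.
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Example: a ~6.17 mm-wide (1/2.3-inch) sensor behind a 3 mm lens -> roughly 92 degrees.
print(horizontal_fov_deg(6.17, 3.0))
```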
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/basler/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Mark Hebbel, Head of New Business Development at Basler, presents the "Time of Flight Sensors: How Do I Choose Them and How Do I Integrate Them?" tutorial at the May 2017 Embedded Vision Summit.
3D digitalization of the world is becoming more important. This additional dimension of information allows more real-world perception challenges to be solved in a wide range of applications. Time-of-flight (ToF) sensors are one way to obtain depth information, and several time-of-flight sensors are available on the market.
In this talk, Hebbel examines the strengths and weaknesses of ToF sensors. He explains how to choose them based on your specifications, and where to get them. He also briefly discusses things you should watch out for when incorporating ToF sensors into your systems, along with the future of ToF technology.
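For continuous-wave ToF sensors of the kind discussed, depth is recovered from the phase shift of the modulated light, and the modulation frequency fixes the unambiguous range. The sketch below shows these relationships; the 20 MHz modulation frequency is an illustrative assumption.

```python
# Continuous-wave ToF basics: depth = c * phase / (4 * pi * f_mod), with an
# unambiguous range of c / (2 * f_mod). The 20 MHz modulation frequency is an
# illustrative value, not a figure from the presentation.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(phase_rad: float, f_mod_hz: float) -> float:
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range_m(f_mod_hz: float) -> float:
    return C / (2 * f_mod_hz)

f_mod = 20e6
print(unambiguous_range_m(f_mod))        # ~7.5 m maximum unambiguous range
print(tof_depth_m(math.pi / 2, f_mod))   # a 90-degree phase shift -> ~1.87 m
```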
Melexis Time of Flight Imager for Automotive Applications 2017 teardown rever... (Yole Développement)
A cutting-edge ToF imager technology from Sony/Softkinetic, adapted by Melexis for automotive in-cabin applications
Today, Time-of-Flight (ToF) systems are among the most innovative technologies offering imaging companies an opportunity to lead the market. Every major player wants to integrate these devices to provide functions such as 3D imaging, proximity sensing, ambient light sensing and gesture recognition.
Sony/Softkinetic has been investigating this technology deeply, providing a unique pixel technology to several image sensor manufacturers in three application areas: consumer, automotive and industrial. For automotive applications, Sony/Softkinetic has licensed its technology to Melexis, which has worked on the pixel design to provide a ToF imager for gesture recognition.
The MLX75023 is an automotive 3D ToF Imager already integrated into gesture recognition systems from car makers like BMW. The 3D ToF Imager is packaged using Glass Ball Grid Array technology.
The device combines the sensor die and the glass filter in the same component, in a thin, 0.7 mm-thick package.
This report analyzes the complete component, from the glass near-infrared band-pass filter to the collector, based on the ToF pixel technology developed by Softkinetic and licensed to and improved by Melexis.
The report includes a complete cost analysis and price estimation of the device based on a detailed description of the package, and the ToF imager.
It also features a complete ToF pixel technology comparison with Infineon, STMicroelectronics and Texas Instruments ToF imagers, which are also based on Sony/Softkinetic technology, with details on the companies’ choices.
More information on that report at http://www.i-micronews.com/reports.html
A talk from the Developer Track at AWE Europe 2017 - the largest conference for AR+VR in Munich, Germany October 19-20, 2017
Tobias Rothermel (pmd technologies): pmd ToF – the Swiss Army Knife of 3D Sensing
pmd ToF is in 2 AR-smartphones! pmd ToF is in 2 AR headsets! pmd ToF is in a surveillance camera! pmd ToF is proven to enable face recognition.
What does this mean?
It means that we are convinced that our depth sensing technology is as flexible as your application needs it to be. The talk will show what we have already done and what can be done with our ToF technology.
We will outline what we offer to developers in terms of available reference designs, delivered data and software interfaces and how it all fits into an augmented world.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/8tree/embedded-vision-training/videos/pages/feb-2017-member-meeting
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Arun Chhabra of 8tree delivers the presentation "Designing Vision Systems for Human Operators and Workflows" at the February 2017 Embedded Vision Alliance Member Meeting. Chhabra explains how his company is deploying computer vision to enhance existing workflows in industries such as aircraft maintenance.
An overview of the technology, systems, potential, functions and areas of application of augmented reality, created by the Virtual Dimension Center (VDC) in Fellbach.
GE Inspection Technologies reviews case studies of industrial production process control in the castings, aerospace and automotive industries using advanced computed tomography (CT) techniques. Presented to the American Society for Nondestructive Testing (ASNT) at the 2014 Annual Conference.
Toward In-situ Realization of Ergonomic Hand/Arm Orthosis: A Pilot Study on th... (Ardalan Amiri)
Unleashing the joint power of virtual prototyping and human modelling to address the ergonomics of clinical limb and body support products.
3D scanning and reverse engineering, CAD techniques and inspection, topographical and topological optimization, material selection for additive manufacturing, CAE assessments and more are the basic constituents of this project.
Compared with the available methods, the aim is to reduce the cost and time of realizing a clinical orthosis while customizing it more flexibly in the best interest of doctor and patient; to that end, a semi-automatic system has been outlined. The system's operations were specified in detail and carried out manually by the authors to investigate the challenges posed by an arm/hand orthosis focused on wrist injuries. Addressing these challenges and characterizing the elemental tools required by such a system was done using multiple commercial software packages in the reverse engineering, CAD and CAE fields. The orthosis was optimized ergonomically, favoring fast prototyping as well as medical concerns. The pilot study yields several valuable conclusions that strengthen the concept with respect to structural optimization and biomechanical considerations. Comparable research to date lacks purposeful topology design, a vision of an automated system, and critical medical cases expressed as biomechanical scenarios for product integrity assessment; all of these points can be found in this report.
2020 vision - the journey from research lab to real-world product (KTN)
This presentation, delivered by Jag Minhas, CEO and Founder, Sensing Feeling, was the first presentation of the Implementing AI: Vision Systems Webinar.
Additive manufacturing: 3D printing technology (STAY CURIOUS)
3D printing is the process of building an object one thin layer at a time. It is fundamentally additive rather than subtractive in nature. To many, 3D printing is the singular production of often-ornate objects on a desktop printer.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/deploying-large-models-on-the-edge-success-stories-and-challenges-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal models on the edge for a variety of use cases in consumer and enterprise markets. He examines key challenges that must be overcome before large models at the edge can reach their full commercial potential. He also highlights how Qualcomm is addressing these challenges through upgraded processor hardware, improved developer tools and a comprehensive library of fully optimized AI models in the Qualcomm AI Hub.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/scaling-vision-based-edge-ai-solutions-from-prototype-to-global-deployment-a-presentation-from-network-optix/
Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at the University of Eindhoven, presents the “Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment” tutorial at the May 2024 Embedded Vision Summit.
The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers them to bring computer vision and perceptual AI into reliable and scalable products. However, integrating recent hardware, software and algorithm innovations into prime-time-ready products is quite challenging. Scaling from a proof of concept—for example, a novel neural network architecture performing a valuable task efficiently on a new piece of silicon—to an AI vision system installed in hundreds of sites requires surmounting myriad hurdles.
First, building on Network Optix’s 14 years of experience, Professor Kaptein details how to overcome the networking, fleet management, visualization and monetization challenges that come with scaling a global vision solution. Second, Kaptein discusses the complexities of making vision AI solutions device-agnostic and remotely manageable, proposing an open standard for AI model deployment to edge devices. The proposed standard aims to simplify market entry for silicon manufacturers and enhance scalability for solution developers. Kaptein outlines the standard’s core components and invites collaborative contributions to drive market expansion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/whats-next-in-on-device-generative-ai-a-presentation-from-qualcomm/
Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit.
The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to allow machines to understand using multiple types of sensors. This new wave of approaches is poised to revolutionize user experiences, disrupt industries and enable powerful new capabilities. For generative AI to reach its full potential, however, we must deploy it on edge devices, providing improved latency, pervasive interaction and enhanced privacy.
In this talk, Hou shares Qualcomm’s vision of the compelling opportunities enabled by efficient generative AI at the edge. He also identifies the key challenges that the industry must overcome to realize the massive potential of these technologies. And he highlights research and product development work that Qualcomm is doing to lead the way via an end-to-end system approach—including techniques for efficient on-device execution of LLMs, LVMs and LMMs, methods for orchestration of large models at the edge and approaches for adaptation and personalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/learning-compact-dnn-models-for-embedded-vision-a-presentation-from-the-university-of-maryland-at-college-park/
Shuvra Bhattacharyya, Professor at the University of Maryland at College Park, presents the “Learning Compact DNN Models for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit.
In this talk, Bhattacharyya explores methods to transform large deep neural network (DNN) models into effective compact models. The transformation process that he focuses on—from large to compact DNN form—is referred to as pruning. Pruning involves the removal of neurons or parameters from a neural network. When performed strategically, pruning can lead to significant reductions in computational complexity without significant degradation in accuracy. It is sometimes even possible to increase accuracy through pruning.
Pruning provides a general approach for facilitating real-time inference in resource-constrained embedded computer vision systems. Bhattacharyya provides an overview of important aspects to consider when applying or developing a DNN pruning method and presents details on a recently introduced pruning method called NeuroGRS. NeuroGRS considers structures and trained weights jointly throughout the pruning process and can result in significantly more compact models compared to other pruning methods.
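As a generic illustration of the pruning idea described above (this is simple magnitude-based weight pruning, my own sketch, not the NeuroGRS method presented in the talk), consider the following Python/NumPy example:

# Generic magnitude-based weight pruning sketch (illustration only; not NeuroGRS).
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes."""
    threshold = np.quantile(np.abs(weights), sparsity)
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

# Example: prune 80% of a random layer's weights and check the remaining density.
layer = np.random.randn(128, 64)
sparse_layer = prune_by_magnitude(layer, 0.8)
print(f"non-zero fraction: {np.count_nonzero(sparse_layer) / sparse_layer.size:.2f}")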
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/introduction-to-computer-vision-with-cnns-a-presentation-from-mohammad-haghighat/
Independent consultant Mohammad Haghighat presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques, then transitions to explaining the basics of machine learning and convolutional neural networks (CNNs) and showing how CNNs are used in visual perception.
Haghighat illustrates the building blocks and computational elements of neural networks through examples. This session provides an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/selecting-tools-for-developing-monitoring-and-maintaining-ml-models-a-presentation-from-yummly/
Parshad Patel, Data Scientist at Yummly, presents the “Selecting Tools for Developing, Monitoring and Maintaining ML Models” tutorial at the May 2023 Embedded Vision Summit.
With the boom in tools for developing, monitoring and maintaining ML models, data science teams have many options to choose from. Proprietary tools provided by cloud service providers are enticing, but teams may fear being locked in—and may worry that these tools are too costly or missing important features when compared with alternatives from specialized providers.
Fortunately, most proprietary, fee-based tools have an open-source component that can be integrated into a home-grown solution at low cost. This can be a good starting point, enabling teams to get started with modern tools without making big investments and leaving the door open to evolve tool selection over time. In this talk, Patel presents a step-by-step process for creating an MLOps tool set that enables you to deliver maximum value as a data scientist. He shares how Yummly built pipelines for model development and put them into production using open-source projects.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/building-accelerated-gstreamer-applications-for-video-and-audio-ai-a-presentation-from-wave-spectrum/
Abdo Babukr, Accelerated Computing Consultant at Wave Spectrum, presents the “Building Accelerated GStreamer Applications for Video and Audio AI” tutorial at the May 2023 Embedded Vision Summit.
GStreamer is a popular open-source framework for creating streaming media applications. Developers often use GStreamer to streamline the development of computer vision and audio perception applications. Since perceptual algorithms are often quite demanding in terms of processing performance, in many cases developers need to find ways to accelerate key GStreamer building blocks, taking advantage of specialized features of their target processor or co-processor.
In this talk, Babukr introduces GStreamer and shows how to use it to build computer vision and audio perception applications. He also shows how to create efficient, high-performance GStreamer applications that utilize specialized hardware features.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/understanding-selecting-and-optimizing-object-detectors-for-edge-applications-a-presentation-from-walmart-global-tech/
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit.
Object detectors count objects in a scene and determine their precise locations, while also labeling them. Object detection plays a crucial role in many vision applications, from autonomous driving to smart appliances. In many of these applications, it’s necessary or desirable to implement object detection at the edge.
In this presentation, Laskar explores the evolution of object detection algorithms, from traditional approaches to deep learning-based methods and transformer-based architectures. He delves into widely used approaches for object detection, such as two-stage R-CNNs and one-stage YOLO algorithms, and examines their strengths and weaknesses. And he provides guidance on how to evaluate and select an object detector for an edge application.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/introduction-to-modern-lidar-for-machine-perception-a-presentation-from-the-university-of-ottawa/
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern LiDAR for Machine Perception” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Laganière provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work and their main advantages and disadvantages. He also introduces different approaches to LiDAR, including scanning and flash LiDAR.
Laganière explores the types of data produced by LiDAR sensors and explains how this data can be processed using deep neural networks. He also examines the synergy between LiDAR and cameras, and the concept of pseudo-LiDAR for detection.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/vision-language-representations-for-robotics-a-presentation-from-the-university-of-pennsylvania/
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit.
In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing challenge for computer scientists and robotics engineers. In this presentation, Jayaraman provides insights into cutting-edge techniques being used to help robots better understand their surroundings, learn new skills with minimal guidance and become more capable of performing complex tasks.
Jayaraman discusses recent advances in unsupervised representation learning and explains how these approaches can be used to build visual representations that are appropriate for a controller that decides how the robot should act. In particular, he presents insights from his research group’s recent work on how to represent the constituent objects and entities in a visual scene, and how to combine vision and language in a way that permits effectively translating language-based task descriptions into images depicting the robot’s goals.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/adas-and-av-sensors-whats-winning-and-why-a-presentation-from-techinsights/
Ian Riches, Vice President of the Global Automotive Practice at TechInsights, presents the “ADAS and AV Sensors: What’s Winning and Why?” tutorial at the May 2023 Embedded Vision Summit.
It’s clear that the number of sensors per vehicle—and the sophistication of these sensors—is growing rapidly, largely thanks to increased adoption of advanced safety and driver assistance features. In this presentation, Riches explores likely future demand for automotive radars, cameras and LiDARs.
Riches examines which vehicle features will drive demand out to 2030, how vehicle architecture change is impacting the market and what sorts of compute platforms these sensors will be connected to. Finally, he shares his firm’s vision of what the landscape could look like far beyond 2030, considering scenarios out to 2050 for automated driving and the resulting sensor demand.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/computer-vision-in-sports-scalable-solutions-for-downmarkets-a-presentation-from-sportlogiq/
Mehrsan Javan, Co-founder and CTO of Sportlogiq, presents the “Computer Vision in Sports: Scalable Solutions for Downmarket Leagues” tutorial at the May 2023 Embedded Vision Summit.
Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this requires a fully automated, robust end-to-end pipeline, spanning from visual input, to player and group activities, to player and team evaluation to planning. Despite major advancements in computer vision and machine learning, today sports analytics solutions are limited to top leagues and are not widely available for downmarket leagues and youth sports.
In this talk, Javan explains how his company has developed scalable and robust computer vision solutions to democratize sports analytics and offer pro-league-level insights to leagues with modest resources, including youth leagues. He highlights key challenges—such as the requirement for low-cost, low-latency processing and the need for robustness despite variations in venues. He discusses the approaches Sportlogiq tried and how it ultimately overcame these challenges, including the use of transformers and the fusion of multiple types of data streams to maximize accuracy.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/detecting-data-drift-in-image-classification-neural-networks-a-presentation-from-southern-illinois-university/
Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Detecting Data Drift in Image Classification Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
An unforeseen change in the input data is called “drift,” and may impact the accuracy of machine learning models. In this talk, Tragoudas presents a novel scheme for diagnosing data drift in the input streams of image classification neural networks. His proposed method for drift detection and quantification uses a threshold dictionary for the prediction probabilities of each class in the neural network model.
The method is applicable to any drift type in images such as noise and weather effects, among others. Tragoudas shares experimental results on various data sets, drift types and neural network models to show that his proposed method estimates the drift magnitude with high accuracy, especially when the level of drift significantly impacts the model’s performance.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/deep-neural-network-training-diagnosing-problems-and-implementing-solutions-a-presentation-from-sensor-cortek/
Fahed Hassanat, Chief Operating Officer and Head of Engineering at Sensor Cortek, presents the “Deep Neural Network Training: Diagnosing Problems and Implementing Solutions” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Hassanat delves into some of the most common problems that arise when training deep neural networks. He provides a brief overview of essential training metrics, including accuracy, precision, false positives, false negatives and F1 score.
Hassanat then explores training challenges that arise from problems with hyperparameters, inappropriately sized models, inadequate models, poor-quality datasets, imbalances within training datasets and mismatches between training and testing datasets. To help detect and diagnose training problems, he also covers techniques such as understanding performance curves, recognizing overfitting and underfitting, analyzing confusion matrices and identifying class interaction issues.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/ai-start-ups-the-perils-of-fishing-for-whales-war-stories-from-the-entrepreneurial-front-lines-a-presentation-from-seechange-technologies/
Tim Hartley, Vice President of Product for SeeChange Technologies, presents the “AI Start-ups: The Perils of Fishing for Whales (War Stories from the Entrepreneurial Front Lines)” tutorial at the May 2023 Embedded Vision Summit.
You have a killer idea that will change the world. You’ve thought through product-market fit and differentiation. You have seed funding and a world-beating team. Best of all, you’ve caught the attention of major players in your industry. You’ve reached peak “start-up”—that point of limitless possibility—when you go to bed with the same level of energy and enthusiasm you had when you woke. And then the first proof of concept starts…
In this talk, Hartley lays out some of the pitfalls that await those building the next big thing. Using real examples, he shares some of the dos and don’ts, particularly when dealing with that big potential first customer. Hartley discusses the importance of end-to-end design, ensuring your product solves real-world problems. He explores how far the big companies will tell you to jump—and then jump again—for free. And, most importantly, how to build long-term partnerships with major corporations without relying on over-promising sales pitches.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/a-computer-vision-system-for-autonomous-satellite-maneuvering-a-presentation-from-scout-space/
Andrew Harris, Spacecraft Systems Engineer at SCOUT Space, presents the “Developing a Computer Vision System for Autonomous Satellite Maneuvering” tutorial at the May 2023 Embedded Vision Summit.
Computer vision systems for mobile autonomous machines experience a wide variety of real-world conditions and inputs that can be challenging to capture accurately in training datasets. Few autonomous systems experience more challenging conditions than those in orbit. In this talk, Harris describes how SCOUT Space has designed and trained satellite vision systems using dynamic and physically informed synthetic image datasets.
Harris describes how his company generates synthetic data for this challenging environment and how it leverages new real-world data to improve its datasets. In particular, he explains how these synthetic datasets account for and can replicate real sources of noise and error in the orbital environment, and how his company supplements them with in-space data from the first SCOUT-Vision system, which has been in orbit since 2021.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/bias-in-computer-vision-its-bigger-than-facial-recognition-a-presentation-from-santa-clara-university/
Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, presents the “Bias in Computer Vision—It’s Bigger Than Facial Recognition!” tutorial at the May 2023 Embedded Vision Summit.
As AI is increasingly integrated into various industries, concerns about its potential to reproduce or exacerbate bias have become widespread. While the use of AI holds the promise of reducing bias, it can also have unintended consequences, particularly in high-stakes computer vision applications such as facial recognition. However, even seemingly low-stakes computer vision applications such as identifying potholes and damaged roads can also present ethical challenges related to bias.
This talk explores how bias in computer vision often poses an ethical challenge, regardless of the stakes involved. Kennedy discusses the limitations of technical solutions aimed at mitigating bias, and why “bias-free” AI may not be achievable. Instead, she focuses on the importance of adopting a “bias-aware” approach to responsible AI design and explores strategies that can be employed to achieve this.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/sensor-fusion-techniques-for-accurate-perception-of-objects-in-the-environment-a-presentation-from-sanborn-map-company/
Baharak Soltanian, Vice President of Research and Development for the Sanborn Map Company, presents the “Sensor Fusion Techniques for Accurate Perception of Objects in the Environment” tutorial at the May 2023 Embedded Vision Summit.
Increasingly, perceptual AI is being used to enable devices and systems to obtain accurate estimates of object locations, speeds and trajectories. In demanding applications, this is often best done using a heterogeneous combination of sensors (e.g., vision, radar, LiDAR). In this talk, Soltanian introduces techniques for combining data from multiple sensors to obtain accurate information about objects in the environment.
Soltanian briefly introduces the roles played by Kalman filters, particle filters, Bayesian networks and neural networks in this type of fusion. She then examines alternative fusion architectures, such as centralized and decentralized approaches, to better understand the trade-offs associated with different approaches to sensor fusion as used to enhance the ability of machines to understand their environment.
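As a tiny, generic illustration of the measurement-update step that makes a Kalman filter useful for fusing sensors of different accuracy (my own sketch with made-up numbers, not the fusion architectures discussed in the talk):

# Minimal 1D Kalman measurement update (illustration only): fuse two noisy
# range readings of different accuracy into one estimate.
def kalman_update(x: float, p: float, z: float, r: float) -> tuple:
    """Update state estimate x (variance p) with measurement z (variance r)."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # corrected estimate
    p_new = (1 - k) * p      # reduced uncertainty
    return x_new, p_new

# Start from a vague prior, then fuse a radar reading and a more precise LiDAR reading.
x, p = 0.0, 100.0
x, p = kalman_update(x, p, z=10.2, r=1.0)    # radar: noisier
x, p = kalman_update(x, p, z=10.05, r=0.1)   # LiDAR: more precise
print(x, p)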
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/updating-the-edge-ml-development-process-a-presentation-from-samsara/
Jim Steele, Vice President of Embedded Software at Samsara, presents the “Updating the Edge ML Development Process” tutorial at the May 2023 Embedded Vision Summit.
Samsara (NYSE:IOT) is focused on digitizing the world of operations. The company helps customers across many industries—including food and beverage, utilities and energy, field services and government—get information about their physical operations into the cloud, so they can operate more safely, efficiently and sustainably. Samsara’s sensors collect billions of data points per day and on-device processing is instrumental to its success. The company is constantly developing, improving and deploying ML models at the edge.
Samsara has found that the traditional development process—where ML scientists create models and hand them off to firmware engineers for embedded implementation—is slow and often produces difficult-to-resolve differences between the original model and the embedded implementation. In this talk, Steele presents an alternative development process that his company has adopted with good results. In this process, firmware engineers develop a general framework that ML scientists use to design, develop and deploy their models. This enables quick iterations and fewer confounding bugs.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/combating-bias-in-production-computer-vision-systems-a-presentation-from-red-cell-partners/
Alex Thaman, Chief Architect at Red Cell Partners, presents the “Combating Bias in Production Computer Vision Systems” tutorial at the May 2023 Embedded Vision Summit.
Bias is a critical challenge in predictive and generative AI that involves images of humans. People have a variety of body shapes, skin tones and other features that can be challenging to represent completely in training data. Without attention to bias risks, ML systems have the potential to treat people unfairly, and even to make humans more likely to do so.
In this talk, Thaman examines the ways in which bias can arise in visual AI systems. He shares techniques for detecting bias and strategies for minimizing it in production AI systems.
2. Agenda
▪ Why is metrology important?
▪ Categorization of common methodologies for dimensional measurements
▪ Examples and applications of image-based dimensional measurement
▪ Reverse Engineering
▪ Maintenance
▪ Quality Assurance
▪ Patient Positioning
▪ Dental
▪ Automotive
▪ User Interface
▪ Market segmentation, size, challenges and opportunities
▪ What does 8tree do?
▪ A view into the future…
3. Why is metrology important? Because engineers don't really like "Fake News" ☺
▪ Metrology helps to tell what is "right" from what is "wrong" (with not much room for discussion)
▪ Most often there is a "Ground Truth"
▪ The relevance really started with the concept of interchangeable parts about 100 years ago
▪ With his assembly line and interchangeable parts, Henry Ford was able to drop the production time for a Model T from 12 hours to just 93 minutes
▪ Before that, not every bolt would fit every nut; they were custom fitted
▪ In order to make identical parts, the parts have to be checked, and so metrology gained a lot of importance
https://www.thoughtco.com/henry-ford-and-the-assembly-line-1779201
4. Why is metrology important?
▪ In today's world we only notice the revolutionary concept of interchangeable parts when parts don't fit
▪ But generally we assume that we can put a nut of the same size on any bolt, no matter whether it's made in the US, Europe or India
▪ This is the basis for today's mass production and worldwide distributed sourcing of components
▪ Some historical gaging tools:
5. What happens when you ignore metrology? The Hubble Fiasco
▪ The mirror was polished precisely into the wrong shape
▪ At the perimeter it was too flat by about 2.2 micrometers, due to a misalignment of a lens
▪ After launch in 1990, it could be fixed 3 years later, saving the US$4.7 billion investment
https://en.wikipedia.org/wiki/Hubble_Space_Telescope
6. Why should measurements be non-tactile?
▪ Production volumes become larger, the allowable time for inspection shorter
▪ Non-tactile methods are usually faster
▪ Tolerances become tighter
▪ Customers ask for 100% inspection rather than sample inspection
▪ More complex features should be checked rather than simple measures
▪ For Industry 4.0, data input is expected
-> all good reasons to move from tactile to optical metrology
7. One possible categorization of common methodologies for dimensional measurements
Surface Measurement
▪ Tactile
▪ Non-Destructive: CMM, Articulated Arm
▪ Destructive: Slicing
▪ Non-Tactile
▪ Reflective: Optical; Other (Radar, Sonar)
▪ Transmissive: Computer Tomography
Uni Utah; Guido Gerig; Lesson: CS 6320, 3D Computer Vision Spring 2012
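Restated as a small data structure (my own illustration; it simply mirrors the tree above), the categorization looks like this:

# The categorization above, restated as a nested dictionary (illustration only).
SURFACE_MEASUREMENT_METHODS = {
    "Tactile": {
        "Non-Destructive": ["CMM", "Articulated Arm"],
        "Destructive": ["Slicing"],
    },
    "Non-Tactile": {
        "Reflective": {"Optical": [], "Other": ["Radar", "Sonar"]},
        "Transmissive": ["Computer Tomography"],
    },
}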
8. Tactile Methods: CMMs – Coordinate Measuring Machines – and traditional hand tools
http://metalworkingnews.info/wp-content/uploads/2014/11/Prod-Rev-ZEISS-CONTURA.jpg
https://faro.blob.core.windows.net/sitefinity/product-overview-galleries/faroarm_beauty_1.jpg
9. Non-Tactile Methods and Measurement Principles – Some Examples
▪ 1D: Laser Distance Measurement (ToF), Michelson Interferometer (Interferometry), Point Triangulation (Triangulation)
▪ 2D: PMD sensors (ToF), White-Light Interferometer (Interferometry), Laser Line Triangulation (Triangulation)
▪ 2 1/2D: Structured Light Techniques, Photogrammetry (Triangulation)
▪ 3D: Computer Tomography (CT)
10. Comparison of various methods for 3D-shape measurement
• Conflict: large working distance and high resolution (and vice versa)
• The 3 methods nicely work together to cover a broad range of working distances and uncertainties
www.iap.uni-jena.de
11. Time of Flight – Measuring the time for a light pulse to return (see the sketch below)
Single-pulse time-of-flight advantages:
• Large working distance, up to 40 km
• Uncertainty down to centimeters
• High speed, up to 100 kHz, so suitable for scanning applications
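To make the principle above concrete, here is a minimal Python sketch (my own illustration, not from the presentation) of the single-pulse distance calculation d = c · t / 2:

# Minimal sketch (illustration only): distance from a single-pulse
# time-of-flight measurement, d = c * t_round_trip / 2.
C = 299_792_458.0  # speed of light in m/s

def pulse_tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target from the measured round-trip time of a light pulse."""
    return C * round_trip_time_s / 2.0

# Example: a pulse returning after about 267 microseconds corresponds to ~40 km,
# the upper end of the working range quoted on the slide.
print(f"{pulse_tof_distance(266.85e-6) / 1000:.1f} km")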
12. Time of Flight – Measuring the phase difference between output and input signal (see the sketch below)
Phase-difference time-of-flight advantages:
• Middle-range working distance
• Without a reflector, up to 100 m
• Uncertainty down to millimeters
• Suitable for low-cost manufacturing
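A minimal Python sketch of the phase-difference principle (my own illustration with an assumed 10 MHz modulation frequency, not from the presentation): the distance follows from d = c · Δφ / (4π · f_mod), and the unambiguous range is c / (2 · f_mod).

# Minimal sketch (illustration only): distance from the phase shift of a
# continuously modulated light signal, d = c * delta_phi / (4 * pi * f_mod).
# Distances beyond the unambiguous range c / (2 * f_mod) wrap around.
import math

C = 299_792_458.0  # speed of light in m/s

def phase_tof_distance(delta_phi_rad: float, f_mod_hz: float) -> float:
    """Distance corresponding to a measured phase shift at modulation frequency f_mod."""
    return C * delta_phi_rad / (4.0 * math.pi * f_mod_hz)

# Example with an assumed 10 MHz modulation: the unambiguous range is ~15 m,
# and a phase shift of pi/2 corresponds to ~3.75 m.
print(phase_tof_distance(math.pi / 2, 10e6))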
15. Triangulation
▪ Most often used method for depth sensing today
▪ Reason: lots of options are possible by varying the following parameters:
▪ Base distance (b)
▪ Angles alpha and beta, which control working distance and sensitivity (see the sketch below)
▪ Different projections of points, lines or fringe patterns, which control speed versus number of points
▪ Different cameras and lenses
▪ Different light sources, from UV over white light to IR
http://computingengineering.asmedigitalcollection.asme.org/data/journals/jcisb6/930083/jcise_014_03_035001_f001.png
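As a small numeric illustration of how the baseline and the two angles set the working distance (my own sketch, not from the presentation), the perpendicular distance of a triangulated point is z = b · sin(α) · sin(β) / sin(α + β):

# Minimal sketch (illustration only): active point triangulation. A light spot
# is projected from one end of a baseline b and observed from the other end;
# alpha and beta are the angles measured against the baseline.
import math

def triangulated_depth(baseline_m: float, alpha_rad: float, beta_rad: float) -> float:
    """Perpendicular distance of the projected spot from the baseline."""
    return (baseline_m * math.sin(alpha_rad) * math.sin(beta_rad)
            / math.sin(alpha_rad + beta_rad))

# Example: 20 cm baseline, both angles at 60 degrees -> ~0.17 m working distance.
print(triangulated_depth(0.20, math.radians(60), math.radians(60)))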
16. Triangulation: Laser Line
Uni Utah; Guido Gerig; Lesson: CS 6320, 3D Computer Vision Spring 2012
http://www.automotivemanufacturingsolutions.com/wp-content/uploads/2016/12/LMI-Fig2.png
20. Triangulation: Structured Light
• Very common method: combination of Gray code and phase shift (see the sketch below)
• First, binary black/white patterns whose frequency doubles with every new image, then 4 sinusoidal images phase-shifted by 90°
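The sketch below shows, for a single camera pixel, how such a combined sequence can be decoded: the binary (Gray-code) images give the coarse fringe period, and the four phase-shifted sinusoidal images give the fine position within it. This is my own illustration of the general idea (assuming the k-th sinusoidal pattern has the form A + B·cos(φ − k·π/2)), not code from the presentation.

# Minimal sketch (illustration only): decoding one pixel of a combined
# Gray-code + 4-step phase-shift structured-light sequence.
import math

def gray_to_binary(gray: int) -> int:
    """Convert a Gray-code value read from the binary stripe images to a plain integer."""
    binary = 0
    while gray:
        binary ^= gray
        gray >>= 1
    return binary

def decode_pixel(gray_bits, i0, i90, i180, i270, period_px):
    """Absolute projector column from Gray-code bits plus 4 phase-shifted intensities.

    gray_bits: 0/1 values, most significant first, one per binary pattern.
    i0..i270:  intensities under the sinusoidal patterns shifted by 0/90/180/270 degrees.
    period_px: width of one sinusoidal fringe period in projector pixels.
    """
    gray_value = int("".join(str(b) for b in gray_bits), 2)
    coarse_period = gray_to_binary(gray_value)            # which fringe period we are in
    phase = math.atan2(i90 - i270, i0 - i180)             # fine position within the period
    fraction = (phase % (2 * math.pi)) / (2 * math.pi)    # 0..1 within the period
    return (coarse_period + fraction) * period_px         # absolute projector column

# Example: a pixel in fringe period 5 (Gray code 0111), a quarter period into it.
print(decode_pixel([0, 1, 1, 1], i0=0.5, i90=1.0, i180=0.5, i270=0.0, period_px=16))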
21. Triangulation: Stereo Camera Setup
http://robot.neu.edu/rover/wp-content/uploads/sites/3/2013/01/wpid-20130128_220744.jpg http://carnegierobotics.com/multisense-s7/
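For the stereo setup, depth follows from the disparity between the rectified left and right images; here is a minimal sketch (my own illustration with made-up calibration numbers, not from the presentation) of Z = f · b / d:

# Minimal sketch (illustration only): depth from disparity in a calibrated,
# rectified stereo camera pair, Z = f * b / d.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its disparity between rectified left/right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 700-pixel focal length, 10 cm baseline, 35-pixel disparity -> 2.0 m.
print(stereo_depth(700.0, 0.10, 35.0))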
22. Comparison of different methods
Laser ToF – Advantages: high resolution with high modulation frequency. Disadvantages: fast electronics necessary because of the high speed of light.
Interferometry – Advantages: very high resolution possible, down to small fractions of the light wavelength. Disadvantages: sensitive to small vibrations; difficult to measure larger dimensions.
Laser Line – Advantages: simple and easy to implement. Disadvantages: requires movement of the object or sensor to sweep the line over the object.
Structured Light – Advantages: very flexible technology, allows adjustment of sensitivity and field of view. Disadvantages: limited by shadow effects, doesn't work on transparent or shiny objects.
24. Application: Maintenance on a milling machine with Interferometry
http://www.renishaw.de/media/img/gen/83dff5898c364bdd89085869ab482285.jpg
http://www.wzl.rwth-aachen.de/de/f765080f396ef05fc125778f00383cc6/bilderpool-029-2.jpg
26. Application: Patient Positioning and gating for Cancer treatment
http://www.raeng.org.uk/grants-and-prizes/prizes-and-medals/awards/the-macrobert-award/2017-finalist-vision-rt
27. Dental: Scanning in-vivo and on imprints
http://www.biodentalclinic.com.mx/tecnologia.php http://go.lmi3d.com/medical-applications-in-3d-scanning
29. User Interface: Kinect v1 and v2 – probably the most popular depth sensors (24 million sold as of February 12, 2013)
https://www.dfki.de/web/research/publications/renameFileForDownload?filename=wasenmuller2016comparison.pdf&file_id=uploads_2964
Kinect v1: based on triangulation
Kinect v2: based on time of flight
30. Retail: Bodyscanning and Facescanning
http://www.thinkscan.co.uk/blog-news/biomedical-industry.html
http://www.vfxscan.co.uk/ten24/wp-content/uploads/2012/12/Full-Body-scan-Zbrush2.jpg
32. Market size and development
▪ The 3D metrology market is expected to reach USD 10.90 billion by 2022, up from USD 7.80 billion today
▪ CAGR of 7.0% between 2016 and 2022 (a quick sanity check of these figures follows the list below)
▪ This includes CMMs, still the biggest share, but optical methods are growing faster
▪ The major players in the 3D metrology market include
▪ Hexagon AB (Sweden)
▪ Carl Zeiss AG (Germany)
▪ Faro Technologies, Inc. (U.S.)
▪ Mitutoyo Corporation (Japan)
▪ Nikon Corporation (Japan)
▪ GE Measurement and Control Solutions Inc. (U.S.)
▪ GOM mbH (Germany)
▪ Perceptron Inc. (U.S.)
▪ Renishaw PLC (U.K.)
▪ Zygo Corporation (U.S.)
▪ Advantest Corporation (Japan)
▪ Wenzel Präzision GmbH (Germany)
▪ 3D Digital Corp (U.S.)
▪ Creaform Inc. (Canada)
https://www.linkedin.com/pulse/3d-metrology-market-worth-1090-billion-usd-2022-prashau-kumar
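As a quick sanity check of the slide's figures (my own arithmetic, not from the presentation, assuming roughly five years of compounding from the "today" value to 2022), growing USD 7.80 billion at 7.0% per year lands very close to the projected USD 10.90 billion:

# Quick sanity check of the market projection quoted above (own arithmetic,
# assuming roughly five years of growth from the "today" figure to 2022).
start_usd_billion = 7.80   # market size today, per the slide
cagr = 0.07                # 7.0% compound annual growth rate
years = 5

projected = start_usd_billion * (1 + cagr) ** years
print(f"{projected:.2f} billion USD")  # ~10.94, close to the quoted 10.90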
35. dentCHECK - Application
▪ Common features of 8tree products:
▪ Extremely easy to use, built for shop floor operators
▪ Handheld, battery-powered surface inspection tools
▪ Application-specific 3D scanners
▪ No monitor or keyboard necessary, but AR display of the results
▪ No compromise in precision: 50 µm (0.002”) for dent depth
▪ 1-click report generation
36. What 8tree would like to see from the vision supply chain to enable our future roadmap
▪ Higher image frequency and matching bandwidth on the camera-computer interface
▪ Less power consumption from cameras, projectors and computers
▪ Increased processing power
▪ Support of GPU computing
▪ More compact components
▪ -> shrink the form factor to cell phone size ☺
www.8-tree.com
37. A view into the future…
▪ We are pretty sure optical dimensional sensors of all kinds will be "ubiquitous" soon
▪ We see trends towards:
▪ Mobile and battery-powered devices
▪ Smartphones will eventually replace dedicated devices, with cameras becoming higher resolution, 3D sensors built in, projectors built in and more processing power available
▪ More data will be moved to cloud storage and processing immediately after acquisition
▪ Therefore more bandwidth will be required on wireless systems
▪ Big data and artificial intelligence will be "ubiquitous" as well
▪ Integration of image-based systems into other upcoming systems like drones, robots and big data systems
▪ An example from "our" aerospace and maintenance industry shows what engineers dream of for the next years