For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/sept-2017-alliance-vitf-courtney
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Dr. Patrick Courtney, MBA, of tec-connection and the Standards in Laboratory Automation (SiLA) Consortium delivers the presentation "The Reverse Factory: Embedded Vision in High-Volume Laboratory Applications" at the Embedded Vision Alliance's September 2017 Vision Industry and Technology Forum. In his presentation, Courtney covers the following topics:
▪ Motivation: the need and the market
▪ Big applications today: NGS case study
▪ Improvement curve: Carlson’s curve and what this means
▪ The next applications for imaging
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-kim
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Minyoung Kim, Senior Research Engineer at Panasonic Silicon Valley Laboratory, presents the "A Fast Object Detector for ADAS using Deep Learning" tutorial at the May 2017 Embedded Vision Summit.
Object detection has been one of the most important research areas in computer vision for decades. Recently, deep neural networks (DNNs) have led to significant improvements in several machine learning domains, including computer vision, achieving state-of-the-art performance thanks to their theoretically proven modeling and generalization capabilities. However, it is still challenging to deploy such DNNs on embedded systems for applications such as advanced driver assistance systems (ADAS), where computing power is limited.
Kim and her team focus on reducing the size of the network and required computations, and thus building a fast, real-time object detection system. They propose a fully convolutional neural network that can achieve at least 45 fps on 640x480 frames with competitive performance. With this network, there is no proposal generation step, which can cause a speed bottleneck; instead, a single forward propagation of the network approximates the locations of objects directly.
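As a rough sketch of the proposal-free approach described above, the following PyTorch snippet shows a fully convolutional detection head in which every grid cell predicts class scores and a box in a single forward pass. The layer sizes and anchor-free parameterization are illustrative assumptions, not the network Kim's team built.

```python
# Minimal sketch of a proposal-free, fully convolutional detector
# (illustrative only; layer sizes are assumptions, not the network
# described in the talk).
import torch
import torch.nn as nn

class TinyFCNDetector(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Backbone: strided convolutions downsample a 640x480 input to a coarse grid.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detection head: each grid cell predicts class scores plus a box
        # (cx, cy, w, h) in one forward pass -- no proposal generation stage.
        self.head = nn.Conv2d(64, num_classes + 4, 1)

    def forward(self, x):
        out = self.head(self.backbone(x))     # (N, C+4, H/8, W/8)
        scores = out[:, :-4].sigmoid()        # per-cell class confidences
        boxes = out[:, -4:]                   # per-cell box regressions
        return scores, boxes

detector = TinyFCNDetector()
scores, boxes = detector(torch.randn(1, 3, 480, 640))
print(scores.shape, boxes.shape)
```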
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-zeller
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Sadie Zeller, Manager of Global Product Management and the Clinical Vertical Market at Microscan Systems, presents the "Another Set of Eyes: Machine Vision Automation Solutions for In Vitro Diagnostics" tutorial at the May 2017 Embedded Vision Summit.
In vitro diagnostics (IVD) are tests that can detect diseases, conditions, or infections. The use of automation, including machine vision inspection, in IVD has increased steadily, and is now a standard practice. Vision-based laboratory automation enables greater throughput efficiency and minimizes the risk of human error. But IVD is a challenging application: the healthcare industry requires systems that are, at a minimum, fail-safe, and ideally, error-proof.
Machine vision systems for IVD (and related life sciences) therefore require a robust development phase including an iterative design-validate process to ensure that the system is safe for use. This presentation addresses some of the key requirements and constraints of healthcare vision applications, and highlights approaches for application design and testing to meet tough industry demands.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-ghazali
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Adham Ghazali, co-founder and CEO of Imagry, presents the "Edge Intelligence: Visual Reinforcement Learning for Mobile Devices" tutorial at the May 2017 Embedded Vision Summit.
Real-life visual data encompasses a tremendous amount of information and presents a huge challenge for the design and development of a perceptual engine. Smart machines equipped with visual understanding technology will constantly be confronted with new data. In this talk, Ghazali presents algorithmic methods that enable learning on the end device so that it can understand new data.
The main challenges to learning on the end device stem from limited computing power, limited access to sufficient data samples and the need for human expert involvement. Imagry addresses these challenges by combining a binary weight representation of deep neural networks with reinforcement learning. In particular, Ghazali introduces a self-expanding cost function and the incorporation of external memory to enable DNNs to adapt to new data.
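As a hedged illustration of one ingredient of this approach, the sketch below shows the common binary-weight scheme (signs plus a per-tensor scale, as in XNOR-Net-style methods). It is a generic stand-in, not Imagry's implementation.

```python
import numpy as np

def binarize_weights(w):
    """Approximate a weight tensor by sign(w) times a per-tensor scale.

    This is the common binary-weight scheme (XNOR-Net style, used here
    as a generic stand-in): storing only signs cuts memory roughly 32x
    versus float32 and replaces multiplies with sign flips at inference.
    """
    alpha = np.abs(w).mean()          # L1-optimal scaling factor
    return np.sign(w), alpha

w = np.random.randn(64, 3, 3, 3).astype(np.float32)
signs, alpha = binarize_weights(w)
w_approx = alpha * signs
print("mean reconstruction error:", np.abs(w - w_approx).mean())
```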
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/dec-2017-alliance-vitf-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group, delivers the presentation "Update on Khronos Standards for Vision and Machine Learning" at the Embedded Vision Alliance's December 2017 Vision Industry and Technology Forum. Trevett shares updates on recent, current and planned Khronos standardization activities aimed at streamlining the deployment of embedded vision and AI.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-leontiev
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Anton Leontiev, Embedded Software Architect at ELVEES, JSC, presents the "Designing a Stereo IP Camera From Scratch" tutorial at the May 2017 Embedded Vision Summit.
As the number of cameras in an intelligent video surveillance system increases, server processing of the video quickly becomes a bottleneck. On the other hand, when computer vision algorithms are moved to a resource-limited camera platform, their output quality is often unsatisfactory.
The effectiveness of vision algorithms for surveillance can be greatly improved by using a depth map in addition to the regular image. Thus, using a stereo camera is a way to enable offloading of advanced algorithms from servers to IP cameras. This talk covers the main problems arising during the design of an embedded stereo IP camera, including capturing video streams from two sensors, frame synchronization between sensors, stereo calibration algorithms, and, finally, disparity map calculation.
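For readers who want a concrete starting point, the sketch below covers the last step in that list, disparity computation, using OpenCV on an already-rectified pair. The matcher parameters are typical starting values, the file names are placeholders, and a real stereo IP camera would also need the calibration and hardware frame synchronization steps the talk covers.

```python
# Sketch: disparity map from a rectified stereo pair with OpenCV.
# Calibration (cv2.stereoCalibrate / cv2.stereoRectify) and sensor
# frame synchronization are assumed to have been done already.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are typical starting values.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,       # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,             # smoothness penalties
    P2=32 * 5 * 5,
)
# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", vis.astype("uint8"))
```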
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-jain
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Divya Jain, Technical Director at Tyco Innovation, presents the "End to End Fire Detection Deep Neural Network Platform" tutorial at the May 2017 Embedded Vision Summit.
This presentation dives deep into a real-world problem, fire detection, to see what it takes to build a complete solution using CNNs. Fire is especially challenging because it doesn't have a fixed shape or size like other objects. The presentation begins with a discussion of the technology stack, continues with the algorithm, and concludes with a review of the end-to-end architecture. Jain discusses the challenges her company encountered while training this algorithm and how they worked through them by building a scalable and reusable platform.
Efficient and thorough data collection and its timely analysis are critical for disaster response and recovery in order to save people's lives during disasters. However, access to comprehensive data in disaster areas and their quick analysis to transform the data to actionable knowledge are challenging. With the popularity and pervasiveness of mobile devices, crowdsourcing data collection and analysis has emerged as an effective and scalable solution. This paper addresses the problem of crowdsourcing mobile videos for disasters by identifying two unique challenges of 1) prioritizing visual data collection and transmission under bandwidth scarcity caused by damaged communication networks and 2) analyzing the acquired data in a timely manner. We introduce a new crowdsourcing framework for acquiring and analyzing the mobile videos utilizing fine granularity spatial metadata of videos for a rapidly changing disaster situation. We also develop an analytical model to quantify the visual awareness of a video based on its metadata and propose the visual awareness maximization problem for acquiring the most relevant data under bandwidth constraints. The collected videos are evenly distributed to off-site analysts to collectively minimize crowdsourcing efforts for analysis. Our simulation results demonstrate the effectiveness and feasibility of the proposed framework.
Links:
http://infolab.usc.edu/DocsDemos/to_ieeebigdata2015.pdf
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7363814
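The visual awareness maximization problem in the abstract above is a budgeted selection problem. A common baseline for such problems, shown here purely as an illustration and not necessarily the paper's algorithm, is a greedy knapsack heuristic that picks videos by awareness per byte until the bandwidth budget is exhausted.

```python
def select_videos(videos, bandwidth_budget):
    """Greedy baseline for awareness maximization under a bandwidth cap.

    videos: list of (video_id, awareness_score, size_bytes) tuples.
    Picks items in descending awareness-per-byte order -- a standard
    knapsack heuristic, used only to illustrate the problem structure.
    """
    chosen, used = [], 0
    for vid, awareness, size in sorted(
            videos, key=lambda v: v[1] / v[2], reverse=True):
        if used + size <= bandwidth_budget:
            chosen.append(vid)
            used += size
    return chosen

videos = [("a", 0.9, 50), ("b", 0.7, 20), ("c", 0.4, 10)]
print(select_videos(videos, bandwidth_budget=60))  # -> ['c', 'b']
```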
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/8tree/embedded-vision-training/videos/pages/sept-2017-alliance-vitf
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Erik Klaas of 8tree delivers the presentation "The Evolution of Depth Sensing: From Exotic to Ubiquitous" at the Embedded Vision Alliance's September 2017 Vision Industry and Technology Forum. In his presentation, Klaas covers the following topics:
▪ Why is metrology important?
▪ Categorization of common methodologies for dimensional measurements
▪ Examples and applications of image-based dimensional measurement
▪ Market segmentation, size, challenges and opportunities
▪ What does 8tree do?
▪ A view into the future
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-gallagher
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Paul Gallagher, Senior Director of Technology and Product Planning for LG, presents the "Coming Shift from Image Sensors to Image Sensing" tutorial at the May 2017 Embedded Vision Summit.
The image sensor space is entering the fourth disruption in its evolution. The first three disruptions primarily focused on taking “pretty pictures” for human consumption, evaluation, and storage. The coming disruption will be driven by machine vision moving into the mainstream. Smart homes, offices, cars and devices – as well as AR/MR, biometrics and crowd monitoring – all need to run image data through a processor to activate responses without human viewing. The opportunity this presents is massive, but as growth efficiencies come into play, the solutions will become specialized.
This talk highlights the opportunities that the emerging shift to image-based sensing will bring throughout the imaging and vision industry. It explores the ingredients that industry participants will need in order to capitalize on these opportunities, and why the entrenched players may not be at as great an advantage as might be expected.
The OptIPortal, a Scalable Visualization, Storage, and Computing Termination ...
Larry Smarr
10.04.07
Presentation by Larry Smarr to the NSF Campus Bridging Workshop
University Place Conference Center
Title: The OptIPortal, a Scalable Visualization, Storage, and Computing Termination Device for High Bandwidth Campus Bridging
Indianapolis, IN
Disaster Monitoring using Unmanned Aerial Vehicles and Deep Learning
Andreas Kamilaris
Monitoring and identification of disasters are crucial for mitigating their effects on the environment and on the human population, and can be facilitated by the use of unmanned aerial vehicles (UAVs) equipped with camera sensors, which can produce frequent aerial photos of the areas of interest. A modern, promising technique for recognizing events in aerial photos is deep learning. In this paper, we present the state-of-the-art work related to the use of deep learning techniques for disaster monitoring and identification. Moreover, we demonstrate the potential of this technique to identify disasters automatically, with high accuracy, by means of a relatively simple deep learning model. Based on a small dataset of 544 images (containing images of disasters such as fires, earthquakes, collapsed buildings, tsunami and flooding, as well as “non-disaster” scenes), our preliminary results show an accuracy of 91%, indicating that deep learning, combined with UAVs equipped with camera sensors, has the potential to predict disasters with high accuracy in the near future. Presented at the EnviroInfo 2017 Conference in Luxembourg.
Applying Photonics to User Needs: The Application Challenge
Larry Smarr
05.02.28
Invited Talk to the 4th Annual On*VECTOR International Photonics Workshop
Sponsored by NTT Network Innovation Laboratories
Title: Applying Photonics to User Needs: The Application Challenge
University of California, San Diego
Making Sense of Information Through Planetary Scale Computing
Larry Smarr
09.03.01
Invited Presentation to the
Diamond Exchange—Brave New World
Title: Making Sense of Information Through Planetary Scale Computing
Monterey, CA
Edge-based Discovery of Training Data for Machine Learning
Ziqiang Feng
(Accepted and presented at the Symposium on Edge Computing, Seattle, Oct 2018)
We show how edge-based early discard of data can greatly improve the productivity of a human expert in assembling a large training set for machine learning. This task may span multiple data sources that are live (e.g., video cameras) or archival (data sets dispersed over the Internet). The critical resource here is the attention of the expert. We describe Eureka, an interactive system that leverages edge computing to greatly improve the productivity of experts in this task. Our experimental results show that Eureka reduces the labeling effort needed to construct a training set by two orders of magnitude relative to a brute-force approach.
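The shape of the early-discard pipeline can be sketched with a cheap edge-side filter; here a simple motion-energy threshold stands in for Eureka's actual cascade of filters, which the paper describes in detail.

```python
import numpy as np

def early_discard(frames, threshold=10.0):
    """Yield only frames whose motion energy exceeds a cheap threshold.

    A stand-in for Eureka-style early discard: an inexpensive edge-side
    filter drops the vast majority of data so the human expert only
    sees promising candidates.
    """
    prev = None
    for frame in frames:
        if prev is not None:
            energy = np.abs(frame.astype(float) - prev).mean()
            if energy > threshold:
                yield frame
        prev = frame

# Synthetic stream: mostly static frames with one sudden change.
stream = [np.zeros((48, 64), np.uint8) for _ in range(100)]
stream[40][:] = 255
kept = list(early_discard(stream))
print(f"kept {len(kept)} of {len(stream)} frames")
```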
High Performance Cyberinfrastructure Discovery Tools for Data Intensive Research
Larry Smarr
10.05.03
Keynote Speaker
NAE Grand Challenges Summit
Title: High Performance Cyberinfrastructure Discovery Tools for Data Intensive Research
Seattle, WA
To support vital scientific research in fields as diverse as astrophysics, biomedicine and climate science, SciNet beefed up its high-performance computing resources with a Lenovo ThinkSystem supercomputer 10 times more powerful than its predecessor.
Air monitoring sensors and advanced analytics in exposure assessment
Drew Hill
https://doi.org/10.6084/m9.figshare.12354866.v2
We are in the middle of a movement in environmental sensors that is taking the world by storm; California governments and public health practitioners, in particular, are leading the nation in exploring and implementing environmental sensors in the production of highly granular, real-time air quality information. As this movement matures, we are seeing improved understanding of ambient exposures and insights that are truly actionable, for example informing community emissions reduction plans under the recent Assembly Bill 617. This innovation in air quality sensor science can be leveraged to improve measurements in the industrial and occupational spaces. The movement has also led to innovations in analysis methods that facilitate exposure insights not feasible with standard filter, adsorbent, and general integrated samples. This presentation discusses recent advancements in these spaces and offers brief examples of their implementation and potential applicability to the industrial and occupational hygiene spaces.
Calit2: Experiments in Living in the Virtual/Physical World
Larry Smarr
10.12.15
Invited Talk
"Cultivating Networked Centers of Excellence" CineGrid International Workshop 2010
Title: Calit2: Experiments in Living in the Virtual/Physical World
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/09/may-2021-embedded-vision-summit-opening-remarks-may-27/
Jeff Bier, Founder of the Edge AI and Vision Alliance, welcomes attendees to the May 2021 Embedded Vision Summit on May 27, 2021. Bier provides an overview of the edge AI and vision market opportunities, challenges, solutions and trends. He also introduces the Edge AI and Vision Alliance and the resources it offers for both product creators and members, and reviews the day’s agenda and other logistics.
In this deck from the 2014 HPC User Forum in Seattle, Jack Collins from the National Cancer Institute presents: Genomes to Structures to Function: The Role of HPC.
Watch the video presentation: http://wp.me/p3RLHQ-d28
How to Scale from Workstation through Cloud to HPC in Cryo-EM Processing
inside-BigData.com
In this video from the GPU Technology Conference, Lance Wilson from Monash University presents: How to Scale from Workstation through Cloud to HPC in Cryo-EM Processing.
"Learn how high-resolution imaging is revolutionizing science and dramatically changing how we process, analyze, and visualize at this new scale. We will show the journey a researcher can take to produce images capable of winning a Nobel prize. We'll review the last two years of development in single-particle cryo-electron microscopy processing, with a focus on accelerated software, and discuss benchmarks and best practices for common software packages in this domain. Our talk will include videos and images of atomic resolution molecules and viruses that demonstrate our success in high-resolution imaging."
Watch the video: https://wp.me/p3RLHQ-kcW
Learn more: https://www.monash.edu/researchinfrastructure/cryo-em
and
https://www.nvidia.com/en-us/gtc/home/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
A National Big Data Cyberinfrastructure Supporting Computational Biomedical R...
Larry Smarr
Invited Presentation
Symposium on Computational Biology and Bioinformatics:
Remembering John Wooley
National Institutes of Health
Bethesda, MD
July 29, 2016
High Performance Collaboration
Larry Smarr
08.04.14
Invited Talk
National Astrobiology Institute Executive Council Meeting
Astrobiology Science Conference 2008
Santa Clara Convention Center
Title: High Performance Collaboration
Santa Clara, CA
Metagenomics Over Lambdas: Update on the CAMERA Project
Larry Smarr
07.02.27
Invited Talk
6th Annual ON*VECTOR International Photonics Workshop
Title: Metagenomics Over Lambdas: Update on the CAMERA Project
La Jolla, CA
VariantSpark: applying Spark-based machine learning methods to genomic inform...
Denis C. Bauer
Genomic information is increasingly used in medical practice, giving rise to the need for efficient analysis methodologies able to cope with thousands of individuals and millions of variants. Here we introduce VariantSpark, which utilizes Hadoop/Spark along with its machine learning library, MLlib, providing the means of parallelisation for population-scale bioinformatics tasks. VariantSpark is the interface to the standard variant format (VCF), offers seamless genome-wide sampling of variants and provides a pipeline for visualising results.
To demonstrate the capabilities of VariantSpark, we clustered more than 3,000 individuals with 80 million variants each to determine the population structure in the dataset. VariantSpark is 80% faster than ADAM, the comparable Spark-based genome clustering approach; faster than the equivalent implementation using Hadoop/Mahout; and faster than Admixture, a commonly used tool for determining individual ancestries. It is over 90% faster than traditional implementations using R and Python. These benefits in speed, resource consumption and scalability enable VariantSpark to open up the use of advanced, efficient machine learning algorithms to genomic data.
The package is written in Scala and available at https://github.com/BauerLab/VariantSpark.
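VariantSpark itself is written in Scala (see the repository above), but the population-clustering workflow described in the abstract can be sketched with Spark's Python API. Encoding genotypes as 0/1/2 alternate-allele counts is a standard simplification, not VariantSpark's exact VCF pipeline.

```python
# Sketch of population clustering on genotype data with Spark MLlib.
# Encoding variants as 0/1/2 alternate-allele counts is a common
# simplification; VariantSpark's actual VCF handling is richer.
from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("genotype-clustering").getOrCreate()

# Each row: one individual; features = allele counts at each variant site.
rows = [
    ("ind1", Vectors.dense([0, 1, 2, 0])),
    ("ind2", Vectors.dense([0, 1, 2, 1])),
    ("ind3", Vectors.dense([2, 0, 0, 2])),
    ("ind4", Vectors.dense([2, 0, 1, 2])),
]
df = spark.createDataFrame(rows, ["sample", "features"])

# Cluster individuals into putative population groups.
model = KMeans(k=2, seed=42).fit(df)
model.transform(df).select("sample", "prediction").show()
spark.stop()
```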
DNA sequencing: rapid improvements and their implications
Jeffrey Funk
These slides analyze the rapid improvements in DNA sequencers and the implications of these improvements for drug discovery, new crops, materials creation, and new bio-fuels. Many of the rapid improvements come from “reductions in scale”: as with integrated circuits, reducing the size of features on DNA sequencers has enabled many orders of magnitude of improvement. Unlike integrated circuits, however, the improvements are also due to changes in technology. For example, changes from pyrosequencing to semiconductor and nanopore sequencing have been needed to achieve the reductions in scale, and pyrosequencing itself also benefited from improvements in lasers and camera chips.
Enabling Real Time Analysis & Decision Making - A Paradigm Shift for Experime...
PyData
By Kerstin Kleese van Dam
PyData New York City 2017
New instrument technologies are enabling a new generation of in-situ and in-operando experiments, with extremely fine spatial and temporal resolution, that allow researchers to observe physics, chemistry and biology as they happen. These new methodologies go hand in hand with an exponential growth in data volumes and rates: petabyte-scale data collections and terabyte-per-second streams. At the same time, scientists are pushing for a paradigm shift: now that they can observe processes in intricate detail, they want to analyze, interpret and control those processes. Given the multitude of voluminous, heterogeneous data streams involved in every single experiment, novel real-time, data-driven analysis and decision-support approaches are needed to realize this vision. This talk discusses state-of-the-art streaming analysis for experimental facilities, its challenges and early successes. It presents where commercial technologies can be leveraged and how many of the novel approaches differ from commonly available solutions.
Calit2 - CSE's Living Laboratory for Applications
Larry Smarr
08.05.27
UCSD CSE 91 - Perspectives in Computer Science (Spring 2008)
Calit2@UCSD
Title: Calit2 - CSE's Living Laboratory for Applications
La Jolla, CA
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/opencv-for-high-performance-low-power-vision-applications-on-snapdragon-a-presentation-from-qualcomm/
Xin Zhong, Computer Vision Product Manager at Qualcomm Technologies, presents the “OpenCV for High-performance, Low-power Vision Applications on Snapdragon” tutorial at the May 2024 Embedded Vision Summit.
For decades, the OpenCV software library has been popular for developing computer vision applications. However, developers have found it challenging to create efficient implementations of their OpenCV applications on processors optimized for edge applications, like the Qualcomm Snapdragon family. As part of its comprehensive support for computer vision application developers, Qualcomm provides a variety of tools to enable developers to take full advantage of the heterogeneous computing resources in the Snapdragon processors.
In this talk, Zhong introduces a new element of Qualcomm’s computer vision tools suite: a version of OpenCV optimized for Snapdragon platforms, which allows developers to leverage and port their existing OpenCV-based applications seamlessly to Snapdragon platforms. Supporting OpenCV v4.x and later releases, this implementation contains unique Qualcomm-specific accelerations of OpenCV and OpenCV extension APIs. Zhong explains how this library enables developers to leverage existing OpenCV code to achieve superior performance and power savings on Snapdragon platforms.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/deploying-large-models-on-the-edge-success-stories-and-challenges-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal models on the edge for a variety of use cases in consumer and enterprise markets. He examines key challenges that must be overcome before large models at the edge can reach their full commercial potential. He also highlights how Qualcomm is addressing these challenges through upgraded processor hardware, improved developer tools and a comprehensive library of fully optimized AI models in the Qualcomm AI Hub.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/scaling-vision-based-edge-ai-solutions-from-prototype-to-global-deployment-a-presentation-from-network-optix/
Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at the University of Eindhoven, presents the “Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment” tutorial at the May 2024 Embedded Vision Summit.
The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers them to bring computer vision and perceptual AI into reliable and scalable products. However, integrating recent hardware, software and algorithm innovations into prime-time-ready products is quite challenging. Scaling from a proof of concept—for example, a novel neural network architecture performing a valuable task efficiently on a new piece of silicon—to an AI vision system installed in hundreds of sites requires surmounting myriad hurdles.
First, building on Network Optix’s 14 years of experience, Professor Kaptein details how to overcome the networking, fleet management, visualization and monetization challenges that come with scaling a global vision solution. Second, Kaptein discusses the complexities of making vision AI solutions device-agnostic and remotely manageable, proposing an open standard for AI model deployment to edge devices. The proposed standard aims to simplify market entry for silicon manufacturers and enhance scalability for solution developers. Kaptein outlines the standard’s core components and invites collaborative contributions to drive market expansion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/whats-next-in-on-device-generative-ai-a-presentation-from-qualcomm/
Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit.
The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to allow machines to understand using multiple types of sensors. This new wave of approaches is poised to revolutionize user experiences, disrupt industries and enable powerful new capabilities. For generative AI to reach its full potential, however, we must deploy it on edge devices, providing improved latency, pervasive interaction and enhanced privacy.
In this talk, Hou shares Qualcomm’s vision of the compelling opportunities enabled by efficient generative AI at the edge. He also identifies the key challenges that the industry must overcome to realize the massive potential of these technologies. And he highlights research and product development work that Qualcomm is doing to lead the way via an end-to-end system approach—including techniques for efficient on-device execution of LLMs, LVMs and LMMs, methods for orchestration of large models at the edge and approaches for adaptation and personalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/learning-compact-dnn-models-for-embedded-vision-a-presentation-from-the-university-of-maryland-at-college-park/
Shuvra Bhattacharyya, Professor at the University of Maryland at College Park, presents the “Learning Compact DNN Models for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit.
In this talk, Bhattacharyya explores methods to transform large deep neural network (DNN) models into effective compact models. The transformation process that he focuses on—from large to compact DNN form—is referred to as pruning. Pruning involves the removal of neurons or parameters from a neural network. When performed strategically, pruning can lead to significant reductions in computational complexity without significant degradation in accuracy. It is sometimes even possible to increase accuracy through pruning.
Pruning provides a general approach for facilitating real-time inference in resource-constrained embedded computer vision systems. Bhattacharyya provides an overview of important aspects to consider when applying or developing a DNN pruning method and presents details on a recently introduced pruning method called NeuroGRS. NeuroGRS considers structures and trained weights jointly throughout the pruning process and can result in significantly more compact models compared to other pruning methods.
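NeuroGRS's joint structure-and-weight pruning is specific to the talk, but the basic mechanics of pruning can be shown with PyTorch's built-in utilities. The sketch below uses simple magnitude pruning as a generic stand-in.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small model standing in for a vision backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

# Magnitude pruning: zero the 50% smallest-magnitude weights per conv layer.
# (Generic technique; NeuroGRS itself prunes structures and weights jointly.)
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"{zeros}/{total} parameters pruned to zero")
```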
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/introduction-to-computer-vision-with-cnns-a-presentation-from-mohammad-haghighat/
Independent consultant Mohammad Haghighat presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques, then transitions to explaining the basics of machine learning and convolutional neural networks (CNNs) and showing how CNNs are used in visual perception.
Haghighat illustrates the building blocks and computational elements of neural networks through examples. This session provides an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
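To make those building blocks concrete, here is a minimal convolutional classifier in PyTorch showing the standard convolution/pooling/fully-connected pattern such tutorials cover; the exact layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN: convolutions extract local features, pooling
    downsamples, and a fully connected layer classifies."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                 # x: (N, 3, 32, 32)
        x = self.features(x)              # -> (N, 32, 8, 8)
        return self.classifier(x.flatten(1))

logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```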
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/selecting-tools-for-developing-monitoring-and-maintaining-ml-models-a-presentation-from-yummly/
Parshad Patel, Data Scientist at Yummly, presents the “Selecting Tools for Developing, Monitoring and Maintaining ML Models” tutorial at the May 2023 Embedded Vision Summit.
With the boom in tools for developing, monitoring and maintaining ML models, data science teams have many options to choose from. Proprietary tools provided by cloud service providers are enticing, but teams may fear being locked in—and may worry that these tools are too costly or missing important features when compared with alternatives from specialized providers.
Fortunately, most proprietary, fee-based tools have an open-source component that can be integrated into a home-grown solution at low cost. This can be a good starting point, enabling teams to get started with modern tools without making big investments and leaving the door open to evolve tool selection over time. In this talk, Patel presents a step-by-step process for creating an MLOps tool set that enables you to deliver maximum value as a data scientist. He shares how Yummly built pipelines for model development and put them into production using open-source projects.
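As one concrete example of the open-source route described above, MLflow's tracking API logs experiments locally with no paid service. The parameters and metric values below are placeholders.

```python
# Minimal open-source experiment tracking with MLflow (logs to ./mlruns
# by default -- no cloud service or license required).
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)   # placeholder hyperparameter
    mlflow.log_param("batch_size", 64)
    for epoch, acc in enumerate([0.71, 0.78, 0.82]):  # placeholder metrics
        mlflow.log_metric("val_accuracy", acc, step=epoch)
```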
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/building-accelerated-gstreamer-applications-for-video-and-audio-ai-a-presentation-from-wave-spectrum/
Abdo Babukr, Accelerated Computing Consultant at Wave Spectrum, presents the “Building Accelerated GStreamer Applications for Video and Audio AI” tutorial at the May 2023 Embedded Vision Summit.
GStreamer is a popular open-source framework for creating streaming media applications. Developers often use GStreamer to streamline the development of computer vision and audio perception applications. Since perceptual algorithms are often quite demanding in terms of processing performance, in many cases developers need to find ways to accelerate key GStreamer building blocks, taking advantage of specialized features of their target processor or co-processor.
In this talk, Babukr introduces GStreamer and shows how to use it to build computer vision and audio perception applications. He also shows how to create efficient, high-performance GStreamer applications that utilize specialized hardware features.
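A minimal example of driving GStreamer from Python is sketched below, assuming the PyGObject bindings are installed; videotestsrc stands in for a camera, and on real hardware you would swap in a platform source and any vendor-accelerated elements.

```python
# Minimal GStreamer pipeline from Python via PyGObject.
# videotestsrc stands in for a camera; swap in a platform source and
# hardware-accelerated elements for a real deployment.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=120 ! videoconvert ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream finishes or errors out.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```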
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/understanding-selecting-and-optimizing-object-detectors-for-edge-applications-a-presentation-from-walmart-global-tech/
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit.
Object detectors count objects in a scene and determine their precise locations, while also labeling them. Object detection plays a crucial role in many vision applications, from autonomous driving to smart appliances. In many of these applications, it’s necessary or desirable to implement object detection at the edge.
In this presentation, Laskar explores the evolution of object detection algorithms, from traditional approaches to deep learning-based methods and transformer-based architectures. He delves into widely used approaches for object detection, such as two-stage R-CNNs and one-stage YOLO algorithms, and examines their strengths and weaknesses. And he provides guidance on how to evaluate and select an object detector for an edge application.
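Whichever detector family is chosen, evaluation ultimately rests on intersection over union (IoU) between predicted and ground-truth boxes; a minimal reference implementation:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2).

    IoU is the core overlap measure behind detector metrics such as
    mAP@0.5, regardless of whether the detector is two-stage (R-CNN
    family) or one-stage (YOLO family).
    """
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```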
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/introduction-to-modern-lidar-for-machine-perception-a-presentation-from-the-university-of-ottawa/
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern LiDAR for Machine Perception” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Laganière provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work and their main advantages and disadvantages. He also introduces different approaches to LiDAR, including scanning and flash LiDAR.
Laganière explores the types of data produced by LiDAR sensors and explains how this data can be processed using deep neural networks. He also examines the synergy between LiDAR and cameras, and the concept of pseudo-LiDAR for detection.
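The pseudo-LiDAR idea mentioned above, re-expressing a camera depth map as a 3D point cloud via the camera intrinsics, reduces to a few lines; the intrinsics below are made-up values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to an Nx3 point cloud.

    This is the core of the "pseudo-LiDAR" idea: camera depth,
    re-expressed as 3D points, can feed point-cloud detectors
    originally designed for real LiDAR returns.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 5.0)        # synthetic flat wall 5 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)                       # (307200, 3)
```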
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/vision-language-representations-for-robotics-a-presentation-from-the-university-of-pennsylvania/
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit.
In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing challenge for computer scientists and robotics engineers. In this presentation, Jayaraman provides insights into cutting-edge techniques being used to help robots better understand their surroundings, learn new skills with minimal guidance and become more capable of performing complex tasks.
Jayaraman discusses recent advances in unsupervised representation learning and explains how these approaches can be used to build visual representations that are appropriate for a controller that decides how the robot should act. In particular, he presents insights from his research group’s recent work on how to represent the constituent objects and entities in a visual scene, and how to combine vision and language in a way that permits effectively translating language-based task descriptions into images depicting the robot’s goals.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/adas-and-av-sensors-whats-winning-and-why-a-presentation-from-techinsights/
Ian Riches, Vice President of the Global Automotive Practice at TechInsights, presents the “ADAS and AV Sensors: What’s Winning and Why?” tutorial at the May 2023 Embedded Vision Summit.
It’s clear that the number of sensors per vehicle—and the sophistication of these sensors—is growing rapidly, largely thanks to increased adoption of advanced safety and driver assistance features. In this presentation, Riches explores likely future demand for automotive radars, cameras and LiDARs.
Riches examines which vehicle features will drive demand out to 2030, how vehicle architecture change is impacting the market and what sorts of compute platforms these sensors will be connected to. Finally, he shares his firm’s vision of what the landscape could look like far beyond 2030, considering scenarios out to 2050 for automated driving and the resulting sensor demand.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/computer-vision-in-sports-scalable-solutions-for-downmarkets-a-presentation-from-sportlogiq/
Mehrsan Javan, Co-founder and CTO of Sportlogiq, presents the “Computer Vision in Sports: Scalable Solutions for Downmarket Leagues” tutorial at the May 2023 Embedded Vision Summit.
Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this requires a fully automated, robust end-to-end pipeline, spanning from visual input to player and group activity recognition, to player and team evaluation, to planning. Despite major advancements in computer vision and machine learning, today’s sports analytics solutions are limited to top leagues and are not widely available for downmarket leagues and youth sports.
In this talk, Javan explains how his company has developed scalable and robust computer vision solutions to democratize sports analytics and offer pro-league-level insights to leagues with modest resources, including youth leagues. He highlights key challenges, such as the requirement for low-cost, low-latency processing and the need for robustness despite variations in venues. He discusses the approaches Sportlogiq tried and how it ultimately overcame these challenges, including the use of transformers and the fusion of multiple types of data streams to maximize accuracy.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/detecting-data-drift-in-image-classification-neural-networks-a-presentation-from-southern-illinois-university/
Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Detecting Data Drift in Image Classification Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
An unforeseen change in the input data is called “drift,” and may impact the accuracy of machine learning models. In this talk, Tragoudas presents a novel scheme for diagnosing data drift in the input streams of image classification neural networks. His proposed method for drift detection and quantification uses a threshold dictionary for the prediction probabilities of each class in the neural network model.
The method is applicable to any type of drift in images, such as noise and weather effects, among others. Tragoudas shares experimental results on various data sets, drift types and neural network models to show that his proposed method estimates the drift magnitude with high accuracy, especially when the level of drift significantly impacts the model’s performance.
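The talk's exact scheme is not reproduced here, but a minimal sketch of the general idea, per-class thresholds on prediction probabilities (with all names and numbers being illustrative assumptions), might look like this:

import numpy as np

# Illustrative sketch of probability-threshold drift detection: for each
# predicted class, compare the model's confidence against a threshold learned
# on clean data; a rising rate of below-threshold predictions suggests drift.
def build_threshold_dictionary(probs_by_class, percentile=5):
    # probs_by_class: {class_id: list of max-softmax probabilities on clean data}
    return {c: float(np.percentile(p, percentile)) for c, p in probs_by_class.items()}

def drift_rate(predictions, thresholds):
    # predictions: iterable of (predicted_class, max_softmax_probability)
    preds = list(predictions)
    flagged = sum(1 for c, p in preds if p < thresholds[c])
    return flagged / max(1, len(preds))

thresholds = build_threshold_dictionary({0: [0.90, 0.95, 0.85], 1: [0.80, 0.88, 0.92]})
stream = [(0, 0.97), (1, 0.55), (0, 0.60), (1, 0.91)]
print(f"fraction of low-confidence predictions: {drift_rate(stream, thresholds):.2f}")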
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/deep-neural-network-training-diagnosing-problems-and-implementing-solutions-a-presentation-from-sensor-cortek/
Fahed Hassanat, Chief Operating Officer and Head of Engineering at Sensor Cortek, presents the “Deep Neural Network Training: Diagnosing Problems and Implementing Solutions” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Hassanat delves into some of the most common problems that arise when training deep neural networks. He provides a brief overview of essential training metrics, including accuracy, precision, false positives, false negatives and F1 score.
Hassanat then explores training challenges that arise from problems with hyperparameters, inappropriately sized models, inadequate models, poor-quality datasets, imbalances within training datasets and mismatches between training and testing datasets. To help detect and diagnose training problems, he also covers techniques such as understanding performance curves, recognizing overfitting and underfitting, analyzing confusion matrices and identifying class interaction issues.
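For reference, the metrics mentioned above follow directly from the counts of true/false positives and negatives; a minimal sketch:

# Minimal sketch of the metrics mentioned above, computed from counts of
# true positives (tp), false positives (fp), true negatives (tn) and
# false negatives (fn).
def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=80, fp=10, tn=95, fn=15)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} F1={f1:.2f}")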
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/ai-start-ups-the-perils-of-fishing-for-whales-war-stories-from-the-entrepreneurial-front-lines-a-presentation-from-seechange-technologies/
Tim Hartley, Vice President of Product for SeeChange Technologies, presents the “AI Start-ups: The Perils of Fishing for Whales (War Stories from the Entrepreneurial Front Lines)” tutorial at the May 2023 Embedded Vision Summit.
You have a killer idea that will change the world. You’ve thought through product-market fit and differentiation. You have seed funding and a world-beating team. Best of all, you’ve caught the attention of major players in your industry. You’ve reached peak “start-up”—that point of limitless possibility—when you go to bed with the same level of energy and enthusiasm you had when you woke. And then the first proof of concept starts…
In this talk, Hartley lays out some of the pitfalls that await those building the next big thing. Using real examples, he shares some of the dos and don’ts, particularly when dealing with that big potential first customer. Hartley discusses the importance of end-to-end design, ensuring your product solves real-world problems. He explores how far the big companies will tell you to jump—and then jump again—for free. And, most importantly, how to build long-term partnerships with major corporations without relying on over-promising sales pitches.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/a-computer-vision-system-for-autonomous-satellite-maneuvering-a-presentation-from-scout-space/
Andrew Harris, Spacecraft Systems Engineer at SCOUT Space, presents the “Developing a Computer Vision System for Autonomous Satellite Maneuvering” tutorial at the May 2023 Embedded Vision Summit.
Computer vision systems for mobile autonomous machines experience a wide variety of real-world conditions and inputs that can be challenging to capture accurately in training datasets. Few autonomous systems experience more challenging conditions than those in orbit. In this talk, Harris describes how SCOUT Space has designed and trained satellite vision systems using dynamic and physically informed synthetic image datasets.
Harris describes how his company generates synthetic data for this challenging environment and how it leverages new real-world data to improve our datasets. In particular, he explains how these synthetic datasets account for and can replicate real sources of noise and error in the orbital environment, and how his company supplements them with in-space data from the first SCOUT-Vision system, which has been in orbit since 2021.
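The specific pipeline is SCOUT's own, but as a generic illustration of injecting realistic sensor noise into synthetic frames (all noise types and parameters here are illustrative assumptions, not taken from the talk), one might do something like:

import numpy as np

# Illustrative sketch: add photon shot noise, read noise and hot pixels
# to a synthetic 8-bit grayscale frame (H x W numpy array).
def add_sensor_noise(img, read_noise_sigma=2.0, hot_pixel_frac=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    signal = img.astype(np.float32)
    noisy = rng.poisson(signal).astype(np.float32)            # photon shot noise
    noisy += rng.normal(0.0, read_noise_sigma, signal.shape)  # sensor read noise
    hot = rng.random(signal.shape) < hot_pixel_frac           # stuck/hot pixels
    noisy[hot] = 255.0
    return np.clip(noisy, 0, 255).astype(np.uint8)

frame = np.full((480, 640), 30, dtype=np.uint8)  # dim synthetic background
print(add_sensor_noise(frame).mean())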
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/bias-in-computer-vision-its-bigger-than-facial-recognition-a-presentation-from-santa-clara-university/
Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, presents the “Bias in Computer Vision—It’s Bigger Than Facial Recognition!” tutorial at the May 2023 Embedded Vision Summit.
As AI is increasingly integrated into various industries, concerns about its potential to reproduce or exacerbate bias have become widespread. While the use of AI holds the promise of reducing bias, it can also have unintended consequences, particularly in high-stakes computer vision applications such as facial recognition. However, even seemingly low-stakes computer vision applications such as identifying potholes and damaged roads can also present ethical challenges related to bias.
This talk explores how bias in computer vision often poses an ethical challenge, regardless of the stakes involved. Kennedy discusses the limitations of technical solutions aimed at mitigating bias, and why “bias-free” AI may not be achievable. Instead, she focuses on the importance of adopting a “bias-aware” approach to responsible AI design and explores strategies that can be employed to achieve this.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/sensor-fusion-techniques-for-accurate-perception-of-objects-in-the-environment-a-presentation-from-sanborn-map-company/
Baharak Soltanian, Vice President of Research and Development for the Sanborn Map Company, presents the “Sensor Fusion Techniques for Accurate Perception of Objects in the Environment” tutorial at the May 2023 Embedded Vision Summit.
Increasingly, perceptual AI is being used to enable devices and systems to obtain accurate estimates of object locations, speeds and trajectories. In demanding applications, this is often best done using a heterogeneous combination of sensors (e.g., vision, radar, LiDAR). In this talk, Soltanian introduces techniques for combining data from multiple sensors to obtain accurate information about objects in the environment.
Soltanian briefly introduces the roles played by Kalman filters, particle filters, Bayesian networks and neural networks in this type of fusion. She then examines alternative fusion architectures, such as centralized and decentralized approaches, to better understand the trade-offs associated with different approaches to sensor fusion as used to enhance the ability of machines to understand their environment.
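As a flavour of the first of these building blocks, here is a minimal one-dimensional Kalman filter sketch (constant-velocity model; all numbers are illustrative assumptions) that fuses noisy position measurements into a smoothed position-and-velocity estimate:

import numpy as np

# Minimal 1-D constant-velocity Kalman filter: state is [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (dt = 1)
H = np.array([[1.0, 0.0]])               # we only measure position
Q = np.eye(2) * 0.01                     # process noise covariance
R = np.array([[1.0]])                    # measurement noise covariance

x = np.zeros((2, 1))                     # initial state estimate
P = np.eye(2)                            # initial state covariance

for z in [1.1, 2.0, 2.9, 4.2, 5.0]:      # noisy positions from one sensor
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step with measurement z
    y = np.array([[z]]) - H @ x          # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print("position %.2f, velocity %.2f" % (x[0, 0], x[1, 0]))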
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/updating-the-edge-ml-development-process-a-presentation-from-samsara/
Jim Steele, Vice President of Embedded Software at Samsara, presents the “Updating the Edge ML Development Process” tutorial at the May 2023 Embedded Vision Summit.
Samsara (NYSE:IOT) is focused on digitizing the world of operations. The company helps customers across many industries—including food and beverage, utilities and energy, field services and government—get information about their physical operations into the cloud, so they can operate more safely, efficiently and sustainably. Samsara’s sensors collect billions of data points per day and on-device processing is instrumental to its success. The company is constantly developing, improving and deploying ML models at the edge.
Samsara has found that the traditional development process—where ML scientists create models and hand them off to firmware engineers for embedded implementation—is slow and often produces difficult-to-resolve differences between the original model and the embedded implementation. In this talk, Steele presents an alternative development process that his company has adopted with good results. In this process, firmware engineers develop a general framework that ML scientists use to design, develop and deploy their models. This enables quick iterations and fewer confounding bugs.
"The Reverse Factory: Embedded Vision in High-Volume Laboratory Applications," a Presentation from tec-connection
1. Page 1
The Reverse Factory
Embedded Vision in High-Volume
(and Value) Laboratory Applications
Patrick Courtney
patrick.courtney@tec-connection.com
Embedded Vision Alliance
Hamburg 6th September 2017
V2.6
3. Page 3
Fred Sanger (1918-2013)
• Nobel Prize 1958
• Protein sequencing
• Human insulin
Image credit: MRC Laboratory of Molecular Biology
4. Page 4
Structure of DNA 1953
Crick and Watson
Nobel 1962
Friedrich Miescher 1869
5. Page 5
Fred Sanger (1918-2013)
• Nobel Prize #2
• DNA sequencing 1980
1st generation sequencing
C T G A
Image credit: MRC Laboratory of Molecular Biology
separation by electric field
6. Page 6
Synopsis
• Motivation: the need and the market
• Laboratory as a factory in reverse
• Enabled by science and technology (including imaging)
• Big applications today: NGS case study
• End applications: ourselves and our world, family, food
• How it works: chemistry, optics, software
• Role of imaging in delivering performance
• Improvement curve: Carlson’s curve and what this means
• Cost, speed, growing the market, new applications
• The next applications for imaging
• Scientific & technological trends
• There are still plenty of opportunities
7. Page 7
Laboratory as a factory in reverse:
from sample to information
[Diagram: application sectors spanning life sciences and physical sciences: petrochemicals, industrial, biomedical research, pharmaceutical, clinical, forensics, environmental, materials research, food & drink, consumer goods]
from well behaved to heterogeneous; from solid into liquid form
9. Page 9
Clinical applications of genomics
• Screening
• Diagnosis
• for cancer, infection
• Treatment
• for selection, progress, follow up
• example: breast cancer BRCA1
• Emerging area
• counselling and reproduction
Image credits: cisncancer, pharmainfo.net
10. Page 10
Applications expanding beyond medicine
• Next generation sequencing is now used very widely
• Family
• Food
• Flu
• Forensics
• Fish
• High volume applications of NGS: all that touches on life
11. Page 11
Family: self, ancestry, genealogy
• Self
• Inheritance
• Health risk?
• Regulation
• FDA and terms of use
• (and our pets)
consumer goods
12. Page 12
Flu: Tracing infection Zika 2016
• 4-40 entry points from April
Grubaugh, Nathan D., et al. “Genomic epidemiology reveals multiple introductions of Zika virus into the United States.” Nature 546 (2017): 401–405.
13. Page 13
Aircraft safety: bird strike data - what when and why
Lapwing
Kestrel
Galah
environmental
14. Page 14
Elements of an NGS (next generation) system
• DNA strand
• Flow cell
• Chemistry
• Optics
• Laser
• Camera
• Software
https://www.youtube.com/watch?v=9YxExTSwgPM (sequencing, 5 min)
https://www.youtube.com/watch?v=pfZp5Vgsbw0 (flow cell, 2 min)
illumina
15. Page 15
Role of imaging: the flow cell
Each image 3–4 Mp; 120k images per 36-cycle run = 350 Gb
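As a rough sanity check of the data volume (assuming on the order of one byte per pixel, an assumption not stated on the slide): 120,000 images × ~3 Mp per image ≈ 3.6 × 10¹¹ pixel values, i.e. a few hundred gigabytes per run, the same order of magnitude as the figure quoted.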
16. Page 16
Role of imaging: the optical path
illumina
fluorescence
17. Page 17
Role of imaging: how it works
8 lanes × 100 tiles; 70 bp → 28k images per lane (100 tiles × 70 cycles × 4 colour channels = 28,000 images); 300k clusters per tile; 3 Gb total (illumina)
20. Page 20
Further improvements (1)
Problem: 4 colour channels per image
Solution: from 4 channels to 2 channels
illumina
21. Page 21
Further improvements (1)
Leads to other problems
• But… 2-colour chemistry can overcall high-confidence G bases
The sequence below shows this effect:
@1:11101:2930:2211 1:N:0
ATTTATTATTAATTAAATATTAATAATAAATAGATCGGAAGAGCACACGTCTGAACTCCAGTCACTAGCTTAGCGCGTATGCCGTCGTCGGCGTGCAAAAAAAAAGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
+AAAAAEEEEEEEEEEEE6EEEEEAEEEEEEEEEA/EE<EEEAEE/EAEEAEEEE6</EEEEEA/<//<///A/A//////</E<//////E///A/</A/<<A////A/E<EEEEEEEAEEE/EEEAEAEAEAE6/AEAEE<AAEAEE
It’s easier to see if you visualise the quality scores for this sequence
[Figure: single_seq_quality, a per-base quality score plot for the read above]
https://sequencing.qcfail.com/articles/illumina-2-colour-chemistry-can-overcall-high-confidence-g-bases/
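For readers who want to reproduce such a plot: Illumina FASTQ quality strings use Phred+33 encoding, so each character's ASCII code minus 33 is the Phred quality score. A minimal sketch (the string here is truncated from the read above):

# Decode a FASTQ quality string (Phred+33 encoding, standard in modern
# Illumina output): each character's ASCII code minus 33 is the Phred score.
qual = "AAAAAEEEEEEEEEEEE6EEEEEAEEEEEEEEEA/EE<EEEAEE/EAEE"  # truncated example
scores = [ord(c) - 33 for c in qual]
print(scores[:10])          # per-base Phred quality scores
print(min(scores), max(scores))
# A Phred score Q corresponds to an error probability of 10 ** (-Q / 10),
# so Q30 means a 1-in-1000 chance that the base call is wrong.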
22. Page 22
Further improvements (2)
Problem: cluster density issues
• Cluster density can be demanding
• Especially for some samples
Krueger F, Andrews SR, Osborne CS (2011) Large Scale Loss of Data in Low-Diversity Illumina Sequencing Libraries Can Be Recovered by
Deferred Cluster Calling. PLOS ONE 6(1): e16607. https://doi.org/10.1371/journal.pone.0016607
24. Page 24
Further improvements (2)
Solution: patterned flow cells
• From random spots to fixed positions
• Simplification of analysis
• Increase in density and reliability
• So more data in less time, at lower cost
illumina
25. Page 25
Further improvements (3)
Problem: better use of flow cell
• Solution: use two-surface imaging (image both surfaces of the flow cell)
• Challenging imaging and focussing
• End up with an optical head with 6 linear cameras
Illumina US8143599
26. Page 26
Nature 507, 294–295 (20 March 2014)
Improvement: Carlson’s curve and what this means
27. Page 27
Improvement: Carlson’s curve and what this means
• average of 5× in 2 years (2.4× faster than Moore's law)
• with a peak of 1000× in 2 years
• 100,000× better over 14 years, versus 34 years at Moore's-law rates
Ben Moore, in gnuplot by grendel|khan. - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=31006154
[Chart: cost-per-genome improvement (Carlson's curve) versus Moore's law since 2001, including the peak Carlson rate; photo: Illumina HiSeq2000s at BGI Hong Kong (128 units)]
• Enabled by many technologies
• 100k ≈ 5⁷ and ≈ 2¹⁷
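A sketch of the arithmetic behind that last line: at 5× every 2 years, 14 years of improvement gives 5^(14/2) = 5⁷ ≈ 78,000; at Moore's-law doubling every 2 years, the same factor requires 2¹⁷ ≈ 131,000, i.e. about 34 years. Both are roughly the 100k× quoted above.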
28. Page 28
Or, put another way: if computing had improved as fast….
The IBM PC XT launched in 1983, 34 years ago.
At the improvement rate seen in sequencing,
the IBM PC would have been introduced in 2003,
the same year as Finding Nemo…
….or a car would cost €0.20… or a $1000 flight, 1 cent.
29. Page 29
Market size and trends
• Lab instruments market
• $40bn (instruments/service)
• Segments and growth rate
• Oncology, infection, reproduction, agriculture, forensic, consumer
• Currently $3bn, growing at 30% CAGR to $12bn by 2022
• Market capitalisation:
• Illumina (market cap $28bn) makes a profit of $1.7bn on sales of $2.5bn
• Learnings for the vision supply chain:
• Rewards fall to the users, and system integrator
• Component suppliers get a small % of unit sales
• Driver: cost per genome, not raw speed
• But someone has to learn the application and design the system
Sources: Grand View Research; Macquarie (USA) Research 2014
[Chart: market segments: consumer genomics, agri-genomics, forensics, metagenomics, drug development, immune system monitoring, reproductive health, clinical investigation, oncology]
30. Page 30
Market trends and remaining opportunity
• Remaining potential for clinical applications
• On the cost reduction from $3,000M to $1,000
• Moving WGS (whole genome sequencing) into the doctor's practice
• How many units? How many physicians? 10M
• Remaining potential for all other applications
• How much DNA is there out there?
33. Page 33
Scientific and technological trends
• New science: Nobel prizes
• New imaging modes
• New labels
• New technology
• Sensors
• Optics
• Algorithms
• Robotics
• Drivers: faster, easy to use, more specific, sensitive, robust
34. Page 34
Scientific and technological trends
• New science: Nobel prizes
• New imaging modes
• New labels
• New technology
• Sensors
• Optics
• Algorithms
Evolution of the microscope
• Robotics
• Drivers: faster, easy to use, more specific, sensitive, robust
35. Page 35
Evolution of the microscope since c.1670
Expanding the market
• Drivers: quality, productivity by ease of use and automation
[Images: van Leeuwenhoek microscope, benchtop microscope, modern microscope, imaging plate reader, foldscope; individual cells in fluorescence and brightfield; plate of cells]
36. Page 36
Automated cell counters: from $60k to $5k
Expanding the market
Image credits: Beckman, Invitrogen, Sigma-Aldrich
[Images: automated and manual cell counters]
37. Page 37
Scientific and technological trends
• New science: Nobel prizes
• New imaging modes
• New labels
• New technology
• Sensors, Optics
• Algorithms
• Robotics
• Drivers: faster, easy to use, more specific, sensitive, robust
38. Page 38
Improved scientific knowledge: Nobel prizes for the lab
• The Nobel Prize in Chemistry 2008
• Osamu Shimomura, Martin Chalfie and Roger Y. Tsien†
• for the discovery & development of green fluorescent protein GFP
• New labels (antibodies, nanoparticles…)
• The Nobel Prize in Chemistry 2014
• Eric Betzig, Stefan W. Hell and William E. Moerner
• for the development of super-resolved fluorescence microscopy
• New imaging modes (Raman [1930], IR, spectroscopy…)
39. Page 39
What is the resolution revolution ?
and why imaging is (still) important
40. Page 40
What is the resolution revolution ?
and why imaging is (still) important
Ernst Abbe stated a limit
on resolving power (1873)
By Daniel Mietchen - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=35168637
Abbe’s diffraction limit (credit: Johan Jarnestad /The Royal Swedish Academy of Sciences)
41. Page 41
What is the resolution revolution ?
and why imaging is (still) important
Ernst Abbe stated a limit
on resolving power (1873)
By Daniel Mietchen - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=35168637
Abbe’s diffraction limit (credit: Johan Jarnestad /The Royal Swedish Academy of Sciences)
Resolution scheme: adopted from Thorley et al., Super-resolution Microscopy: A Comparison of Commercially Available Options, Fluorescence Microscopy Super-Resolution and Other Novel Techniques, Academic Press, 2014
43. Page 43
Super-resolution: how it works (1)
http://zeiss-campus.magnet.fsu.edu/articles/superresolution/palm/practicalaspects.html
44. Page 44
New reagents: Brainbow labelling
and why imaging is (still) important
Lichtman et al., Nature Reviews Neuroscience 2008
45. Page 45
Building brainbow from fluorescent proteins
• Motivation: to map all the connections in the brain
• What this means for the imaging supply chain:
• better faster smarter cameras
• multichannel, multifocal xyz-t-λ
Lawson Kurtz et al. / Duke University
46. Page 46
Scientific and technological trends
• New science: Nobel prizes
• New imaging modes
• New reagents
• New technology
• Sensors, optics
• Algorithms
• Robotics
• Lab environment
• Drivers: faster, easy to use, more specific, sensitive, robust
[Image: multiple sensors (AndrewAlliance)]
48. Page 48
What the lab really looks like: it’s a messy place
A long way from lean processes, from Industry 4.0.
If the DNA is the “job description for the cell”, this is what actually happens when it meets the world.
Cancer research makes for a messy bench. … @WorldwideCancer
49. Page 49
Applications tomorrow: watching the lab
• Klavins Lab
• “Aquarium”
• See TEDx talk on “programming” synthetic biology
https://www.youtube.com/watch?v=jL0cG4NJGd4
50. Page 50
Actions on future applications
• Future imaging (super-microscopes)
• Lab (factory) of the future: 20-100 cameras per lab
• Hospital of the future: role of imaging in the lab
• Take home message:
• imaging has proven value but is still present only at a very low level
• Role of EU programmes
51. Page 53
Bringing it all together:
The Healthcare Lighthouse vision
Laboratory
Care
Surgery
Rehabilitation
euRobotics topic groups on medical and laboratory robotics
52. Acknowledgements
• Almost too many to mention, but I’ll try
– DNA sequencing: Illumina, HPA, Qiagen
– Microscopy: PerkinElmer, Sartorius, Stefan Hell
– Cell counting: Luna, Roche, Jenoptik
– Smartlab: Deutsche Messe
– AndrewAlliance, EU, euRobotics
54. Page 58
How much DNA is there out there?
6 × 10³⁰ microbes on earth
55. Page 60
Ebola and the most expensive tent in the world
Dr Sam Collins, Prof Ian Goodfellow
(actually an Ion Torrent machine)
56. Page 61
Super-resolution: how it works (1)
http://zeiss-campus.magnet.fsu.edu/articles/superresolution/palm/practicalaspects.html
57. Page 62
Super-resolution: how it works (2)
Localisation is more precise than resolving.
In effect: trade time for space.
Role of imaging.
Actually, several techniques: http://www.practicallyscience.com/category/bio/cellbio/
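The statistical basis for this trade, a standard result in single-molecule localisation rather than anything specific to this deck: the centre of a point emitter's blurred image can be estimated with a precision of roughly s / √N, where s is the point-spread-function width set by Abbe's diffraction limit and N is the number of photons collected. Gather enough photons over time and a single emitter can be localised far below the diffraction limit, which is the sense in which these techniques trade time for space.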