Deep learning @ Edge using Intel's Neural Compute Stick (geetachauhan)
Talk @ Intel Global IoT DevFest, Nov 2017
The new generation of hardware accelerators is enabling rich, AI-driven intelligent IoT solutions at the edge.
The talk showcased how to use Intel's latest Neural Compute Stick for accelerating deep learning IoT solutions. It also covered use cases and code details for running deep learning models on the Neural Compute Stick.
Best Practices for On-Demand HPC in Enterprises (geetachauhan)
Traditionally, HPC has been popular in scientific domains but not in most other enterprises. With the advent of on-demand HPC in the cloud and the growing adoption of deep learning, HPC should now be a standard platform for any enterprise leading with AI and machine learning. This session covers best practices for building your own on-demand HPC cluster for enterprise workloads, along with key use cases where enterprises benefit from an HPC solution.
These are my slides for the 2012 meeting of all German DFG-funded research training groups (Graduiertenkollegs) in computer science. I present the group METRIK.
Deep learning is making news across the country as one of the most promising techniques in machine learning research. However, these methods are complex to implement, finicky to tune, and state-of-the-art accuracy is only achieved by a few experts in the field. In this session, we give a beginner-friendly explanation of deep learning using neural networks—what it is, what it does, and how; and introduce the concept of deep features, which allows you to obtain great performance with reduced running times and data set sizes. We then show how these methods can easily be deployed on GPU instances (G2) on Amazon EC2.
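The "deep features" idea above can be illustrated with a toy sketch: a fixed, pretrained feature extractor turns raw inputs into compact vectors, and only a small classifier is trained on top. The extractor below is a stand-in function with hand-picked statistics, not an actual pretrained network.

```python
# Toy illustration of the "deep features" approach: freeze a feature
# extractor, train only a tiny head on top of its outputs.

def pretrained_features(x):
    # Stand-in for the frozen network: two summary statistics.
    return (sum(x) / len(x), max(x) - min(x))

def train_threshold(examples):
    # "Training" the small head: midpoint between the class means
    # of the first feature.
    pos = [pretrained_features(x)[0] for x, y in examples if y == 1]
    neg = [pretrained_features(x)[0] for x, y in examples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

examples = [([9, 10, 11], 1), ([8, 9, 10], 1), ([1, 2, 3], 0), ([0, 1, 2], 0)]
threshold = train_threshold(examples)
predict = lambda x: 1 if pretrained_features(x)[0] > threshold else 0
print(predict([7, 8, 9]))  # -> 1
```

Because only the small head is trained, far less data and compute are needed than for training the full network, which is the efficiency gain the session describes.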
IoT data: more and faster is not automatically better.
On optimal sampling strategies, how to calculate whether IoT pays off, and why it does not always have to be deep learning and real-time analytics. (Slides in German/English)
Practical steps for non-machine-learning experts on how to prepare a medical image dataset for deep learning modelling.
Here we use a fundus image dataset as an example, containing controls (healthy eyes) and glaucomatous fundus images with three different severities. In glaucoma the optic disc is of special interest, so we annotate it in the images with a bounding box to help the deep learning training.
A Distributed Deep Learning Approach for the Mitosis Detection from Big Medic... (Databricks)
The strongest indicator of a cancer patient's prognosis is the number of mitotic bodies that a pathologist manually counts in high-resolution whole-slide histopathology images. Manual counting is inefficient, yet automating mitosis detection remains challenging due to limited training datasets and the intensive computation involved in model training and inference. This presentation introduces a large-scale deep learning approach that trains a two-stage CNN-based model to detect mitosis locations with high accuracy directly from high-resolution whole-slide images. In detail, we first train a nuclei detection model to remove background information from the raw whole-slide histopathology images. Second, a customized ResNet-50 model is trained on the dataset cleaned in the first step. The first step saves training time while improving model performance in the second step. A false-positive oversampling approach further improves model performance. With these models, inference detects mitosis locations across a large volume of histopathology images in parallel. Meanwhile, the whole pipeline, including data preprocessing, model training, hyperparameter tuning, and inference, is parallelized using distributed TensorFlow, Apache Spark, and HDFS. The experiences and techniques from this project can be applied to other large-scale deep learning problems as well.
Speaker: Fei Hu
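The two-stage structure described in the abstract can be sketched as a cascade: a cheap first-stage detector discards background patches so the expensive second-stage classifier only sees candidate regions. The classifier functions below are simplistic stand-ins, not the nuclei-detection or ResNet-50 models from the talk.

```python
# Minimal sketch of a two-stage detection cascade. Patches are lists
# of pixel intensities (0-255); both "models" are toy heuristics.

def stage1_has_nuclei(patch):
    # Stand-in for the nuclei-detection model: keep patches whose
    # mean intensity is low (stained tissue is darker than background).
    return sum(patch) / len(patch) < 128

def stage2_is_mitosis(patch):
    # Stand-in for the expensive second-stage classifier.
    return min(patch) < 10

def detect_mitosis(patches):
    # Stage 1 prunes background so stage 2 runs on far fewer patches.
    candidates = [p for p in patches if stage1_has_nuclei(p)]
    return [p for p in candidates if stage2_is_mitosis(p)]

patches = [
    [200, 210, 220],  # bright background: dropped at stage 1
    [50, 60, 70],     # tissue, no mitotic figure: dropped at stage 2
    [5, 90, 100],     # tissue with a dark mitotic figure: kept
]
print(detect_mitosis(patches))  # -> [[5, 90, 100]]
```

The benefit is the same one the abstract claims: because most of a whole-slide image is background, filtering it out first saves the bulk of the second stage's compute.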
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/10/applying-the-right-deep-learning-model-with-the-right-data-for-your-application-a-presentation-from-vision-elements/
Hila Blecher-Segev, Computer Vision and AI Research Associate at Vision Elements, presents the “Applying the Right Deep Learning Model with the Right Data for Your Application” tutorial at the May 2021 Embedded Vision Summit.
Deep learning has made a huge impact on a wide variety of computer vision applications. But while the capabilities of deep neural networks are impressive, understanding how to best apply them is not straightforward. In this talk, Blecher-Segev highlights key questions that must be answered when considering incorporating a deep neural network into a vision application.
What type of data will be most beneficial for the task? Should the DNN use other types of data in addition to images? How should the data be annotated? What classes should be defined? What is the minimum amount of data needed for the network to be generalized and robust? What algorithmic approach should we use for our task (classification, regression or segmentation)? What type of network should we choose (FCN, DCNN, RNN, GAN)? Blecher-Segev explains the options and trade-offs, and maps out a process for making good choices for a specific application.
Edge-based Discovery of Training Data for Machine Learning (Ziqiang Feng)
(Accepted and presented at the Symposium on Edge Computing, Seattle, Oct 2018)
We show how edge-based early discard of data can greatly improve the productivity of a human expert in assembling a large training set for machine learning. This task may span multiple data sources that are live (e.g., video cameras) or archival (data sets dispersed over the Internet). The critical resource here is the attention of the expert. We describe Eureka, an interactive system that leverages edge computing to greatly improve the productivity of experts in this task. Our experimental results show that Eureka reduces the labeling effort needed to construct a training set by two orders of magnitude relative to a brute-force approach.
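The core idea of early discard can be shown in a few lines: a cheap filter runs near the data source and drops obviously irrelevant items, so the expert reviews only a small fraction of the stream. The filter below is a hypothetical keyword test, not one of Eureka's actual computer-vision filters.

```python
# Minimal sketch of edge-based early discard: apply a cheap predicate
# at the edge so only plausible candidates reach the human expert.

def early_discard(stream, cheap_filter):
    # Items failing the filter are dropped at the edge and never
    # consume the expert's attention (the scarce resource).
    return [item for item in stream if cheap_filter(item)]

stream = ["cat", "car", "dog", "cart", "bus"]
survivors = early_discard(stream, lambda s: s.startswith("ca"))
print(survivors)  # -> ['cat', 'car', 'cart']
```

Even a weak filter with false positives helps: the expert still makes the final call, but on a list two orders of magnitude shorter, which is the productivity gain the paper reports.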
This presentation consists of models and explanations of deep learning, artificial intelligence, and today's systems and communications. It was presented at the ITU-T Workshop on Machine Learning for 5G, held at the ITU HQ in Geneva, Switzerland on 29 January 2018. More information on this workshop can be found here: https://www.itu.int/en/ITU-T/Workshops-and-Seminars/20180129/Pages/default.aspx
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/10/building-an-autonomous-detect-and-avoid-system-for-commercial-drones-a-presentation-from-iris-automation/
Alejandro Galindo, Head of Research and Development at Iris Automation, presents the “Building an Autonomous Detect-and-Avoid System for Commercial Drones” tutorial at the May 2021 Embedded Vision Summit.
Commercial and industrial drones have the potential to completely disrupt industries and create new ones. Used in applications such as infrastructure inspection, search and rescue, package delivery, and many others, they can save time, money, and lives. Most of these applications require a real-time understanding of the environment and the risks of collision.
At the same time, commercial drones are limited in the size, weight, and power they can carry, narrowing the options for sensors and computing architectures. In this presentation, Galindo dives into what it takes to build an autonomous detect-and-avoid system for commercial drones and, in particular, focuses on computer vision issues such as predictability and reduction of false positives. Why are they important and what does it take to drive them in the right direction?
Building Interpretable & Secure AI Systems using PyTorch (geetachauhan)
Slides from my talk at Deep Learning World 2020. The talk covered use cases, special challenges, and solutions for building interpretable and secure AI systems using PyTorch:
- Tools for building interpretable models
- How to build secure, privacy-preserving AI models with PyTorch
- Use cases and insights from the field
A practical talk by Anirudh Koul on how to run deep neural networks on memory- and energy-constrained devices like smartphones. It highlights some frameworks and best practices.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/pathpartner/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Praveen Nayak, Tech Lead at PathPartner Technology, presents the "Using Deep Learning for Video Event Detection on a Compute Budget" tutorial at the May 2019 Embedded Vision Summit.
Convolutional neural networks (CNNs) have made tremendous strides in object detection and recognition in recent years. However, extending the CNN approach to understanding video or volumetric data poses tough challenges, including trade-offs between representation quality and computational complexity, which is of particular concern on embedded platforms with tight computational budgets. This presentation explores the use of CNNs for video understanding.
Nayak reviews the evolution of deep representation learning methods involving spatio-temporal fusion, from C3D to Conv-LSTMs, for vision-based human activity detection. He proposes a decoupled alternative to this fusion, describing an approach that combines a low-complexity predictive temporal segment proposal model and a fine-grained (perhaps high-complexity) inference model. PathPartner Technology finds that this hybrid approach, in addition to reducing computational load with minimal loss of accuracy, enables effective solutions to these high-complexity inference tasks.
In this paper, a new steganography algorithm is proposed to enhance the security of data hiding and to increase payload capacity. The algorithm is based on four safety layers. The first layer compresses and encrypts the confidential message using set partitioning in hierarchical trees (SPIHT) and the Advanced Encryption Standard (AES), respectively. In the second layer, an irregular image segmentation (IIS) algorithm is applied to the cover image (Ic), based on adaptive reallocation of segment edges (ARSE) using an adaptive finite-element method (AFEM) to numerically solve the proposed partial differential equation (PDE). In the third layer, an intelligent computing technique using a hybrid adaptive neural network with a modified ant colony optimizer (ANN_MACO) constructs a learning system; this system accepts input via a support vector machine (SVM) to generate input patterns as features of byte attributes and produces new features to modify the cover image. The main innovation of the proposed algorithm lies in the fourth safety layer, which robustly hides a large amount of confidential data, up to six bits per pixel (bpp), in color images. The hiding algorithm resists statistical and visual attacks with high imperceptibility of the hidden data in the stego-images (Is). Experimental results are discussed and compared with previous steganography algorithms, demonstrating that the proposed algorithm significantly improves the security level of steganography by making it an arduous task to retrieve the embedded confidential message from color images.
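The paper's four-layer scheme is elaborate; for orientation, here is plain least-significant-bit (LSB) embedding, the classical baseline that capacity figures such as "bits per pixel" are measured against. This sketch hides one message bit per pixel in the lowest bit and is not the paper's algorithm.

```python
# Classical LSB steganography baseline: each cover pixel (0-255)
# carries one message bit in its least significant bit.

def embed_lsb(pixels, bits):
    # Clear the lowest bit of each pixel, then set it to the message bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    # Recover the first n message bits from the stego pixels.
    return [p & 1 for p in pixels[:n]]

cover = [120, 121, 122, 123]
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
print(stego)                  # -> [121, 120, 123, 123]
print(extract_lsb(stego, 4))  # -> [1, 0, 1, 1]
```

Each pixel value changes by at most 1, which is visually imperceptible but statistically detectable; resisting such statistical attacks at higher payloads (up to 6 bpp) is exactly what the paper's additional layers target.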
Deep learning on mobile - 2019 Practitioner's Guide (Anirudh Koul)
The 2019 guide to deep learning on mobile, from inference to training on iOS and Android smartphones. Featuring Core ML, TensorFlow Lite, ML Kit, Fritz, and AutoML approaches (hardware-aware neural architecture search) to make models more efficient, plus lots of videos. Presented by Anirudh Koul, Siddha Ganju, and Meher Anand Kasam. More details at PracticalDL.ai and in the upcoming O'Reilly book 'Practical Deep Learning on Cloud & Mobile'.
The problem of scene classification in surveillance footage is of great importance for ensuring security in public areas. With challenges such as low-quality feeds, occlusion, viewpoint variation, and background clutter, the task is both difficult and error-prone, so it is important to keep false positives low to maintain high detection accuracy. In this paper, we adapt high-performing CNN architectures to identify abandoned luggage in a surveillance feed. We explore several CNN-based approaches, from transfer learning on the ImageNet dataset to single-shot detection using architectures such as YOLOv3. Using network visualization techniques, we gain insight into what the neural network sees and the basis of its classification decisions. The experiments were conducted on real-world datasets and highlight the complexity of such classifications. The results indicate that a combination of the proposed techniques outperforms the individual approaches.
Author: Utkarsh Contractor
Talk at the ACM SF Bay Area Chapter on deep learning for the medical imaging space.
The talk covers use cases, special challenges, and solutions for deep learning for Medical Image Analysis using TensorFlow + Keras. You will learn about:
- Use cases for Deep Learning in Medical Image Analysis
- Different DNN architectures used for Medical Image Analysis
- Special-purpose compute / accelerators for Deep Learning (in the cloud / on-prem)
- How to parallelize your models for faster training and serving for inference
- Optimization techniques to get the best performance from your cluster (e.g., Kubernetes, Apache Mesos, Spark)
- How to build an efficient data pipeline for Medical Image Analysis using Deep Learning
- Resources to jump-start your journey, like public datasets and common models used in Medical Image Analysis
Using Deep Learning to do Real-Time Scoring in Practical Applications - 2015-... (Greg Makowski)
This talk covers four configurations of deep learning for different types of application needs, along with strategies for speed-up and real-time scoring.
Bridging Concepts and Practice in eScience via Simulation-driven Engineering (Rafael Ferreira da Silva)
The CyberInfrastructure (CI) has been the object of intensive research and development in the last decade, resulting in a rich set of abstractions and interoperable software implementations that are used in production today for supporting ongoing and breakthrough scientific discoveries. A key challenge is the development of tools and application execution frameworks that are robust in current and emerging CI configurations, and that can anticipate the needs of upcoming CI applications. This paper presents WRENCH, a framework that enables simulation-driven engineering for evaluating and developing CI application execution frameworks. WRENCH provides a set of high-level simulation abstractions that serve as building blocks for developing custom simulators. These abstractions rely on the scalable and accurate simulation models provided by the SimGrid simulation framework. Consequently, WRENCH makes it possible to build, with minimal software development effort, simulators that can accurately and scalably simulate a wide spectrum of large and complex CI scenarios. These simulators can then be used to evaluate and/or compare alternate platform, system, and algorithm designs, so as to drive the development of CI solutions for current and emerging applications.
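At the heart of any such framework sits a simulation loop that advances virtual time instead of executing real work. The toy below (not WRENCH's or SimGrid's API) runs a task list on a single simulated host and reports each task's finish time, illustrating how a simulator can compare designs without running the real platform.

```python
# Toy compute simulation: tasks run sequentially on one simulated host
# and the clock advances by each task's compute time (flops / speed).

def simulate(tasks, host_speed):
    # tasks: list of (name, flops); host_speed: flops per second.
    clock, finished = 0.0, []
    for name, flops in tasks:
        clock += flops / host_speed      # virtual time, not wall time
        finished.append((clock, name))
    return finished

order = simulate([("t1", 100.0), ("t2", 50.0)], host_speed=10.0)
print(order)  # -> [(10.0, 't1'), (15.0, 't2')]
```

Swapping in a faster simulated host or a different scheduling order changes the predicted makespan instantly, which is the "evaluate alternate designs cheaply" workflow the paper advocates.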
Implementing AI: Running AI at the Edge: Adapting AI to available resource in... (KTN)
Implementing AI: Running AI at the Edge, hosted by KTN and eFutures, was the second event of the Implementing AI webinar series.
To make products more intelligent, more responsive and to reduce the data generated, it is advantageous to run AI on the product itself, as opposed to in the cloud.
The focus of this webinar was the opportunities and challenges of moving the AI processing to “the Edge”. The webinar had four presentations from experts covering overviews of the opportunity, implementation techniques and case studies.
Find out more: https://ktn-uk.co.uk/news/just-launched-implementing-ai-webinar-series
Many automakers are trying to use machine learning to realize automated driving. GPGPU and approximate computing are being actively studied because conventional CPUs are often disadvantageous for machine learning in terms of performance and energy consumption, and as of today they are sufficiently mature for commercialization. However, considering the high performance and low energy consumption that automobiles several years from now will require, it is not guaranteed that GPGPU and approximate computing can fully satisfy these demands. Therefore, some automakers are considering neuromorphic devices as semiconductor candidates for the next generation of automated driving vehicles. For the past eight months, IBM has been studying, together with Japanese automobile manufacturers, technologies for applying neuromorphic devices to automobiles. We report the technical problems and application areas identified in that study.
We envision a world where devices, machines, automobiles, and things are much more intelligent, simplifying and enriching our daily lives. They will be able to perceive, reason, and take intuitive actions based on awareness of the situation, improving just about any experience and solving problems that to this point we’ve either left to the user, or to more conventional algorithms.
Artificial intelligence (AI) is the technology driving this revolution. You may think that AI is really about big data and the cloud, and yet Qualcomm’s solutions already have the power, thermal, and processing efficiency to run powerful AI algorithms on the actual device. Our current products now support many AI use cases, such as computer vision, natural language processing, and malware detection — both for smartphones and autos — and we are researching broader topics, such as AI for wireless connectivity, power management, and photography. View this presentation to learn about our AI vision, including:
Why mobile is becoming the pervasive AI platform
The benefits of AI moving to the device and complementing the cloud
The benefits of distributed processing for AI
Qualcomm’s long history of AI research and development
What the future of AI processing might look like
CPN211 My Datacenter Has Walls That Move - AWS re:Invent 2012 (Amazon Web Services)
How do you think about computing resources in a world where you can launch and terminate computational capacity in minutes? Amazon EC2 provides a powerful platform for accessing vast computational resources at the click of a button or a simple API call. It is also very different from operating your own data center or having to manage fixed assets in a co-location facility. This talk walks you through examples of how the cloud enables more efficient capacity planning, offers guidance on how developers and organizations can manage thousands of instances efficiently, and highlights tools that make it easy to plan your capacity needs, even when those needs require you to provision the equivalent of a small data center at short notice.
Using Simulation for Decision Support: Lessons Learned from FireGrid
1. Using Simulation for Decision Support: Lessons Learned from FireGrid
Gerhard Wickler (1), George Beckett (2), Liangxiu Han (3), Sung Han Koo (4), Stephen Potter (1), Gavin Pringle (2), Austin Tate (1)
1: AIAI, 2: EPCC, 3: NeSC, 4: SEE, University of Edinburgh, United Kingdom
www.ed.ac.uk
g.wickler@ed.ac.uk
Intelligent Systems @ ISCRAM 2009
2. FireGrid
[Architecture diagram: 1000s of sensors feed data via the Grid into computational (HPC) models driving a super-real-time simulation; I-X technologies link the results to command-and-control for the emergency responders.]
5. FireGrid Final Experiment: User Interface
3D schematized overview of the relevant locations
for each location:
– a double traffic light (current/future hazard level)
– a time-line window on demand, with:
  » a time slider
  » hazard points
  » beliefs with justifications
  » a link for more information
6. Lessons Learned: Overview
[Diagram: data flow from sensor data acquisition, through interpretation, to the simulation software and models running on HPC / Grid resources.]
question: can we re-apply the FireGrid approach in a different scenario, e.g. FloodGrid, QuakeGrid, PandemicGrid, etc.?
lessons learned are structured according to the data flow:
– data acquisition from sensors
– high-performance computing (HPC)
– the Grid
– models and simulation
– intelligent decision support
7. Data Acquisition from Sensors: Overview
aim: collect raw data from available sensors
experiment: ca. 140 sensors of different types (mostly thermocouples) used
caveats for lessons learned:
– sensors used were simple: a single quantity at a specific location; no image data used/analysed
– sensors were pre-installed: exact number and location known; this may not be possible in other scenarios (e.g. an oil spill)
8. Data Acquisition from Sensors: Lessons Learned (1)
Is all the data required by the models actually available?
– problem: models may demand inputs that cannot realistically be measured, e.g. the location of furniture or heat release rates over time
– problem: the number and location of sensors, e.g. the centre of a room is not practical
Can the sensor data be channelled to and processed by the simulator?
– problem: the data logger is set up to write to a file, e.g. when the aim is post-experimental data analysis
– problem: the data is in a proprietary format, e.g. to protect commercial interests
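Channelling logger output to the simulator can be as simple as parsing the logger's rows into typed tuples as they arrive, rather than waiting for a post-experiment file dump. A minimal sketch, assuming a hypothetical CSV layout of (time, sensor id, value); real loggers often use proprietary formats:

```python
import csv
import io

def parse_logger_rows(logfile):
    """Parse data-logger rows into (timestamp, sensor_id, value) tuples
    that can be pushed to the simulator as they are read. The CSV layout
    here is a hypothetical example, not any particular logger's format."""
    for t, sensor_id, value in csv.reader(logfile):
        yield float(t), sensor_id, float(value)

# A small in-memory "log file" standing in for the logger output:
log = io.StringIO("0.0,TC01,21.5\n1.0,TC01,23.0\n1.0,TC02,20.9\n")
readings = list(parse_logger_rows(log))
```

The generator form matters: the consumer sees readings one at a time, which is what a live feed to the simulator needs.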
9. Data Acquisition from Sensors: Lessons Learned (2)
At what frequency can sensor values be expected?
– not a problem in FireGrid
– problem: sensor readings not synchronized
Is there an ontology that describes the required sensor types?
– problem: designing a database to hold the sensor readings
Is there a reliable way of grading the sensor output?
– problem: failing or dislocated sensors give incorrect readings, resulting in poor predictions
  » sensor grading: decide which sensor readings are to be believed
  » FireGrid developed a constraint-based algorithm that produces a consistent picture (minimize violated constraints)
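The constraint-based grading idea can be illustrated with a toy sketch (this is an illustration of the principle, not the actual FireGrid algorithm): treat "neighbouring thermocouples should roughly agree" as a constraint, and score each sensor by how many of its constraints it violates.

```python
def grade_sensors(readings, neighbours, max_diff=200.0):
    """Toy constraint-based sensor grading. A constraint says two
    neighbouring thermocouples should not differ by more than max_diff
    degrees; the sensors violating the most constraints are the least
    believable. Sensor names and the threshold are invented."""
    violations = {s: 0 for s in readings}
    for a, b in neighbours:
        if abs(readings[a] - readings[b]) > max_diff:
            violations[a] += 1
            violations[b] += 1
    return violations

readings = {"TC01": 25.0, "TC02": 27.0, "TC03": 900.0}  # TC03 likely failed
pairs = [("TC01", "TC02"), ("TC02", "TC03"), ("TC01", "TC03")]
scores = grade_sensors(readings, pairs)
# TC03 violates both of its constraints; TC01 and TC02 violate one each.
```

A real grading step would then keep the largest subset of readings that leaves the constraint set consistent.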
10. High Performance Computing: Lessons Learned (1)
How fast does the simulation run on a “normal” computer?
– problem: even linear speed-up might not be sufficient; speed-up is expected from using multiple processors, but linear speed-up is the best case
– problem: current CFD models for fires do not scale well
What is the execution bottleneck for the simulation?
– problem: the computational bottleneck may be input/output operations; using multiple CPUs will then not provide a solution
– problem: inter-process communication may slow down the computation
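The "linear speed-up is the best case" point is Amdahl's law in action; a quick sketch with illustrative numbers:

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Amdahl's law: if only a fraction p of the runtime can be
    parallelised, the speed-up on n CPUs is 1 / ((1 - p) + p / n),
    capped at 1 / (1 - p) no matter how many CPUs are added. Linear
    speed-up (a factor of n) is the unreachable best case once any
    serial or I/O portion remains."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cpus)

# A fire model whose serial and I/O parts take 20% of the runtime
# (an illustrative figure) gains only ~4.7x on 64 CPUs, not 64x:
speedup = amdahl_speedup(0.8, 64)
```

This is why an I/O-bound simulator sees almost no benefit from extra CPUs: the I/O sits in the serial fraction.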
11. High Performance Computing: Lessons Learned (2)
Is the model implementation suitable for running on a (parallel) HPC resource?
– problem: domain experts often produce serial code; the simulation software needs to be parallelized
– approach: ensemble computing (used in FireGrid)
Can the existing implementation be compiled on the HPC resource?
– problem: the simulator (in Fortran) uses non-standard features; it needs to be ported to the HPC platform using a different compiler and libraries
How quickly do simulators need to start running?
– problem: the batch system causes a delay on the HPC resource
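Ensemble computing sidesteps parallelising the solver itself: many serial runs with different parameters execute side by side. A minimal sketch, where the simulator function is a stand-in (real ensemble members would be separate HPC batch jobs):

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulator(growth_rate):
    """Stand-in for one serial fire-simulator run with a given fire
    growth rate; returns a hypothetical peak-heat-release figure."""
    return growth_rate * 100.0

# Launch the ensemble members concurrently rather than parallelising
# the (serial) simulator code itself:
parameter_sets = [0.5, 1.0, 2.0]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_simulator, parameter_sets))
```

The serial code is untouched; the parallelism comes entirely from running independent parameterisations concurrently, which also produces the spread of outcomes used for calibration later.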
12. The Grid: Background
aim: use the Grid to provide on-demand access to HPC resources
Grid: “… a form of distributed computing whereby a ‘super and virtual computer’ is composed of a cluster of networked, loosely coupled computers, acting in concert to perform very large tasks. […] What distinguishes grid computing from conventional cluster computing systems is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed.”
issues:
– not aiming to fully exploit Grid capabilities
– pre-installation of simulation software on heterogeneous systems is very difficult
13. The Grid: Lessons Learned
How many (heterogeneous) computing resources should be available through the Grid?
– advice: start with a small number (one + one spare); this minimizes the porting effort
Is there a Grid expert available?
– problem: software for accessing the Grid still seems experimental
Can the simulator be adapted to the resource it is running on?
– problem: the Grid provides a unified interface, but setting parameters may be necessary to get optimal performance out of an HPC resource
14. Models and Simulation: Lessons Learned
Have the models ever been used to generate predictions?
– problem: the models were developed in a research context; are they usable for predictions? have they been validated?
Can the simulation be “calibrated on the fly”?
– problem: the model may not be able to assimilate live sensor data
– FireGrid approach: parameter sweep
Can the model be used to address “what-if” questions?
– problem: the model does not take into account hypothetical actions of the emergency responders
Can the model assess the accuracy of its own results?
– problem: responders need confidence in the model
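The parameter-sweep route to on-the-fly calibration can be sketched as a selection step: run an ensemble up front, then keep choosing whichever member's predicted trace best matches the live sensor readings (member names and traces below are invented):

```python
def best_member(ensemble, observed):
    """Parameter-sweep calibration sketch: each ensemble member carries
    a predicted temperature trace; pick the member closest (least
    squares) to the live sensor readings, and use its parameters for
    the forward prediction."""
    def error(trace):
        return sum((p - o) ** 2 for p, o in zip(trace, observed))
    return min(ensemble, key=lambda member: error(member[1]))[0]

ensemble = [("slow_growth", [20, 30, 45]),
            ("fast_growth", [20, 60, 140])]
observed = [21, 58, 150]     # latest graded sensor readings
best = best_member(ensemble, observed)
```

This avoids modifying the model to assimilate data directly: the sensors only ever steer the choice among pre-computed runs.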
15. Intelligent Decision Support: Lessons Learned
Are the model outputs in terms the emergency responders can understand?
– problem: the model output is a large amount of numbers; it needs to be contextualized and interpreted
– approaches: an AI system vs. an expert at the emergency
Is there a set of standard operating procedures (SOPs) available?
– SOPs give ways in which a task can be accomplished; their preconditions represent the kind of information decision makers need to know
Can uncertainty about the model results be conveyed to the user in a useful way?
– problem: what do percentages mean to a responder?
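The role of SOP preconditions can be sketched as a simple matching step: a procedure is offered to the decision maker only when the current (graded) beliefs satisfy all of its preconditions. Procedure and fact names here are illustrative, not from FireGrid:

```python
def applicable_sops(sops, beliefs):
    """Return the SOPs whose precondition facts are all contained in
    the current belief set. The preconditions double as a statement of
    what the decision maker needs to know before acting."""
    return [name for name, pre in sops.items() if pre <= beliefs]

sops = {"evacuate_floor": {"hazard_high", "exit_clear"},
        "ventilate_room": {"smoke_detected"}}
beliefs = {"hazard_high", "exit_clear", "temperature_rising"}
actions = applicable_sops(sops, beliefs)
```

Reading the preconditions backwards also tells the system which beliefs it must establish (and justify) before a procedure can be recommended.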
16. Conclusions
aim of this paper: provide lessons learned for people trying to build a system that:
– uses (large amounts of) sensor data to
– steer a super-real-time simulation that
– generates predictions which are the basis for
– decision support for emergency responders
but for a different type of scenario/model, e.g.:
– an oil spill simulator
– a flood simulator (for a river)
Creating such a system requires experts from a variety of technical domains, and pitfalls that are obvious to an expert in one field may be far from obvious to an expert in a different field, even if they are all experts in computing!