The document provides licensing information and legal disclaimers for any intellectual property related to the materials. It notes that the information on products, services, and processes is subject to change and advises contacting an Intel representative for the latest specifications. The document contains optimization notices for Intel compilers and performance tests on Intel microprocessors.
Medical images (CT scans, X-rays) must be segmented to identify the region of interest; the areas of interest must then be classified for diagnosis and reporting, for example when diagnosing lung disease from chest X-rays and CT scans. Segmentation and classification can be a tedious process, and AI can help: Wipro used deep learning to develop a medical image segmentation and diagnosis solution running on Intel’s AI platform.
Advanced Techniques to Accelerate Model Tuning | Software for AI Optimization... | Intel® Software
Learn about the algorithms and associated implementations that power SigOpt, a platform for efficiently conducting model development and hyperparameter optimization. Get started on your AI Developer Journey @ software.intel.com/ai.
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency | S... | Intel® Software
oneDNN Graph API extends oneDNN with a graph interface which reduces deep learning integration costs and maximizes compute efficiency across a variety of AI hardware including AI accelerators. Get started on your AI Developer Journey @ software.intel.com/ai.
Whether you are an AI, HPC, IoT, Graphics, Networking or Media developer, visit the Intel Developer Zone today to access the latest software products, resources, training, and support. Test-drive the latest Intel hardware and software products on DevCloud, our online development sandbox, and use DevMesh, our online collaboration portal, to meet and work with other innovators and product leaders. Get started by joining the Intel Developer Community @ software.intel.com.
Software AI Accelerators: The Next Frontier | Software for AI Optimization Su... | Intel® Software
Software AI Accelerators deliver orders of magnitude performance gain for AI across deep learning, classical machine learning, and graph analytics and are key to enabling AI Everywhere. Get started on your AI Developer Journey @ software.intel.com/ai.
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses... | Intel® Software
Explore how to build a unified framework based on FFmpeg and GStreamer to enable video analytics on all Intel® hardware, including CPUs, GPUs, VPUs, FPGAs, and in-circuit emulators.
Fast Insights to Optimized Vectorization and Memory Using Cache-aware Rooflin... | Intel® Software
Integrated into Intel® Advisor, Cache-aware Roofline Modeling (CARM) provides insight into how an application behaves by helping to determine a) how optimally it works on a given hardware, b) the main factors that limit performance, c) if the workload is memory or compute-bound, and d) the right strategy to improve application performance.
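The roofline model behind CARM reduces to a simple formula: attainable performance is bounded by the minimum of peak compute and arithmetic intensity times memory bandwidth. A minimal sketch of that bound (the peak-FLOPS and bandwidth figures below are illustrative, not measurements of any particular processor):

```python
def roofline(arith_intensity, peak_gflops, bandwidth_gbs):
    """Attainable GFLOP/s under the roofline model.

    arith_intensity: FLOPs performed per byte moved (FLOP/byte).
    A kernel is memory-bound when bandwidth * intensity < peak compute,
    and compute-bound otherwise.
    """
    return min(peak_gflops, arith_intensity * bandwidth_gbs)

# Illustrative machine: 1000 GFLOP/s peak, 100 GB/s memory bandwidth.
# The "ridge point" where the two bounds meet is 1000 / 100 = 10 FLOP/byte.
print(roofline(2.0, 1000, 100))   # memory-bound: 200 GFLOP/s
print(roofline(50.0, 1000, 100))  # compute-bound: 1000 GFLOP/s
```

Plotting this bound against a kernel's measured intensity is exactly how Advisor shows whether a workload is memory- or compute-bound.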
Review state-of-the-art techniques that use neural networks to synthesize motion, such as mode-adaptive neural network and phase-functioned neural networks. See how next-generation CPUs with reinforcement learning can offer better performance.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/acceleration-of-deep-learning-using-openvino-3d-seismic-case-study-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Manas Pathak, Global AI Lead for Oil and Gas at Intel, presents the “Acceleration of Deep Learning Using OpenVINO: 3D Seismic Case Study” tutorial at the September 2020 Embedded Vision Summit.
The use of deep learning for automatic seismic data interpretation is gaining the attention of many researchers across the oil and gas industry. The integration of high-performance computing (HPC) AI workflows in seismic data interpretation brings the challenge of moving and processing large amounts of data from HPC to AI computing solutions and vice-versa.
In this presentation, Pathak illustrates this challenge via a case study using a public deep learning model for salt identification applied on a 3D seismic survey from the F3 Dutch block in the North Sea. He presents a workflow to address this challenge and perform accelerated AI on seismic data. The Intel Distribution of OpenVINO toolkit was used to increase the inference performance of a pre-trained model on an Intel CPU. OpenVINO allows CPU users to get significant improvement in AI inference performance for high memory capacity deep learning models used on large datasets without any significant loss in accuracy.
oneAPI: Industry Initiative & Intel Product | Tyrone Systems
With the growth of AI, machine learning, and data-centric applications, the industry needs a programming model that allows developers to take advantage of rapid innovation in processor architectures. TensorFlow supports the oneAPI industry initiative and its standards-based open specification.
oneAPI complements TensorFlow’s modular design and provides increased choice of hardware vendor and processor architecture, and faster support of next-generation accelerators. TensorFlow uses oneAPI today on Xeon processors and we look forward to using oneAPI to run on future Intel architectures.
Advanced Single Instruction Multiple Data (SIMD) Programming with Intel® Impl... | Intel® Software
Explore practical elements, such as performance profiling, debugging, and porting advice. Get an overview of advanced programming topics, like common design patterns, SIMD lane interoperability, data conversions, and more.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/getting-efficient-dnn-inference-performance-is-it-really-about-the-tops-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Gary Brown, Director of AI Marketing at Intel, presents the “Getting Efficient DNN Inference Performance: Is It Really About the TOPS?” tutorial at the September 2020 Embedded Vision Summit.
This presentation looks at how performance is measured among deep learning inference platforms, starting with the simple peak TOPS metric, why it’s used and why it might be misleading. Brown looks at compute efficiency as measured by real benchmark workload performance and how it relates to peak TOPS, comparing performance across Intel’s inference platforms. He also discusses how developers can use Intel’s DevCloud for the Edge to quickly access Intel’s inference platforms.
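The gap Brown describes can be made concrete: delivered throughput is peak TOPS scaled by the utilization a real workload actually achieves, so a platform with a lower peak but better efficiency can come out ahead. A toy comparison (all numbers invented for illustration, not measurements of any Intel platform):

```python
def delivered_tops(peak_tops, utilization):
    """Effective throughput: peak TOPS scaled by achieved utilization."""
    return peak_tops * utilization

# Platform A: high peak TOPS, but the workload keeps its compute units
# poorly fed (e.g. memory-bound layers), so utilization is low.
a = delivered_tops(20.0, 0.25)  # 5.0 effective TOPS

# Platform B: lower peak, but well matched to the workload.
b = delivered_tops(8.0, 0.75)   # 6.0 effective TOPS

print(a, b, b > a)  # B delivers more despite 2.5x lower peak TOPS
```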
Python Data Science and Machine Learning at Scale with Intel and Anaconda | Intel® Software
Python is the number one language for data scientists, and Anaconda is the most popular Python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, scikit-learn, Modin, pandas, and XGBoost directly from the Anaconda repository through a simple installation and minimal code changes.
Enterprise Video Hosting: Introducing the Intel Video Portal | IT@Intel
Intel IT developed an enterprise video hosting solution in order to meet the needs of employees who wanted to create and share videos in an easy-to-use and secure manner.
RenderMan*: The Role of Open Shading Language (OSL) with Intel® Advanced Vect... | Intel® Software
This talk focuses on the newest release in RenderMan* 22.5 and its adoption at Pixar Animation Studios* for rendering future movies. With native support for Intel® Advanced Vector Extensions, Intel® Advanced Vector Extensions 2, and Intel® Advanced Vector Extensions 512, it includes enhanced library features, debugging support, and an extensive test framework.
The field of machine programming — the automation of the development of software — is making notable research advances. This is, in part, due to the emergence of a wide range of novel techniques in machine learning. In today’s technological landscape, software is integrated into almost everything we do, but maintaining software is a time-consuming and error-prone process. When fully realized, machine programming will enable everyone to express their creativity and develop their own software without writing a single line of code. Intel realizes the pioneering promise of machine programming, which is why it created the Machine Programming Research (MPR) team in Intel Labs. The MPR team’s goal is to create a society where everyone can create software, but machines will handle the “programming” part.
This session was held by Vladimir Brenner, Partner Account Manager, Disruptors & AI, Intel AI at the Dive into H2O: London training on June 17, 2019.
Please find the recording here: https://youtu.be/60o3eyG5OLM
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit... | HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
In this deck from ATPESC 2019, James Moawad and Greg Nash from Intel present: FPGAs and Machine Learning.
"Neural networks are inspired by biological systems, in particular the human brain. Through the combination of powerful computing resources and novel architectures for neurons, neural networks have achieved state-of-the-art results in many domains, such as computer vision and machine translation. FPGAs are a natural choice for implementing neural networks, as they can handle different algorithms in computing, logic, and memory resources in the same device. They offer faster performance compared to competing implementations because the user can hard-code operations into the hardware. Software developers can use the OpenCL device C-level programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design."
Watch the video: https://wp.me/p3RLHQ-lnc
Learn more: https://extremecomputingtraining.anl.gov/archive/atpesc-2019/agenda-2019/
and
https://www.intel.com/content/www/us/en/products/programmable/fpga.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Trends in the convergence of Big Data Analytics, Machine Learning, and Supercomput... | Igor José F. Freitas
The goal of this talk is to show developers how the world of high-performance (parallel) computing is becoming increasingly accessible and democratized through big data and artificial intelligence software. Supercomputers that until recently were used only in niche industries, government sectors, and science are now contributing to solving major challenges in society, industry, and research. The talk takes a technical approach, covering software and hardware concepts, with the aim of encouraging developers to make use of large servers to build innovative applications.
An easy-to-use, automatic, self-contained toolkit that accelerates ODM* benchmarking of NFVi-ready server designs on Intel® Xeon® Scalable server platforms. It uses a golden benchmark to characterize baseline performance of DPDK, QAT, and OVS running on a single Xeon SP server.
DPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel Architecture | Jim St. Leger
Venky Venkatesan presents information on the Data Plane Development Kit (DPDK) including an overview, background, methodology, and future direction and developments.
ONS 2018 LA - Intel Tutorial: Cloud Native to NFV - Alon Bernstein, Cisco & K... | Kuralamudhan Ramakrishnan
The first wave of NFV was about taking a network function and running it as-is in a virtual environment. The web giants follow a different approach called Cloud Native. Cloud Native views the cloud as a huge distributed compute platform, applications are broken into micro-services and deployed in a container based environment using DevOps.
Communication Service Providers are looking to adopt Cloud Native, yet the existing Cloud Native principles are not sufficient to meet their business and NFV use case needs. In this session, Intel and Cisco will explore and share experiences addressing challenges, technology gaps and migration path to Cloud Native for NFV.
Join us to alleviate your concerns around data plane performance, control, and DevOps deployment when using micro-services, Containers, and Kubernetes implementations.
What are latest new features that DPDK brings into 2018? | Michelle Holley
We will provide an overview of the new features of the latest DPDK release, including source-code browsing and an API listing for the top two new features. On top of that, a hands-on lab on Intel® microarchitecture servers will show how getting started with DPDK has become much simpler and more powerful.
AI for All: Biology is eating the world & AI is eating Biology | Intel® Software
Advances in cell biology, and the immense amounts of data they create, are converging with advances in machine learning to analyze that data. Biology is experiencing its AI moment, driving the massive computation involved in understanding biological mechanisms and designing interventions. Learn how cutting-edge technologies such as Software Guard Extensions (SGX) in the latest Intel Xeon processors and Open Federated Learning (OpenFL), an open framework for federated learning developed by Intel, are helping advance AI in gene therapy, drug design, disease identification, and more.
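Federated learning of the kind OpenFL coordinates rests on a simple idea: each site trains on its private data, and only model updates travel to a central aggregator. A minimal federated-averaging sketch in plain Python (this is not the OpenFL API; the weighting scheme and data are illustrative):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg: average client model weights, weighted by local dataset size.

    client_weights: one flat weight vector (list of floats) per site.
    client_sizes: number of local training samples at each site.
    Raw patient data never leaves the sites; only weight vectors travel.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hospitals with different amounts of local data:
global_model = federated_average([[1.0, 0.0], [3.0, 2.0]], [100, 300])
print(global_model)  # [2.5, 1.5]
```

In a real deployment, confidential-computing features such as SGX can additionally protect the aggregation step itself.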
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci | Intel® Software
Preprocess, visualize, and build AI faster at scale on Intel architecture. Develop end-to-end AI pipelines for inference, including data ingestion, preprocessing, and model inference on tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant pipelines at scale with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse platform and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
AI for good: Scaling AI in science, healthcare, and more | Intel® Software
How do we scale AI to its full potential to enrich the lives of everyone on earth? Learn about AI hardware and software acceleration and how Intel AI technologies are being used to solve critical problems in high energy physics, cancer research, financial inclusion, and more. Get started on your AI Developer Journey @ software.intel.com/ai
AWS & Intel Webinar Series - Accelerating AI Research | Intel® Software
Scale your research workloads faster with Intel on AWS. Learn how the performance and productivity of Intel Hardware and Software help bridge the gap between ideation and results in Data Science. Get started on your AI Developer Journey @ software.intel.com/ai.
ANYFACE*: Create Film Industry-Quality Facial Rendering & Animation Using Mai... | Intel® Software
ANYFACE* brings film industry-quality facial rendering and animation to mainstream PC platforms using novel approaches to create face details and control microsurfaces. The solution enables users to create high-fidelity game character facial models using photogrammetry.
Ray Tracing with Intel® Embree and Intel® OSPRay: Use Cases and Updates | SIG... | Intel® Software
Explore practical examples of Intel® Embree and Intel® OSPRay in production rendering and the best practices of using the kernels in typical rendering pipelines.
Use Variable Rate Shading (VRS) to Improve the User Experience in Real-Time G... | Intel® Software
Variable-rate shading (VRS) is a new feature of Microsoft DirectX* 12 and is supported on the 11th generation of Intel® graphics hardware. Get an overview and learn best practices, recommendations, and how to modify traditional 3D effects to take advantage of VRS.
Bring the Future of Entertainment to Your Living Room: MPEG-I Immersive Video... | Intel® Software
Explore the proposed Metadata for Immersive Video (MIV) standard specification. MIV enables real-world content captured by cameras to be viewed by users with Six Degrees of Freedom (6DoF) movement, similar to a VR experience with synthetic content.
In this presentation, we describe a heuristic for modifying the structure of sparse deep convolutional networks during training. The heuristic allows us to train sparse networks directly to reach accuracies on par with those obtained by compressing/pruning big dense models. We show that exploring the network structure during training is essential to reach the best accuracies, even when the optimal network structure is known a priori.
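Prune-and-regrow heuristics of this kind typically drop the smallest-magnitude weights each update and regrow the same number of connections elsewhere, keeping total sparsity fixed while the network explores new structures. A generic sketch (the specific schedule and regrowth criterion here are illustrative, not the authors' exact method):

```python
import random

def prune_and_regrow(weights, mask, k, rng=random):
    """One structural update for a sparse layer.

    weights: flat list of weight values; mask: 1 = connection active.
    Drops the k active weights with the smallest magnitude, then
    reactivates k currently-inactive connections at random, so the
    number of active connections stays constant.
    """
    active = [i for i, m in enumerate(mask) if m]
    inactive = [i for i, m in enumerate(mask) if not m]
    # Prune: deactivate the smallest-magnitude active weights.
    for i in sorted(active, key=lambda i: abs(weights[i]))[:k]:
        mask[i] = 0
        weights[i] = 0.0
    # Regrow: activate random previously-inactive connections (zero-init).
    for i in rng.sample(inactive, k):
        mask[i] = 1
    return weights, mask

w = [0.9, -0.05, 0.4, 0.0, 0.0]
m = [1, 1, 1, 0, 0]
w, m = prune_and_regrow(w, m, k=1)
print(sum(m))  # still 3 active connections; the weakest (index 1) is gone
```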
Intel® AI: Non-Parametric Priors for Generative Adversarial Networks | Intel® Software
This presentation proposes a novel prior, derived using basic theorems from probability theory and off-the-shelf optimizers, to improve the fidelity of image generation with GANs by interpolating along any Euclidean straight line, without any additional training or architecture modifications.
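Interpolating along a Euclidean straight line in latent space is the operation being improved here; the baseline is just a convex combination of two latent vectors, each of which is then fed to the generator. A generic sketch, independent of any particular GAN:

```python
def lerp(z0, z1, t):
    """Point at fraction t along the straight line from z0 to z1."""
    return [(1 - t) * a + t * b for a, b in zip(z0, z1)]

z0, z1 = [0.0, 2.0], [4.0, 0.0]
# Feeding each interpolated z to the generator yields a morph sequence
# between the two generated images.
path = [lerp(z0, z1, t / 4) for t in range(5)]
print(path[2])  # midpoint: [2.0, 1.0]
```

The issue such priors address is that these midpoints can fall in low-density regions of the latent distribution, degrading image fidelity.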
Pmemkv is an open source key-value store for persistent memory, based on the Persistent Memory Development Kit (PMDK). Written in C and C++, it provides optimized bindings for Java*, JavaScript*, and Ruby*, and includes multiple storage engines for different use cases.
Big Data Uses with Distributed Asynchronous Object Storage | Intel® Software
Learn about the architecture and features of Distributed Asynchronous Object Storage (DAOS). This open source object store is based on the Persistent Memory Development Kit (PMDK) for massively distributed non-volatile memory applications.
Debugging Tools & Techniques for Persistent Memory Programming | Intel® Software
Learn about pmempool, a Persistent Memory Development Kit tool that helps you prevent, diagnose, and recover from data corruption. The session also covers other debugging tools for persistent memory programming.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... | Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Essentials of Automations: Optimizing FME Workflows with Parameters | Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
JMeter webinar - integration with InfluxDB and Grafana | RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
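Under the hood, JMeter's InfluxDB backend listener writes metrics as InfluxDB line-protocol points (`measurement,tags fields timestamp`), which Grafana then queries and charts. A hedged sketch of what one such point looks like (the tag and field names here are illustrative, not JMeter's exact schema):

```python
def line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one InfluxDB line-protocol point: measurement,tags fields ts."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

point = line_protocol(
    "jmeter",
    {"transaction": "login"},          # tags: indexed, used for filtering
    {"avg": 120.5, "count": 42},       # fields: the measured values
    1700000000000000000,               # nanosecond timestamp
)
print(point)  # jmeter,transaction=login avg=120.5,count=42 1700000000000000000
```

Grafana panels then aggregate these points per transaction over time, which is what the demonstration in the webinar visualizes.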
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Epistemic Interaction - tuning interfaces to provide information for AI support | Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... | Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it requires vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024, by Tobias Schneck
As AI technology pushes into IT, I wondered, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies that could be beneficial to, or limiting for, your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
UiPath Test Automation using UiPath Test Suite series, part 4, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
"Impact of front-end architecture on development cost", Viktor Turskyi, by Fwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview, by Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo..., by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
3. 3
Dataset citation
A Large and Diverse Dataset for Improved Vehicle Make and Model Recognition
F. Tafazzoli, K. Nishiyama and H. Frigui
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2017.
4. 4
Learning objective
• Use the Intel hardware and software portfolio and demonstrate the data science process
• Hands-on understanding of building a deep learning model and deploying it to the edge
• Use an enterprise image classification problem
• Perform exploratory data analysis on the VMMR dataset
• Choose a framework and network
• Train the model – obtain the graph and weights of the trained network
• Deploy the model on CPU, integrated graphics, and the Intel® Movidius™ Neural Compute Stick
6. 6
Prerequisites
• Basic understanding of AI principles, machine learning, and deep learning
• Coding experience with Python
• Some exposure to different frameworks – TensorFlow*, Caffe*, etc.
• Here are some tutorials to get you started:
  • Introduction to AI
  • Machine Learning
  • Deep Learning
  • Applied Deep Learning with TensorFlow*
15. 15
Breaking barriers between AI theory and reality
Software: nGraph, OpenVINO™ toolkit, Nauta™, ML libraries, BigDL, Intel® MKL-DNN
Hardware: Intel CPU and GPU, Intel AI DevCloud
Community: Intel AI Builders, Intel AI Developer Program
Partner with Intel to accelerate your AI journey:
• Simplify AI via our robust community
• Choose any approach, from analytics to deep learning
• Tame your data deluge with our data layer expertise
• Deploy AI anywhere with unprecedented HW choice
• Speed up development with open AI software
• Scale with confidence on the platform for IT & cloud
www.intel.ai
16.
17. 17
Deploy AI anywhere
with unprecedented hardware choice
Device / edge / multi-cloud options:
• Dedicated media/vision
• Automated driving
• Dedicated DL training (NNP-L)
• Dedicated DL inference (NNP-I)
• Flexible acceleration (FPGA)*
• Graphics, media & analytics acceleration (GPU)
Add acceleration (GPU and/or NNP) as workloads demand.
*FPGA: (1) first to market to accelerate evolving AI workloads; (2) AI plus other system-level workloads like AI + I/O ingest, networking, security, pre/post-processing, etc.; (3) low-latency, memory-constrained workloads like RNN/LSTM
1GNA = Gaussian Neural Accelerator
All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.
Images are examples of intended applications but not an exhaustive list.
18. 18
The deep learning myth
“A GPU is required for deep learning…” FALSE
➢ Most businesses will use the CPU for their AI & deep learning needs1
➢ Some early adopters may reach a tipping point when acceleration is needed
(Chart: DL demand over time, with a CPU zone and an acceleration zone)
1 The “most businesses” claim is based on a survey of Intel direct engagements and internal market segment analysis
22. 22
Intel® AI Academy
software.intel.com/ai
For developers, students, instructors, and startups
• Learn: get smarter using online tutorials, webinars, student kits, and support forums
• Develop: get 4 weeks of FREE access to the Intel® AI DevCloud, use your existing Intel® Xeon® processor-based cluster, or use a public cloud service
• Teach: educate others using available course materials, hands-on labs, and more
• Share: showcase your innovation at industry & academic events and online via the Intel AI community forum
23. 23
Learn more on DevMesh
Opportunities to share your projects as an Intel® Student Ambassador:
▪ Industry events via sponsored speakerships
▪ Student workshops
▪ Ambassador labs
▪ Intel® Developer Mesh
25. 25
Why Intel AI?
Partner with Intel to accelerate your AI journey:
• Simplify AI via our robust community
• Tame your data deluge with our data layer experts
• Choose any approach, from analytics to deep learning
• Speed up development with open AI software
• Deploy AI anywhere with unprecedented HW choice
• Scale with confidence on the engine for IT & cloud
www.intel.ai
26. 26
Resources
• Intel® AI Academy – https://software.intel.com/ai-academy
• Intel® AI Student Kit – https://software.intel.com/ai-academy/students/kits/
• Intel® AI DevCloud – https://software.intel.com/ai-academy/tools/devcloud
• Intel® AI Academy Support Community – https://communities.intel.com/community/tech/intel-ai-academy
• DevMesh – https://devmesh.intel.com
29. 29
Intel AI case study
Challenge: brainstorm opportunities using the 70+ AI solutions in Intel’s portfolio and rank the business value of each approach.
Approach: identify the approach & complexity of each solution with Intel’s guidance; choose high-ROI industrial defect detection (corrosion) using DL1.
Values: discuss ethical, social, legal, security & other risks and mitigation plans with Intel experts prior to kickoff.
People: secure internal buy-in for the AI pilot and a new SW development philosophy; grow talent via the Intel AI Developer Program.
(The Technology, Data, Model, and Deploy phases are covered on the following slides.)
1DL = Deep Learning
30. 30
Intel AI case study (cont’d)
Data: prepare data for model development, working with Intel and/or a partner to get the time-consuming data layer right (~12 weeks).
Model: develop the model by training, testing inference, and documenting results, working with Intel and/or a partner for the pilot (~12 weeks).
Approximate time breakdown:
• Data: Source Data 10%, Transmit Data 5%, Ingest Data 5%, Cleanup Data 60%, Integrate Data 10%, Stage Data 10%
• Model: Train (topology experiments) 30%, Train (tune hyperparameters) 30%, Test Inference 20%, Document Results 20%
Project breakdown is approximated based on engineering estimates for time spent on each step in this real customer POC/pilot; time distribution is expected to be similar but vary somewhat for other deep learning use cases
31. 31
Intel AI case study (cont’d)
Deploy: engage an Intel AI Builders partner to deploy & scale.
Pipeline (solution and service layers): drones → data ingestion → media store → prepare data → training (model store) → inference (label store) → media server → service layer.
Multi-use cluster sizing (from the original architecture diagram):
• Data ingestion: 4 nodes – one ingestion per day, one-day retention
• Media store: 110 nodes – 8 TB/day per camera, 10 cameras, 3x replication, 1-year video retention, 4 mgmt nodes
• Prepare data: 4 nodes – 20M frames per day
• Training: 16 nodes – intermittent use, 1 training/month for <10 hours
• Inference: 2 nodes – infrequent operation
• Service layer: 3 nodes – simultaneous users
• Media server: 3 nodes – 10k clips stored
• Model store: 4 nodes – 1 year of history
• Label store: 4 nodes – labels for 20M frames/day
Per-node hardware: data store – 1x 2S 61xx with 20x 4TB SSD; training – 1x 2S 81xx with 5x 4TB SSD; inference – 1x 2S 81xx with 1x 4TB SSD.
Remote devices: 10 drones, each with 1x Intel® Core™ processor and 1x Intel® Movidius™ VPU, performing real-time object detection and data collection.
Software: TensorFlow* and Intel® MKL-DNN on the cluster; OpenVINO™ toolkit and Intel® Movidius™ SDK on the drones.
32. 32
Key learning
• AI in the real world is much more involved than in the lab
• In most cases, acquiring the data for the challenge at hand and preparing it for training is as time-consuming as the training and model analysis phases
• Most often, the entire process takes weeks to months to complete
33. 33
Addressing the AI journey in the classroom
• An enterprise problem is too large and complex to address in a classroom
• Pick a smaller challenge and understand the steps, to later apply them to your enterprise problems
• The AI journey in the class today will focus on:
  • Defining a challenge
  • Technology choices
  • Obtaining a dataset and exploratory data analysis
  • Training a model and deploying it on CPU, integrated graphics, and the Intel® Movidius™ Neural Compute Stick
36. 36
Step 1 – The challenge
• Identify the challenge – identification of the most stolen cars in the US
• An image recognition problem
• Application – traffic surveillance
• Extensible to license plate detection (not included in the class)
37.
38. 38
Step 5 – Compute choices for training and inference
• Intel® AI DevCloud
• Amazon Web Services* (AWS)
• Microsoft Azure*
• Google Compute Engine* (GCE)
39.
40. 40
Intel® AI DevCloud
▪ A cloud-hosted hardware and software platform available to Intel® AI Academy members to learn, sandbox, and get started on artificial intelligence projects
• Intel® Xeon® Scalable processors (Intel® Xeon® Gold 6128 CPU @ 3.40 GHz, 24 cores with 2-way hyper-threading), 96 GB of on-platform RAM (DDR4), 200 GB of file storage
• 4 weeks of initial access, with extension based upon project needs
• Technical support via the Intel® AI Academy Support Community
• Available now to all AI Academy members
• https://software.intel.com/ai-academy/tools/devcloud
41. 41
Optimized software – no install required
• Intel® Distribution for Python* 2.7 and 3.6, including NumPy, SciPy, pandas, scikit-learn, Jupyter, matplotlib, and mpi4py
• Intel® Optimized Caffe*
• Intel® Optimized TensorFlow*
• Intel® Optimized Theano*
• Keras library
• More frameworks coming as they are optimized
Intel® Parallel Studio XE Cluster Edition and the tools and libraries included with it:
• Intel® C, C++, and Fortran compilers
• Intel® MPI Library
• Intel® OpenMP* library
• Intel® Threading Building Blocks library
• Intel® Math Kernel Library-DNN
• Intel® Data Analytics Acceleration Library
44. 44
Choosing your cloud compute
Amazon Web Services* (AWS):
• Name: C5 or C5n
• vCPUs: 2 – 72
• Memory: 4 GB – 144 GB
Microsoft Azure*:
• Name: Fsv2
• vCPUs: 2 – 72
• Memory: 4 GB – 144 GB
Google Compute Engine* (GCE):
• Name: n1-highcpu
• vCPUs: 2 – 96
• Memory: 1.8 GB – 86.4 GB
What to look for in your compute choices:
• Better: Intel® Xeon® Scalable processor (code-named Skylake) / Best: 2nd Gen Intel® Xeon® Scalable processor (code-named Cascade Lake)
• AVX-512 and VNNI support
• Compute-intensive instance type per cloud service provider
• Memory and vCPU needs are specific to your dataset
45.
46. 46
Step 6 – Exploratory data analysis
• Obtain a starter dataset
• Initial assessment of the data
• Prepare the dataset for the problem at hand
• Identify relevant classes and images
• Preprocess
• Data augmentation
47. 47
Obtain a starter dataset
• Look for existing datasets that are similar to or match the given problem
  • Saves time and money
  • Leverage the work of others
  • Build upon the body of knowledge for future projects
• We begin with the VMMRdb dataset
48. 48
Initial assessment of the dataset
The Vehicle Make and Model Recognition dataset (VMMRdb):
• Large in scale and diversity
• Images are collected from Craigslist
• Contains 9,170 classes
• Identifies 76 car manufacturers
• 291,752 images in total
• Vehicles manufactured between 1950 and 2016
• Explore the VMMR dataset in Optional-Explore-VMMR.ipynb
49. 49
Dataset for the stolen cars challenge
Hottest Wheels: The Most Stolen New And Used Cars In The U.S.
Choose the 10 classes in this problem – shortens training time. The number after each model indicates how many of that model were stolen in 2017:
• Honda Civic (1998): 45,062
• Honda Accord (1997): 43,764
• Ford F-150 (2006): 35,105
• Chevrolet Silverado (2004): 30,056
• Toyota Camry (2017): 17,276
• Nissan Altima (2016): 13,358
• Toyota Corolla (2016): 12,337
• Dodge/Ram Pickup (2001): 12,004
• GMC Sierra (2017): 10,865
• Chevrolet Impala (2008): 9,487
50. 50
Prepare the dataset for the stolen cars challenge
• Map multiple model years to each stolen car category (based on exterior similarity) – provides more samples to work with
• Honda Civic (1998) → Honda Civic (1997–1998)
• Honda Accord (1997) → Honda Accord (1996–1997)
• Ford F-150 (2006) → Ford F150 (2005–2007)
• Chevrolet Silverado (2004) → Chevrolet Silverado (2003–2004)
• Toyota Camry (2017) → Toyota Camry (2012–2014)
• Nissan Altima (2016) → Nissan Altima (2013–2015)
• Toyota Corolla (2016) → Toyota Corolla (2011–2013)
• Dodge/Ram Pickup (2001) → Dodge Ram 1500 (1995–2001)
• GMC Sierra (2017) → GMC Sierra 1500 (2007–2013)
• Chevrolet Impala (2008) → Chevrolet Impala (2007–2009)
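The year-range mapping above is essentially a lookup table from each target class to the VMMRdb classes folded into it. A minimal sketch (the folder-name convention `make_model_year` and the entries shown are illustrative assumptions, not the notebook's actual names):

```python
# Map each stolen-car target class to the VMMRdb year-range classes that feed it.
# Only three of the ten mappings are shown; names are hypothetical.
CLASS_YEAR_MAP = {
    "honda_civic_1998": ["honda_civic_1997", "honda_civic_1998"],
    "honda_accord_1997": ["honda_accord_1996", "honda_accord_1997"],
    "ford_f150_2006": ["ford_f150_2005", "ford_f150_2006", "ford_f150_2007"],
}

def source_classes(target):
    """Return the VMMRdb classes whose images belong to a target class."""
    return CLASS_YEAR_MAP.get(target, [])
```

Merging the images from each listed source class into one target folder is what "provides more samples to work with."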
51. 51
Preprocess the dataset
• Fetch and visually inspect a dataset
• Image preprocessing
• Address the imbalanced-dataset problem
• Organize the dataset into training, validation, and testing groups
• Augment the training data
• Limit overlap between training and testing data
• Ensure sufficient testing and validation datasets
• Complete notebook: Part1-Exploratory_Data_Analysis.ipynb
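Organizing the data into training, validation, and testing groups (one of the steps above) can be sketched without any framework. The 70/15/15 ratios below are an assumption for illustration, not values prescribed by the notebook; shuffling once and slicing into disjoint ranges also limits overlap between training and testing data:

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle once, then slice into disjoint train/val/test lists."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed -> reproducible split
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test
```

In practice the items would be image file paths, split per class so that each class keeps the same ratios.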
52. 52
Inspect the dataset
• Visually inspect the dataset, taking note of variances:
  – ¾ view
  – Front view
  – Back view
  – Side view, etc.
  – Image aspect ratio differs
• Sample class name components:
  – Manufacturer
  – Model
  – Year
53. 53
• Honda Civic (1998)
• Honda Accord (1997)
• Ford F-150 (2006)
• Chevrolet Silverado (2004)
• Toyota Camry (2014)
• Nissan Altima (2014)
• Toyota Corolla (2013)
• Dodge/Ram Pickup (2001)
• GMC Sierra (2012)
• Chevrolet Impala (2008)
Data creation
54. 54
Preprocessing & augmentation
Preprocessing
• Removes inconsistencies and incompleteness in the raw data and cleans it up for model consumption
• Techniques:
  – Black background
  – Rescaling, grayscaling
  – Sample-wise centering, standard normalization
  – Feature-wise centering, standard normalization
  – RGB → BGR
Data augmentation
• Improves the quantity and quality of the dataset
• Helpful when the dataset is small or some classes have less data than others
• Techniques:
  – Rotation
  – Horizontal & vertical shift, flip
  – Zooming & shearing
Learn more about the preprocessing and augmentation methods in Optional-VMMR_ImageProcessing_DataAugmentation.ipynb
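As a framework-free illustration of one augmentation technique listed above, a horizontal flip simply mirrors each row of pixels. The notebook itself would use Keras' ImageDataGenerator for this; the nested-list representation here is just for demonstration:

```python
def horizontal_flip(image):
    """Mirror an image left-to-right; image is a list of rows of pixel values."""
    return [list(reversed(row)) for row in image]

# A tiny 2x3 "image" of single-channel pixel values.
tiny = [[1, 2, 3],
        [4, 5, 6]]
flipped = horizontal_flip(tiny)  # [[3, 2, 1], [6, 5, 4]]
```

Because a flipped car photo is still a valid photo of the same model, such transforms grow the effective training set without collecting new images.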
56. 56
RGB channels
• Images are made of pixels
• Pixels are made of combinations of red, green, and blue channels
57. 57
RGB – BGR
• Depending on the network choice, RGB–BGR conversion is required.
• One way to achieve this is to use Keras* preprocess_input:
>> keras.preprocessing.image.ImageDataGenerator(preprocessing_function=preprocess_input)
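For Caffe-style pretrained weights, preprocess_input reverses the channel order from RGB to BGR (and subtracts the ImageNet channel means); InceptionV3's own preprocess_input instead rescales values to [-1, 1]. The channel swap itself is easy to show without any framework:

```python
def rgb_to_bgr(image):
    """Reverse the channel order of every pixel: (R, G, B) -> (B, G, R)."""
    return [[pixel[::-1] for pixel in row] for row in image]

# A 1x1 pure-red RGB image becomes pure-red in BGR ordering.
red_rgb = [[(255, 0, 0)]]
red_bgr = rgb_to_bgr(red_rgb)  # [[(0, 0, 255)]]
```

Feeding a BGR-trained network RGB images (or vice versa) quietly degrades accuracy, which is why the deck calls this step out.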
61. 61
Step 7 – The training/model phase
Generating a trained model involves multiple steps:
• Choose a framework (TensorFlow*, Caffe*, PyTorch*)
• Choose a network (Inception V3, VGG16, MobileNet, ResNet, etc., or custom)
• Train the model and tune it for better performance
  – Hyperparameter tuning
• Generate a trained model (frozen graph / caffemodel, etc.)
62.
63. 63
Decision metrics for choosing a framework
• Which frameworks is Intel optimizing?
• What are the decision factors for choosing a specific framework?
• Why did we choose TensorFlow?
64. 64
Optimized deep learning frameworks
Install an Intel-optimized framework and featured topology. Get started today at ai.intel.com/framework-optimizations/ (more frameworks are under optimization).
SEE ALSO: machine learning libraries for Python (scikit-learn, pandas, NumPy), R (CART, randomForest, e1071), and distributed (MLlib on Spark, Mahout)
*Limited availability today
Other names and brands may be claimed as the property of others.
65. 65
Caffe/TensorFlow/PyTorch frameworks
Developing deep neural network models can be done faster with machine learning frameworks/libraries. There is a plethora of frameworks to choose from, and the decision of which to use is very important. Some of the criteria to consider are:
1. Open source and level of adoption
2. Optimizations on CPU
3. Graph visualization
4. Debugging
5. Library management
6. Inference target (CPU / integrated graphics / Intel® Movidius™ Neural Compute Stick / FPGA)
Considering all these factors, we decided to use the Google deep learning framework TensorFlow.
66. 66
Why did we choose TensorFlow?
The choice of framework was based on:
Open source and high level of adoption – supports more features, and has the ‘contrib’ package for the creation of more models, which allows support for more higher-level functions.
Optimizations on CPU – TensorFlow with CPU optimizations can give up to 14x speedup in training and 3.2x speedup in inference. TensorFlow is flexible enough to support experimentation with new deep learning models/topologies and system-level optimizations. Intel optimizations have been upstreamed and are part of the public TensorFlow* GitHub repo.
Inference target (CPU/GPU/Movidius/FPGA) – TensorFlow can be scaled or deployed on different types of devices, ranging from CPUs and GPUs to inference on devices as small as mobile phones. TensorFlow has seamless integration with CPU, GPU, and TPU with no need for explicit configuration, support for small-scale and mobile deployments, and TF Serving for server-side deployment. TensorFlow graphs are exportable (pb/ONNX).
67. 67
Why did we choose TensorFlow? (cont’d)
Graph visualization – compared to its closest rivals like Torch and Theano, TensorFlow has better computational graph visualization with TensorBoard.
Debugging – TensorFlow provides a debugger called ‘tfdbg’, which lets you execute subparts of a graph to observe the state of the running graphs.
Library management – TensorFlow has the advantage of consistent performance, quick updates, and regular new releases with new features. This course uses Keras, which will enable an easier transition to TensorFlow 2.0 for training and testing models.
68.
69. 69
How to select a network?
We started this project with inference on an edge device in mind as our ultimate deployment platform. To that end, we considered four things when selecting our topology or network: time to train, size, inference speed, and accuracy.
• Time to train: depending on the number of layers and the computation required, a network can take a significantly shorter or longer time to train. Computation time and programmer time are costly resources, so we wanted reduced training times.
• Size: since we are targeting edge devices and an Intel® Movidius™ Neural Compute Stick, we must consider the size of the network that is allowed in memory, as well as which networks are supported.
• Inference speed: typically, the deeper and larger the network, the slower the inference speed. In our use case we are working with a live video stream; we want at least 10 frames per second on inference.
• Accuracy: it is equally important to have an accurate model. Even though most pretrained models have their accuracy data published, we still need to discover how they perform on our dataset.
70. 70
Inception V3 – VGG16 – MobileNet networks
We decided to train our dataset on three networks that are currently supported on our edge devices (CPU, integrated GPU, Intel® Movidius™ Neural Compute Stick).
The original paper* was trained on ResNet-50; however, ResNet-50 is not currently supported on the Intel® Movidius™ Neural Compute Stick.
The supported networks that we trained the model on:
• Inception V3
• VGG16
• MobileNet
*http://vmmrdb.cecsresearch.org/papers/VMMR_TSWC.pdf
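Retraining one of these networks on the 10-class stolen-cars dataset is typically done by transfer learning: keep the ImageNet-pretrained convolutional base and replace the classification head. A hedged sketch using the Keras API of that era (the pooling layer, optimizer, and freeze-the-base strategy are assumptions, not the course's exact recipe; the function is not called here because it needs the keras package and a weights download):

```python
def build_stolen_cars_model(num_classes=10):
    """Sketch: InceptionV3 base with a new num_classes softmax head."""
    from keras.applications.inception_v3 import InceptionV3
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.models import Model

    base = InceptionV3(weights="imagenet", include_top=False)
    x = GlobalAveragePooling2D()(base.output)   # collapse spatial dims
    out = Dense(num_classes, activation="softmax")(x)
    model = Model(inputs=base.input, outputs=out)

    for layer in base.layers:
        layer.trainable = False  # train only the new head first
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same pattern applies to VGG16 and MobileNet by swapping the imported application model.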
74. 74
Inception V3 – VGG16 – MobileNet
After training and comparing the performance and results based on the previously discussed criteria, our final choice of network was Inception V3, because, out of the three networks:
• MobileNet was the least accurate model (74%) but had the smallest size (16 MB)
• VGG16 was the most accurate (89%) but the largest in size (528 MB)
• Inception V3 had median accuracy (83%) and size (92 MB)
75. 75
Summary
Based on your project's requirements, the choice of framework and topology will differ:
• Time to train
• Size of the model
• Inference speed
• Acceptable accuracy
There is no one-size-fits-all approach to these choices, and there is trial and error in finding your optimal solution.
78. 78
(Optional) Training using VGG16 and MobileNet
• Try out Optional-Training_VGG16.ipynb
• Try out Optional-Training_Mobilenet.ipynb
• See how your training results differ from Inception V3
79. 79
Model analysis
• Understand how to interpret the results of the training by analyzing our model with different metrics and graphs:
  • Confusion matrix
  • Classification report
  • Precision-recall plot
  • ROC plot
• (Optional) Complete notebook – Part3-Model_Analysis.ipynb
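The first of those metrics, the confusion matrix, is simple enough to compute by hand; a minimal sketch for integer class labels (the notebook itself would use scikit-learn's confusion_matrix and classification_report):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m
```

For the 10-class stolen-cars model, off-diagonal entries reveal which car models get confused with each other (for example, visually similar pickup trucks).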
80.
81. 81
Step 8 – The deployment phase
• What does deployment or inference mean?
• What does deploying to the edge mean?
• Understand the Intel® Distribution of OpenVINO™ Toolkit
• Learn how to deploy to CPU, integrated graphics, and the Intel® Movidius™ Neural Compute Stick
83. 83
What is inference on the edge?
• Real-time evaluation of a model subject to the constraints of power, latency, and memory
• Requires AI models that are specially tuned to the above-mentioned constraints
• Models such as SqueezeNet, for example, are tuned for image inferencing on PCs and embedded devices
84.
85. 85
Deep learning vs. traditional computer vision
OpenVINO™ has tools for an end-to-end vision pipeline.
Deep learning computer vision:
▪ Based on the application of a large number of filters to an image to extract features
▪ Features in the object(s) are analyzed with the goal of associating each input image with an output node for each type of object
▪ Values are assigned to each output node, representing the probability that the image is the object associated with that node
Traditional computer vision:
▪ Based on selection and connection of computational filters to abstract key features and correlate them to an object
▪ Works well with well-defined objects and controlled scenes
▪ Difficult to predict critical features in larger numbers of objects or varying scenes
OpenVINO™ components (over an Intel hardware abstraction layer spanning CPU, GPU, VPU, and FPGA):
• Deep Learning Deployment Toolkit (Model Optimizer + Inference Engine) – API solution for pre-trained, optimized deep learning models
• Computer vision libraries (OpenCV*/OpenVX*) – API solution
• Custom code (OpenCL™ C/C++, Intel® SDK for OpenCL™ Applications) – direct coding solution
• Intel® Media SDK
OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos
86.
87. 87
Intel® Deep Learning Deployment Toolkit
Train: train a DL model. Currently supported:
▪ Caffe*
▪ MXNet*
▪ TensorFlow*
Prepare/Optimize: the Model Optimizer handles converting, optimizing, and preparing the model for inference (device-agnostic, generic optimization).
Inference: the Inference Engine is a lightweight API to use in applications for inference.
Optimize/Heterogeneous: the Inference Engine supports multiple devices for heterogeneous flows (device-level optimization): CPU (Intel® Xeon®/Intel® Core™/Intel Atom®, via MKL-DNN), GPU (via clDNN), FPGA (DLA), and Myriad™ 2/X (Intel® Movidius™ API).
Extend: the Inference Engine supports extensibility and allows custom kernels for various devices (extensibility via C++ on CPU, OpenCL™ on GPU, OpenCL™/TBD on FPGA, TBD on VPU).
88. 88
Step 1 – Train a model
• A trained model is the input to the Model Optimizer (MO)
• Use the frozen graph (.pb file) from the Stolen Cars model training as input
• The MO provides tools to convert a trained model to a frozen graph in the event this has not already been done
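For TensorFlow models, the Model Optimizer entry point in that era was the mo_tf.py script. A command sketch (the file names and output directory are hypothetical; the snippet only assembles and prints the command rather than invoking the real converter, which requires an OpenVINO install):

```shell
# Hypothetical paths; adjust to your OpenVINO install and frozen graph.
MODEL=stolen_cars_frozen.pb
OUT_DIR=ir_fp16
CMD="python mo_tf.py --input_model ${MODEL} --data_type FP16 --output_dir ${OUT_DIR}"
echo "${CMD}"
```

Running the real command produces the .xml/.bin Intermediate Representation (IR) pair consumed by the Inference Engine.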
89.
90. 90
Improve performance with the Model Optimizer
▪ An easy-to-use, Python*-based workflow that does not require rebuilding frameworks
▪ Imports models from various frameworks (Caffe*, TensorFlow*, MXNet*; more are planned)
▪ More than 100 models for Caffe*, MXNet*, and TensorFlow* validated
▪ IR files for models using standard layers or user-provided custom layers do not require Caffe*
▪ Fallback to the original framework is possible in cases of unsupported layers, but requires the original framework
Flow: trained model → Model Optimizer (analyze, quantize, optimize topology, convert) → Intermediate Representation (IR) file
91. 91
Improve performance with the Model Optimizer (cont’d)
The Model Optimizer performs generic optimization:
• Node merging
• Horizontal fusion
• Batch normalization to scale shift
• Fold scale shift with convolution
• Drop unused layers (dropout)
• FP16/FP32 quantization
The Model Optimizer can also cut out a portion of the network, when:
• The model has pre/post-processing parts that cannot be mapped to existing layers
• The model has a training part that is not used during inference
• The model is too complex and cannot be converted in one shot
92. 92
Improve performance with the Model Optimizer: example
1. Remove the batch normalization stage.
2. Recalculate the weights to ‘include’ the operation.
3. Merge convolution and ReLU into one optimized kernel.
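Folding batch normalization into the preceding convolution (steps 1 and 2 above) is just arithmetic on the weights: with BN parameters gamma, beta, mean, and var, each weight is scaled by gamma/sqrt(var + eps) and the bias shifted accordingly. A per-channel scalar sketch of that identity:

```python
import math

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Return (w', b') such that BN(w*x + b) == w'*x + b' for one channel."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta
```

The folded network computes identical outputs with one fewer operation per layer, which is exactly why the optimizer performs this rewrite.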
93. 93
Processing standard layers
• To generate IR files, the MO must recognize the layers in the model
• Some layers are standard across frameworks and neural network topologies
  • Examples – convolution, pooling, activation, etc.
• The MO can easily generate the IR representation for these layers
• Framework-specific instructions for using the MO:
  • Caffe: https://software.intel.com/en-us/articles/OpenVINO-Using-Caffe
  • TensorFlow: https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow
  • MXNet: https://software.intel.com/en-us/articles/OpenVINO-Using-MXNet
94. 94
Processing custom layers (optional)
• Custom layers are layers not included in the list of layers known to the MO
• One option: register the custom layers as extensions to the Model Optimizer
  • Independent of the availability of Caffe* on the computer
• Another option: register the custom layers as Custom and use the system Caffe to calculate the output shape of each custom layer
  • Requires the Caffe Python interface on the system
  • Requires the custom layer to be defined in the CustomLayersMapping.xml file
• The process is similar in TensorFlow* as well
95.
96. 96
OptimalModelPerformanceUsingtheInferenceEngine
96
▪ Simple & Unified API for Inference
across all Intel® architecture (IA)
▪ Optimized inference on large IA
hardware targets (CPU/iGPU/FPGA)
▪ Heterogeneity support allows execution
of layers across hardware types
▪ Asynchronous execution improves
performance
▪ Future-proof and scale your development
for future Intel® processors
Transform Models & Data into Results & Intelligence
[Diagram: Applications/services call the Inference Engine Common API; its plug-in architecture dispatches to per-device runtimes — the Intel Math Kernel Library (MKL-DNN) plugin for CPUs (Intel® Xeon®/Core™/Atom®), the clDNN plugin (OpenCL™ intrinsics) for Intel® Integrated Graphics (GPU), the FPGA plugin (DLA) for the Intel® Arria® 10 FPGA, and the Movidius plugin (Movidius API) for the Intel® Movidius™ Myriad™ 2 VPU]
OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos
97. 97
Inference Engine
• The Inference Engine is a C++ library with a set of C++ classes that application developers
use in their applications to infer input data (images) and get the result.
• The library provides an API to read the IR, set input and output formats, and execute the
model on various devices.
100. 100
• Introduction
• Car theft classification
• What it does
• The implementation instructs users on how to develop a working solution to the problem of creating a car theft classification application using Intel® hardware and software tools.
Hands-on Inference – Edge Device Tutorial:
101. 101
• The app uses the pre-trained models from the earlier exercises.
• The model is based on a modified Inception_V3 network derived from a
checkpoint trained on ImageNet with 1000 categories. For the purposes of this
exercise, the last layer of the model was modified to account for only the 10
categories of most-stolen cars.
• Upon getting a frame from the OpenCV's VideoCapture, the application performs
inference with the model. The results are displayed in a frame with the
classification text and performance numbers.
How it Works
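Before inference, each captured frame must be resized to the network's input size and reordered from OpenCV's HWC (height, width, channel) layout to the CHW layout the IR expects — the `transpose((2, 0, 1))` call that appears in the inference steps. A minimal pure-Python sketch of that reordering (the real app uses cv2/NumPy):

```python
def hwc_to_chw(frame):
    """Reorder an image from HWC (rows of pixels, each pixel a list of
    channel values) to CHW (one plane per channel) — equivalent to
    numpy's transpose((2, 0, 1)) used by the demo app."""
    h = len(frame)
    w = len(frame[0])
    c = len(frame[0][0])
    return [[[frame[y][x][ch] for x in range(w)] for y in range(h)]
            for ch in range(c)]

# A tiny 1x2 "image" with 3 channels (e.g. BGR): two pixels.
img = [[[10, 20, 30], [40, 50, 60]]]   # shape HWC = 1x2x3
chw = hwc_to_chw(img)                   # shape CHW = 3x1x2
```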
102. 102
1. Load plugin
plugin = IEPlugin(device=device_option)
2. Read IR / Load Network
net = IENetwork(model=model_xml, weights=model_bin)
3. Configure Input and Output
input_blob, out_blob = next(iter(net.inputs)), next(iter(net.outputs))
4. Load Model
n, c, h, w = net.inputs[input_blob].shape
exec_net = plugin.load(network=net)
Steps to Inference
103. 103
5. Prepare Input
inputs={input_blob: [cv2.resize(frame_, (w, h)).transpose((2, 0, 1))]}
6. Infer
res = exec_net.infer(inputs=inputs)
res = res[out_blob]
7. Process Output
top = res[0].argsort()[-1:][::-1]
pred_label = labels[top[0]]
Steps to Inference (cont'd)
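Step 7 keeps only the top-1 class via `argsort`. The same post-processing can be sketched in plain Python without NumPy (the label names here are made up for illustration):

```python
def top_k(probs, labels, k=1):
    """Return the k (label, score) pairs with the highest scores,
    mirroring res[0].argsort()[-k:][::-1] in the demo code."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return [(labels[i], probs[i]) for i in order[:k]]

# Hypothetical softmax output over three car categories.
labels = ["sedan", "pickup", "suv"]
scores = [0.1, 0.7, 0.2]
best = top_k(scores, labels, k=1)
```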
104.
105. 105
2nd Generation Intel® Xeon® Scalable Processor
Performance | TCO/Flexibility | Security
Built-in Acceleration with
Intel® Deep Learning Boost…
✓ IMT – Intel® Infrastructure Management
Technologies
✓ ADQ – Application Device Queues
✓ SST – Intel® Speed Select Technology
✓ Intel® Security Essentials
✓ Intel® SecL: Intel® Security
Libraries for Data Center
✓ TDT – Intel® Threat Detection Technology
Drop-in compatible CPU on Intel® Xeon® Scalable platform
Begin your AI journey efficiently,
now with even more agility…
Hardware-Enhanced
Security…
Up to 30X deep learning throughput!1 (chart: throughput in img/s)
1 Based on Intel internal testing: 1x, 5.7x, 14x and 30x performance improvement based on Intel® Optimization for Caffe ResNet-50 inference throughput performance on Intel® Xeon® Scalable Processors. See configuration details.
Performance results are based on testing as of 7/11/2017 (1x), 11/8/2018 (5.7x), 2/20/2019 (14x) and 2/26/2019 (30x) and may not reflect all publicly available security updates. No product can be absolutely secure. See configuration
disclosure for details.
Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction
sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use
with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific
instruction sets covered by this notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using
specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your
contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance
109. 109
OpenVINO toolkit support for int8 model inference on Intel processors:
• Convert the model from original framework format using the Model Optimizer
tool. This will output the model in Intermediate Representation (IR) format.
• Perform model calibration using the calibration tool within the Intel
Distribution of OpenVINO toolkit. It accepts the model in IR format and is
framework-agnostic.
• Use the updated model in IR format to perform inference.
Steps to Convert a Trained Model and Infer
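The kind of statistic the calibration step collects can be illustrated with a simple symmetric int8 quantization sketch (plain Python, illustrative only — this is not the actual OpenVINO calibration algorithm): derive a scale from the observed dynamic range, then map FP32 values onto the int8 grid.

```python
def quantize_int8(values):
    """Symmetric linear quantization of FP32 values to int8.
    The scale comes from the largest observed magnitude — the kind of
    per-tensor statistic a calibration pass gathers from sample data."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate FP32 values."""
    return [x * scale for x in q]

q, scale = quantize_int8([-1.0, 0.0, 0.5, 1.0])
approx = dequantize(q, scale)
```

The calibration tool's job is to pick these ranges so that int8 inference stays within an acceptable accuracy drop relative to the FP32 model.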
110.
111.
112. 112
Reinforcement Learning Coach
Open-source Python reinforcement learning framework for developing and
training AI agents.
Coach is a Python reinforcement learning framework containing
implementations of many state-of-the-art algorithms.
It exposes a set of easy-to-use APIs for experimenting with new
reinforcement learning algorithms and allows simple integration of
new environments to solve.
113. 113
• Built with the Intel-optimized version of TensorFlow* to enable efficient training
of RL agents on multi-core CPUs.
• As of release 0.11, Coach also supports MXNet*.
• Additionally, trained models can now be exported using ONNX* to be used in
deep learning frameworks not currently supported by RL Coach.
Framework Support
118. 118
• NLP Architect is an open-source Python library for exploring state-of-the-art deep
learning topologies and techniques for natural language processing and natural
language understanding.
• NLP Architect utilizes the following open source deep learning frameworks:
TensorFlow*, Intel-Optimized TensorFlow* with MKL-DNN, Dynet*
Installation instructions using pip:
pip install nlp-architect
NLP Architect helps you to get started with instructions on the supported
frameworks, NLP models, algorithms and modules.
NLP Architect by Intel® AI Lab
119. 119
An NLP library designed to be flexible, easy to extend, to allow easy and rapid
integration of NLP models in applications, and to showcase optimized models.
Features:
• Core NLP models used in many NLP tasks and useful in many NLP applications
• Novel NLU models showcasing novel topologies and techniques
• Simple REST API server (doc):
– serving trained models (for inference)
– plug-in system for adding your own model
• Based on optimized Deep Learning frameworks:
– TensorFlow
– Intel-Optimized TensorFlow with MKL-DNN
– Dynet
Overview
121. 121
NLP Architect – Research-driven NLP/NLU Models
The library contains state-of-the-art and novel NLP and NLU models in a variety of
topics:
• Dependency parsing
• Intent detection and Slot tagging model for Intent based applications
• Memory Networks for goal-oriented dialog
• Noun phrase embedding vectors model
• Noun phrase semantic segmentation
• Named Entity Recognition
• Word Chunking
122. 122
NLP Architect – Research-driven NLP/NLU Models
Others include:
• Reading comprehension
• Language modeling using Temporal Convolution Network
• Unsupervised Cross lingual Word Embedding
• Supervised sentiment analysis
• Sparse and quantized neural machine translation
• Relation Identification and cross document co-reference
Over time, the list of models included in this space will change.
125. 125
• Multi-user, distributed computing environment for running deep learning model
training experiments
• Results of experiments can be viewed and monitored using a command-line
interface, web UI and/or TensorBoard*
• Use existing data sets, use your own data, or download data from online
sources, and create public or private folders to make collaboration among
teams easier
• Runs using the industry leading Kubernetes* and Docker* platform for
scalability and ease of management
Benefits
128. 128
• Rich deep learning support: Provides comprehensive support for deep
learning, including numeric computing via Tensor and high-level neural
networks. In addition, you can load pretrained Caffe* or Torch* models into the
Spark framework, and then use the BigDL library to run inference applications
on your data.
• Efficient scale-out: Perform data analytics at "big data scale" using Spark,
with efficient implementations of synchronous stochastic gradient descent (SGD)
and all-reduce communications in Spark.
• Extremely high performance: Uses Intel® Math Kernel Library (Intel® MKL) and
multithreaded programming in each Spark task. Designed and optimized for
Intel® Xeon® processors, BigDL and Intel® MKL provide you extremely high
performance.
What is BigDL
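The "synchronous SGD with all-reduce" mentioned above can be sketched in plain Python (illustrative only — BigDL implements this on top of Spark's distributed primitives): each worker computes a gradient on its data partition, the gradients are all-reduced (summed and averaged), and every worker applies the identical update.

```python
def all_reduce_mean(grads_per_worker):
    """Average gradients element-wise across workers — the all-reduce
    step of synchronous SGD."""
    n = len(grads_per_worker)
    return [sum(g[i] for g in grads_per_worker) / n
            for i in range(len(grads_per_worker[0]))]

def sync_sgd_step(weights, grads_per_worker, lr=0.1):
    """One synchronous SGD step: all-reduce the per-worker gradients,
    then apply the same averaged update on every worker, keeping all
    model replicas identical."""
    avg = all_reduce_mean(grads_per_worker)
    return [w - lr * g for w, g in zip(weights, avg)]

# Two workers, each holding a gradient for a 2-parameter model.
new_w = sync_sgd_step([1.0, 1.0], [[0.2, 0.4], [0.6, 0.0]], lr=0.5)
```

Because every worker sees the same averaged gradient, the replicas stay in sync without a parameter server — the property that lets BigDL scale training across a Spark cluster.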
129. 129
Analyze a large amount of data on the same big data Spark cluster on which the
data reside (HDFS, Apache HBase*, or Hive);
Add deep learning functionality (either training or prediction) to your big data
(Spark) programs or workflow
Use existing Hadoop/Spark clusters to run your deep learning applications, which
you can then easily share with other workloads (e.g., extract-transform-load, data
warehouse, feature engineering, classical machine learning, graph analytics).
Why use BigDL
130. 130
Distributed deep learning framework
for Apache Spark
Standard Spark programs
• Run on existing Spark/Hadoop clusters
(no changes needed)
Feature parity with popular DL
frameworks
• E.g., Caffe, Torch, Tensorflow, etc.
High performance (on CPU)
• Powered by the Intel® Math Kernel Library for Deep
Neural Networks (Intel® MKL-DNN) and multi-threaded
programming
Efficient scale-out
• Leveraging Spark for distributed training &
inference
BigDL – Bringing Deep Learning to the Big Data Platform
[Diagram: BigDL runs as a standard Spark library alongside SQL, SparkR, Streaming, MLlib, GraphX and ML Pipeline, on top of DataFrame and Spark Core]
https://github.com/intel-analytics/BigDL
https://bigdl-project.github.io/
software.intel.com/bigdl
https://github.com/intel-analytics/analytics-zoo
131. 131
• Intel® Distribution of OpenVINO™ Toolkit
• Reinforcement Learning Coach
• NLP Architect
• Nauta
• BigDL
• Intel Optimizations to Caffe*
• Intel Optimizations to TensorFlow*
Learn more through the AI webinar series
AI Courses:
⁻ Introduction to AI
⁻ Machine Learning
⁻ Deep Learning
⁻ Applied Deep Learning with TensorFlow*
Resources