EARS: The Easy Approach to Requirements Syntax (TechWell)
One key to specifying effective functional requirements is minimizing misinterpretation and ambiguity. By employing a consistent syntax in your requirements, you can improve readability and help ensure that everyone on the team understands exactly what to develop. John Terzakis provides examples of typical requirements and explains how to improve them using the Easy Approach to Requirements Syntax (EARS). EARS provides a simple yet powerful method of capturing the nuances of functional requirements. John explains that you need to identify two distinct types of requirements. Ubiquitous requirements state a fundamental property of the software that always occurs; non-ubiquitous requirements depend on the occurrence of an event, error condition, state, or option. Learn and practice identifying the correct requirements type and restating those requirements with the corresponding syntax. Join John to find out what's wrong with the requirements statement "The software shall warn of low battery" and how to fix it.
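The distinction above can be made concrete with a toy checker. The sketch below is an illustration only: EARS is a natural-language syntax, and these regular expressions, the simplified templates, and the 20-percent threshold in the restated requirement are invented for the example.

```python
import re

# Toy checker for two simplified EARS templates:
#   Ubiquitous:    "The <system> shall <response>."
#   Event-driven:  "When <trigger>, the <system> shall <response>."
UBIQUITOUS = re.compile(r"^The \w[\w ]* shall .+\.$")
EVENT_DRIVEN = re.compile(r"^When .+, the \w[\w ]* shall .+\.$")

def classify(requirement: str) -> str:
    if EVENT_DRIVEN.match(requirement):
        return "event-driven"
    if UBIQUITOUS.match(requirement):
        return "ubiquitous"
    return "unclassified"

# The flawed statement from the abstract, then one possible restatement
# as an event-driven requirement (trigger and threshold are hypothetical):
bad = "The software shall warn of low battery."
good = ("When the battery charge falls below 20 percent, "
        "the software shall display a low-battery warning.")

print(classify(bad))    # fits the ubiquitous template, but the behavior
                        # is really event-driven -- that mismatch is the defect
print(classify(good))
```

The point of the exercise: the original sentence parses as a ubiquitous requirement even though warning of low battery only makes sense in response to an event, so EARS forces the author to state the trigger explicitly.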
Find Requirements Defects to Build Better Software (TechWell)
Requirements defects are often the source of the majority of software defects. Discovering and correcting a defect during testing is typically twenty-five times more expensive than correcting it during the requirements definition phase. Identifying and removing defects early in the software development lifecycle provides many benefits, including reduced rework costs, less wasted effort, and greater team productivity. This translates into software projects that deliver the committed functionality on schedule, within budget, and with higher levels of customer satisfaction. John Terzakis shares powerful tips and techniques for quickly identifying requirements defects and providing feedback on how to improve them. Learn the ten attributes of a well-written requirement and how to detect various categories of requirements issues, including ambiguity, passive voice, subjectivity, and missing event triggers. Using the concepts presented, John leads the analysis of a set of requirements. Leave with checklists that will make your requirements reviews more effective.
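As a flavor of what such a review checklist looks for, here is a minimal sketch. It is not Terzakis's actual checklist: the word list and the passive-voice heuristic are invented for illustration and would miss many real defects.

```python
import re

# Illustrative defect scan: flag ambiguous/subjective terms and passive voice.
AMBIGUOUS = ("quickly", "efficiently", "user-friendly", "appropriate",
             "as needed", "and/or", "easy to use")
PASSIVE = re.compile(r"\b(is|are|was|were|be|been)\s+\w+ed\b", re.IGNORECASE)

def find_defects(requirement):
    """Return a list of human-readable issues found in one requirement."""
    issues = []
    lowered = requirement.lower()
    for term in AMBIGUOUS:
        if term in lowered:
            issues.append(f"ambiguous/subjective term: '{term}'")
    if PASSIVE.search(requirement):
        issues.append("passive voice: the actor is unspecified")
    return issues

print(find_defects("Errors shall be logged appropriately."))
# flags the subjective 'appropriate' and the passive 'be logged'
```

A checker like this only surfaces candidates; deciding whether "be logged" truly hides a missing actor still takes a human reviewer.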
Enterprise Video Hosting: Introducing the Intel Video Portal (IT@Intel)
Intel IT developed an enterprise video hosting solution to meet the needs of employees who want to create and share videos easily and securely.
Accelerating Our Path to Multi-Platform Benefits (IT@Intel)
This is a time of tremendous change for IT organizations everywhere.
Intel IT realized we need to enable enterprise applications to support the devices of today (touch) and also develop applications that are ready for the next big thing (voice and gesture). We've kicked off a new initiative focused on accelerating the delivery of applications to our business partners and employees on their mobile platforms of choice.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/smarter-manufacturing-with-intels-deep-learning-based-machine-vision-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Tara K. Thimmanaik, Solutions Architect at Intel, presents the “Smarter Manufacturing with Intel’s Deep Learning-Based Machine Vision” tutorial at the September 2020 Embedded Vision Summit.
As demand for smarter and more efficient manufacturing grows, IoT technologies (sensors, edge devices, gateways, servers and the cloud) are being used throughout the factory to compute deep learning analytics workloads at the appropriate location. Efficient data-driven manufacturing can help reduce labor costs, increase quality and maximize profit. The biggest hindrance to achieving these outcomes is the difficulty of extracting data from vendor-locked, proprietary systems for downstream analytics.
In this presentation, Thimmanaik covers Intel’s approach to developing open, flexible and scalable solutions, including:
• Intel's technologies such as OpenVINO, Movidius Vision Processing Units, Edge Insights Software (EIS) and deep learning algorithms
• How Intel’s offerings come together in the industrial marketplace with partnerships forged to address the constraints of manufacturing infrastructure
• Real-world examples highlighting defect detection in textile printing (where 90% accuracy at 50 fps was achieved) and smartphone screen production (where false negatives were only 0.6%)
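For intuition about what a vision-based defect check computes, here is a minimal sketch. It is not Intel's EIS pipeline: the golden-reference comparison, threshold, and image sizes are arbitrary stand-ins for a real inspection model.

```python
import numpy as np

# Toy defect check: compare each inspected frame against a "golden"
# reference image and flag frames where enough pixels deviate strongly.
def detect_defects(frame, reference, threshold=30, min_pixels=5):
    diff = np.abs(frame.astype(np.int32) - reference.astype(np.int32))
    defect_mask = diff > threshold
    return bool(defect_mask.sum() >= min_pixels)

rng = np.random.default_rng(0)
reference = rng.integers(100, 110, size=(64, 64), dtype=np.uint8)
good = reference.copy()
bad = reference.copy()
bad[10:14, 10:14] = 255          # simulate a 4x4 bright printing defect

print(detect_defects(good, reference))   # False: no deviation
print(detect_defects(bad, reference))    # True: 16 pixels deviate strongly
```

Real deployments like the textile and smartphone-screen examples above replace this thresholding with trained deep learning models, but the input/output shape of the problem is the same: frames in, defect verdicts out.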
Fault tolerance ease of setup comparison: NEC hardware-based FT vs. software-... (Principled Technologies)
For enterprise datacenter staff, time is of the essence. While using a software-based FT solution such as VMware vSphere Fault Tolerance is an effective way to eliminate downtime, the five extra steps required to configure every single VM can add up.
In our hands-on tests, setting up a server with eight fault-tolerant virtual machines took only 41 steps on the NEC Express5800/R320d-M4, vs. 60 steps when we used VMware vSphere Fault Tolerance, a difference of 36.6 percent. With a greater number of VMs per server, this difference would increase. Hardware-based fault tolerance on the NEC Express5800/R320d-M4 also halved the number of hardware components required compared to the software-based FT approach.
With dozens of servers hosting hundreds of VMs, your IT staff can benefit enormously from the hardware-based fault tolerance that the NEC Express5800/R320d-M4 delivers.
Intel® Xeon® Processor E7-8800/4800 v4 EAMG 2.0 (Intel IT Center)
This set of Intel® Xeon® processor E7-8800/4800 v4 family proof points spans several key business segments. The Intel® Xeon® processor E7-8800/4800 v4 product family delivers the horsepower for real-time, high-capacity data analysis that can help businesses derive rapid, actionable insights and deliver innovative new services and customer experiences. With high performance, the industry's largest memory capacity, robust reliability, and hardware-enhanced security features, the E7-8800/4800 v4 is optimal for scale-up platforms, delivering rapid in-memory computing for today's most demanding real-time data and transaction-intensive workloads.
Accelerating Apache Spark with Intel QuickAssist Technology (Databricks)
Enterprise and cloud data centers are under pressure to continuously expand revenue-generating and value-added services, such as compute-intensive and I/O-demanding Big Data solutions, which move large amounts of data into and out of storage and send it across networked clusters.
A significant amount of time and network bandwidth can be saved when the data is compressed before it is passed between servers, as long as the compression/decompression operations are efficient and require negligible CPU cycles. Intel QuickAssist Technology allows compute-intensive workloads, specifically compression, to be offloaded from the CPU core onto dedicated hardware accelerators. It enables developers to create software solutions that leverage compression/decompression acceleration, accessing the technology through APIs in the Intel QuickAssist software.
This talk gives developers an overview of Intel QuickAssist Technology and presents key use cases showing how they can take advantage of the hardware-based compression acceleration and performance improvements it makes available in their Spark applications.
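The tradeoff QuickAssist targets can be seen with ordinary software compression. The sketch below uses Python's standard zlib purely for illustration; QAT's point is that the CPU time measured here would instead be spent by a dedicated accelerator, freeing the cores for Spark tasks.

```python
import time
import zlib

# Compressing data before it crosses the network saves bandwidth, at the
# cost of CPU cycles -- the cycles QuickAssist offloads to hardware.
payload = b"timestamp,sensor,reading\n" * 100_000   # repetitive shuffle-style data

start = time.perf_counter()
compressed = zlib.compress(payload, level=6)
elapsed_ms = (time.perf_counter() - start) * 1000

ratio = len(payload) / len(compressed)
print(f"{len(payload)} -> {len(compressed)} bytes "
      f"({ratio:.0f}x smaller) in {elapsed_ms:.1f} ms of CPU time")
assert zlib.decompress(compressed) == payload        # lossless round trip
```

For Spark shuffle data the same logic applies per block: the more repetitive the data, the more network time compression saves, and the more attractive offloading the CPU cost becomes.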
This presentation from CleanSoft Academy helps graduates make a career choice in the discipline of software testing. A must-read for graduates who are unsure what career to pursue after graduation.
Gary Brown (Movidius, Intel): Deep Learning in AR: the 3 Year Horizon (AugmentedWorldExpo)
A talk from the Develop Track at AWE USA 2017, the largest conference for AR+VR, held in Santa Clara, California, May 31 to June 2, 2017.
Gary Brown (Movidius, Intel): Deep Learning in AR: the 3 Year Horizon
Deep learning techniques are gaining popularity in many facets of embedded vision, and this holds true for AR and VR. Will they soon dominate every facet of vision processing? This talk explores the question by examining the theory and practice of applying deep learning to real-world problems in Augmented Reality, with concrete examples describing how this shift is happening today, quickly in some areas and more slowly in others.
http://AugmentedWorldExpo.com
Deploying Intel Architecture-based Tablets with Windows* 8 at Intel (IT@Intel)
Intel IT recently deployed Intel® Atom™ processor-based tablets with Microsoft Windows* 8 in our enterprise in a proof of concept.
Our participants were pleased with the experience, and reported greater productivity and flexibility.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/07/accelerating-edge-ai-solution-development-with-pre-validated-hardware-software-kits-from-intel-partners-a-presentation-from-intel/
Daniel Tsui, Foundational Developer Kit Product Manager at Intel, presents the “Accelerating Edge AI Solution Development with Pre-validated Hardware-Software Kits from Intel Partners” tutorial at the May 2021 Embedded Vision Summit.
When developing a new edge AI solution, you want to focus on your system’s unique functionality. In this session, Tsui shares the different foundational developer kits available from Intel’s partners to help speed your edge AI solution development. These kits include industrial-grade hardware that’s ready to deploy and that can be purchased easily using a credit card.
These systems come with Intel’s Edge Insights for Vision software package, a set of pre-validated software modules for orchestration and cloud support, and the Intel Distribution of OpenVINO toolkit for computer vision and deep learning applications. Watch and learn how these robust, pre-validated hardware and software resources can accelerate the development of your edge AI solution.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/acceleration-of-deep-learning-using-openvino-3d-seismic-case-study-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Manas Pathak, Global AI Lead for Oil and Gas at Intel, presents the “Acceleration of Deep Learning Using OpenVINO: 3D Seismic Case Study” tutorial at the September 2020 Embedded Vision Summit.
The use of deep learning for automatic seismic data interpretation is gaining the attention of many researchers across the oil and gas industry. The integration of high-performance computing (HPC) AI workflows in seismic data interpretation brings the challenge of moving and processing large amounts of data from HPC to AI computing solutions and vice-versa.
In this presentation, Pathak illustrates this challenge via a case study using a public deep learning model for salt identification applied on a 3D seismic survey from the F3 Dutch block in the North Sea. He presents a workflow to address this challenge and perform accelerated AI on seismic data. The Intel Distribution of OpenVINO toolkit was used to increase the inference performance of a pre-trained model on an Intel CPU. OpenVINO allows CPU users to get significant improvement in AI inference performance for high memory capacity deep learning models used on large datasets without any significant loss in accuracy.
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit... (HPC DAY)
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
In this deck from ATPESC 2019, James Moawad and Greg Nash from Intel present: FPGAs and Machine Learning.
"Neural networks are inspired by biological systems, in particular the human brain. Through the combination of powerful computing resources and novel architectures for neurons, neural networks have achieved state-of-the-art results in many domains, such as computer vision and machine translation. FPGAs are a natural choice for implementing neural networks, as they can handle different algorithms in computing, logic, and memory resources in the same device. They offer faster performance compared to competing implementations because the user can hard-code operations into the hardware. Software developers can use the OpenCL C-level device programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design."
Watch the video: https://wp.me/p3RLHQ-lnc
Learn more: https://extremecomputingtraining.anl.gov/archive/atpesc-2019/agenda-2019/
and
https://www.intel.com/content/www/us/en/products/programmable/fpga.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How to create an autonomous and connected world - Jomar Silva (iMasters)
Jomar Silva - Technical Evangelist, Intel
The evolution of hardware, software, and communication technologies in recent years makes it possible to design a new digital world, autonomous and connected.
The Internet of Things, when used together with Artificial Intelligence, enables a new level of autonomous, connected applications that will be the foundation for creating this new digital world.
The big challenge in this new scenario is the immense volume of data that must be captured and processed in real time to enable the development of solutions such as autonomous cars and automatic security systems based on video monitoring.
In this talk we will address these technical challenges: the Internet of Things, Artificial Intelligence, Computer Vision, base architectures for developing end-to-end autonomous solutions, and Intel hardware and software technologies and products that can help you face these challenges in an optimized way.
We will cover several open source software projects, as well as repositories of open source solutions that can be used to accelerate the developer's learning in this new autonomous, connected digital world.
Presented at InterCon 2018 - https://eventos.imasters.com.br/intercon
This session was held by Vladimir Brenner, Partner Account Manager, Disruptors & AI, Intel AI at the Dive into H2O: London training on June 17, 2019.
Please find the recording here: https://youtu.be/60o3eyG5OLM
AWS Summit Singapore - Make Business Intelligence Scalable and Adaptable (Amazon Web Services)
Akanksha Bilani, APAC Director, Intel Software
Kapil Bansal, Alliance Head, APJ, Intel
Business, science, and academia are using AI applications — in the data center, the cloud, and at the edge — supported by a broad, growing portfolio of Intel technologies. Join Kapil Bansal (Alliance Head, APJ) and Akanksha Bilani (APAC Director, Intel Software) to learn how Intel helps make AI initiatives practical and straightforward. Learn about the opportunities AI creates and be part of the era in which data converges with the power of compute.
Preparing the Data Center for the Internet of Things (Intel IoT)
Intel’s Mark Skarpness provides an overview of the Internet of Things and discusses how the data center is essential for the IoT.
For more information go to www.intel.com/iot
DPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel Architecture (Jim St. Leger)
Venky Venkatesan presents information on the Data Plane Development Kit (DPDK) including an overview, background, methodology, and future direction and developments.
Intel Xeon Processor E5 Family: Making the Business Case (Intel IT Center)
This presentation highlights cloud computing advantages of the Intel® Xeon® processor E5 family and helps you make the business case for investing. Includes access to an ROI calculator.
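The arithmetic behind such an ROI calculator is simple. The sketch below is illustrative only: the formula is the standard ROI definition, and every number is hypothetical rather than drawn from the deck's actual cost model.

```python
# Toy ROI arithmetic (all inputs hypothetical).
def roi(total_gain, total_cost):
    """Return ROI as a fraction: (gain - cost) / cost."""
    return (total_gain - total_cost) / total_cost

# Hypothetical server-refresh scenario:
hardware_cost = 250_000        # new E5-based servers
savings = 90_000 * 4           # e.g. power + consolidation savings over 4 years

print(f"ROI: {roi(savings, hardware_cost):.0%}")   # (360k - 250k) / 250k = 44%
```

A real business case would also discount future savings and include migration and licensing costs, which is exactly what a dedicated calculator automates.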
Similar to Microsoft Build 2019 - Intel AI Workshop (20)
AI for All: Biology is eating the world & AI is eating Biology (Intel® Software)
Advances in cell biology are producing immense amounts of data, and they are converging with advances in machine learning to analyze that data. Biology is experiencing its AI moment, driving the massive computation involved in understanding biological mechanisms and designing interventions. Learn how cutting-edge technologies such as Software Guard Extensions (SGX) in the latest Intel Xeon processors and Open Federated Learning (OpenFL), an open framework for federated learning developed by Intel, are helping advance AI in gene therapy, drug design, disease identification and more.
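The federated learning idea mentioned above can be sketched in a few lines. This is not the OpenFL API; it only illustrates the weight-averaging step, in which sites share model parameters rather than raw patient data.

```python
import numpy as np

# Federated averaging sketch: each site trains locally; the aggregator
# combines model weights, weighted by local dataset size. Raw data never
# leaves the site.
def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three sites with different amounts of local data (toy 2-parameter model):
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]

print(federated_average(weights, sizes))   # [3.5 4.5]
```

In a real deployment this averaging runs for many rounds, and technologies like SGX can additionally protect the aggregation step itself inside a hardware enclave.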
Python Data Science and Machine Learning at Scale with Intel and Anaconda (Intel® Software)
Python is the number-one language for data scientists, and Anaconda is the most popular Python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, scikit-learn, Modin, pandas, and XGBoost directly from the Anaconda repository with simple installation and minimal code changes.
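The "minimal code changes" claim rests on Modin mirroring the pandas API, so the switch is often a one-line import change. A sketch, falling back to stock pandas when Modin is not installed (the computation is identical either way):

```python
# Drop-in pattern: Modin exposes the pandas API under modin.pandas.
try:
    import modin.pandas as pd    # distributed, multi-core DataFrame engine
except ImportError:
    import pandas as pd          # stock single-core pandas

df = pd.DataFrame({"sensor": ["a", "a", "b"], "reading": [1.0, 3.0, 5.0]})
means = df.groupby("sensor")["reading"].mean()
print(means["a"], means["b"])   # 2.0 5.0
```

Because the rest of the script is unchanged, teams can benchmark Modin against pandas on their real workloads before committing to it.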
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSciIntel® Software
Preprocess, visualize, and Build AI Faster at-Scale on Intel Architecture. Develop end-to-end AI pipelines for inferencing including data ingestion, preprocessing, and model inferencing with tabular, NLP, RecSys, video and image using Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build at-scale performant pipelines with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse Platform and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
AI for good: Scaling AI in science, healthcare, and more.Intel® Software
How do we scale AI to its full potential to enrich the lives of everyone on earth? Learn about AI hardware and software acceleration and how Intel AI technologies are being used to solve critical problems in high energy physics, cancer research, financial inclusion, and more. Get started on your AI Developer Journey @ software.intel.com/ai
Software AI Accelerators: The Next Frontier | Software for AI Optimization Su...Intel® Software
Software AI Accelerators deliver orders of magnitude performance gain for AI across deep learning, classical machine learning, and graph analytics and are key to enabling AI Everywhere. Get started on your AI Developer Journey @ software.intel.com/ai.
Advanced Techniques to Accelerate Model Tuning | Software for AI Optimization...Intel® Software
Learn about the algorithms and associated implementations that power SigOpt, a platform for efficiently conducting model development and hyperparameter optimization. Get started on your AI Developer Journey @ software.intel.com/ai.
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency| S...Intel® Software
oneDNN Graph API extends oneDNN with a graph interface which reduces deep learning integration costs and maximizes compute efficiency across a variety of AI hardware including AI accelerators. Get started on your AI Developer Journey @ software.intel.com/ai.
AWS & Intel Webinar Series - Accelerating AI ResearchIntel® Software
Scale your research workloads faster with Intel on AWS. Learn how the performance and productivity of Intel Hardware and Software help bridge the gap between ideation and results in Data Science. Get started on your AI Developer Journey @ software.intel.com/ai.
Whether you are an AI, HPC, IoT, Graphics, Networking or Media developer, visit the Intel Developer Zone today to access the latest software products, resources, training, and support. Test-drive the latest Intel hardware and software products on DevCloud, our online development sandbox, and use DevMesh, our online collaboration portal, to meet and work with other innovators and product leaders. Get started by joining the Intel Developer Community @ software.intel.com.
Advanced Single Instruction Multiple Data (SIMD) Programming with Intel® Impl...Intel® Software
Explore practical elements, such as performance profiling, debugging, and porting advice. Get an overview of advanced programming topics, like common design patterns, SIMD lane interoperability, data conversions, and more.
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses...Intel® Software
Explore how to build a unified framework based on FFmpeg and GStreamer to enable video analytics on all Intel® hardware, including CPUs, GPUs, VPUs, FPGAs, and in-circuit emulators.
Review state-of-the-art techniques that use neural networks to synthesize motion, such as mode-adaptive neural network and phase-functioned neural networks. See how next-generation CPUs with reinforcement learning can offer better performance.
RenderMan*: The Role of Open Shading Language (OSL) with Intel® Advanced Vect...Intel® Software
This talk focuses on the newest release in RenderMan* 22.5 and its adoption at Pixar Animation Studios* for rendering future movies. With native support for Intel® Advanced Vector Extensions, Intel® Advanced Vector Extensions 2, and Intel® Advanced Vector Extensions 512, it includes enhanced library features, debugging support, and an extensive test framework.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
4. 4
Agenda
• Intel® AI Academy
• Intel® AI Portfolio
• Intel AI Use Cases
• ML/DL Introduction
• Training on Caffe*/TensorFlow* with Intel optimizations
• Introduction to the Intel® OpenVINO™ Toolkit
• Introduction to the Intel® Movidius™ Neural Compute Stick and SDK
• Overview of Intel® Optimized Caffe* and TensorFlow*
• Intel® AI DevCloud
12. Proof of Concept: Image Recognition
Seismic Reflection Analysis
Client:
A leading developer of software solutions for the global oil and gas industry.
Challenge:
Automate the identification of fault lines within seismic reflection data.
Solution:
Built a proof of concept that is trained on seismic reflection data and can predict the probability of finding fault lines in previously unseen images.
Performs pixel-wise semantic segmentation of SEG-Y formatted data.
Model trained using supervised learning.
Advantages:
Automation enables analysis of vast amounts of data faster.
Could identify potentially rewarding locations from subtle clues in the data.
13. Proof of Concept: Image Recognition
Oil Rig “Inspector Assist” System
Client:
Multinational oil and gas company
Challenge:
The customer operates a number of offshore oil rigs, and uses submersible vehicles to take video footage to ensure their infrastructure is healthy and safe. Since reviews of this footage are time consuming and prone to errors, a more efficient solution for detecting potential problems is needed.
Solution:
Built models to detect and classify bolts according to level of corrosion.
Advantages:
Video footage can be condensed to 10% of its original length by filtering out unimportant frames and highlighting potential problem areas, enabling inspectors to perform their jobs more efficiently.
(Figure: example bolts ranked by level of corrosion, low to high.)
17. Breaking Barriers Between AI Theory and Reality
Partner with Intel to accelerate your AI journey:
▪ Simplify AI via our robust community
▪ Choose any approach, from analytics to deep learning
▪ Tame your data deluge with our data layer expertise
▪ Deploy AI anywhere with unprecedented HW choice
▪ Speed up development with open AI software
▪ Scale with confidence on the platform for IT & cloud
Software: nGraph, OpenVINO™ toolkit, Nauta™, ML Libraries, BigDL, Intel® MKL-DNN
Hardware: Intel CPU, Intel GPU, Intel AI DevCloud
Community: Intel AI Builders, Intel AI Developer Program
www.intel.ai
18.
19. Deploy AI Anywhere
with unprecedented hardware choice, from device to edge to multi-cloud:
▪ Dedicated media/vision
▪ Automated driving
▪ Dedicated DL training (NNP-L)
▪ Dedicated DL inference (NNP-I)
▪ Flexible acceleration (FPGA*)
▪ Graphics, media & analytics acceleration (GPU)
Add acceleration with a GPU and/or NNP-L/NNP-I.
*FPGA: (1) First to market to accelerate evolving AI workloads (2) AI + other system-level workloads like AI + I/O ingest, networking, security, pre/post-processing, etc. (3) Low-latency, memory-constrained workloads like RNN/LSTM
1GNA = Gaussian Neural Accelerator
All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.
Images are examples of intended applications but not an exhaustive list.
23. Intel® AI Academy
software.intel.com/ai
For developers, students, instructors and startups:
▪ Learn: get smarter using online tutorials, webinars, student kits and support forums
▪ Teach: educate others using available course materials, hands-on labs, and more
▪ Develop: get 4 weeks of FREE access to the Intel® AI DevCloud, use your existing Intel® Xeon® processor-based cluster, or use a public cloud service
▪ Share: showcase your innovation at industry & academic events and online via the Intel AI community forum
24. Learn More on DevMesh
Opportunities to share your projects as an Intel® Student Ambassador:
▪ Industry events via sponsored speakerships
▪ Student Workshops
▪ Ambassador Labs
▪ Intel® Developer Mesh
25. AI Builders: Ecosystem
100+ AI partners (Builders.intel.com/ai)
Horizontal: business intelligence & analytics, vision, conversational bots, AI tools & consulting, AI PaaS
Vertical: healthcare, financial services, retail, transportation, news/media & entertainment, agriculture, legal & HR, robotic process automation
Cross-vertical: OEMs, system integrators
Other names and brands may be claimed as the property of others.
35. Gradient Descent with Linear Regression
▪ Each point can be iteratively calculated from the previous one. Starting from an initial guess $\omega_0$ on the cost surface $J(\beta_0, \beta_1)$:
$$\omega_2 = \omega_1 - \alpha \nabla \frac{1}{2} \sum_{i=1}^{m} \left( \beta_0 + \beta_1 x_{\mathrm{obs}}^{(i)} - y_{\mathrm{obs}}^{(i)} \right)^2$$
$$\omega_3 = \omega_2 - \alpha \nabla \frac{1}{2} \sum_{i=1}^{m} \left( \beta_0 + \beta_1 x_{\mathrm{obs}}^{(i)} - y_{\mathrm{obs}}^{(i)} \right)^2$$
where $\omega_k = (\beta_0, \beta_1)$ at step $k$ and $\alpha$ is the learning rate.
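The iterative update on this slide can be sketched in a few lines of plain Python. This is a minimal illustration, not code from the deck: the sample data, learning rate, and iteration count are assumptions chosen for the example.

```python
# Gradient descent for linear regression y ≈ b0 + b1*x,
# minimizing J = 1/2 * sum of squared residuals.

def gradient_step(b0, b1, xs, ys, alpha):
    """One update w_{k+1} = w_k - alpha * grad J."""
    grad_b0 = sum((b0 + b1 * x) - y for x, y in zip(xs, ys))
    grad_b1 = sum(((b0 + b1 * x) - y) * x for x, y in zip(xs, ys))
    return b0 - alpha * grad_b0, b1 - alpha * grad_b1

# Observed points lying exactly on y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
b0, b1 = 0.0, 0.0
for _ in range(5000):
    b0, b1 = gradient_step(b0, b1, xs, ys, alpha=0.05)
print(round(b0, 3), round(b1, 3))  # converges to (1.0, 2.0)
```

Each iteration moves the parameter vector a small step in the direction of steepest descent, exactly as the $\omega_1 \to \omega_2 \to \omega_3$ sequence on the slide.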
36.
37. Why Deep Learning? What Is Wrong with Linear Classifiers?
XOR: the counterexample to all linear models. We need non-linear functions.
X1 X2 | y
 0  0 | 0
 0  1 | 1
 1  0 | 1
 1  1 | 0
(Plot: no single straight line can separate the positive points (0,1) and (1,0) from the negative points (0,0) and (1,1).)
Source: https://medium.com/towards-data-science/introducing-deep-learning-and-neural-networks-deep-learning-for-rookies-1-bd68f9cf5883
38. We Need Layers (Usually Lots) with Non-linear Transformations
XOR = (X1 AND NOT X2) OR (NOT X1 AND X2)
X1 X2 | y
 0  0 | 0
 0  1 | 1
 1  0 | 1
 1  1 | 0
Network: both inputs feed a hidden unit with threshold 1.5 (weights +1, +1); both inputs (weights +1, +1) and the hidden unit (weight -2) feed an output unit with threshold 0.5. Each unit thresholds its weighted sum to 0 or 1.
Example with input (X1, X2) = (1, 0):
Hidden unit: (1 x 1) + (0 x 1) = 1 < 1.5 → 0
Output unit: (1 x 1) + (0 x -2) + (0 x 1) = 1 > 0.5 → 1
39. We Need Layers (Usually Lots) with Non-linear Transformations
XOR = (X1 AND NOT X2) OR (NOT X1 AND X2)
Same network (hidden threshold 1.5, output threshold 0.5, weights +1, +1, -2), now with input (X1, X2) = (1, 1):
Hidden unit: (1 x 1) + (1 x 1) = 2 > 1.5 → 1
Output unit: (1 x 1) + (1 x -2) + (1 x 1) = 0 < 0.5 → 0
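The two worked examples above can be checked with a tiny Python sketch of the same network; the weights and thresholds come straight from the slides.

```python
# XOR network from the slides: hidden threshold 1.5, output threshold 0.5,
# weights +1/+1 into the hidden unit and +1/-2/+1 into the output unit.

def step(weighted_sum, threshold):
    """Threshold the weighted sum to 0 or 1."""
    return 1 if weighted_sum > threshold else 0

def xor_net(x1, x2):
    hidden = step(x1 * 1 + x2 * 1, 1.5)               # fires only when both inputs are 1
    return step(x1 * 1 + hidden * -2 + x2 * 1, 0.5)   # sum of inputs minus the "both on" case

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", xor_net(x1, x2))  # reproduces the XOR truth table 0, 1, 1, 0
```

A single thresholded unit could never produce this table; the hidden layer is what makes the non-linear decision boundary possible.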
40. This Is a Brewing Domain Called Deep Learning
“Deep learning is a set of algorithms in machine learning that attempt to model high-level abstractions in data by using architectures composed of multiple non-linear transformations.”
- Wikipedia*
In the machine learning world, we use neural networks. The idea comes from biology. Each layer learns something.
41. Motivation for Neural Nets
▪ Use biology as inspiration for the mathematical model
▪ Get signals from previous neurons
▪ Generate signals (or not) according to inputs
▪ Pass signals on to next neurons
▪ By layering many neurons, we can create complex models
(Diagram: the XOR network from the previous slides, drawn as an approximation of connected neurons.)
45. Types of Activation Functions
▪ Sigmoid function
– Smooth transition in output between (0,1)
▪ Tanh function
– Smooth transition in output between (-1,1)
▪ ReLU function
– f(x) = max(x,0)
▪ Step function
– Output jumps between 0 and 1: f(x) = 0 for x < 0, f(x) = 1 for x ≥ 0
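The four activations listed above are easy to write out; here is a minimal sketch in plain Python (the sample inputs are just for illustration).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # smooth, output in (0, 1)

def tanh(x):
    return math.tanh(x)                 # smooth, output in (-1, 1)

def relu(x):
    return max(x, 0.0)                  # clips negatives to 0

def step(x):
    return 0.0 if x < 0 else 1.0        # hard 0/1 jump at x = 0

for f in (sigmoid, tanh, relu, step):
    print(f.__name__, round(f(-2.0), 3), round(f(2.0), 3))
```

The smooth functions (sigmoid, tanh) have useful gradients everywhere, which is why they replaced the hard step function once networks began to be trained by gradient descent.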
46. Why Neural Nets?
▪ Why not just use a single neuron? Why do we need a larger network?
▪ A single neuron (like logistic regression) only permits a linear decision boundary.
▪ Most real-world problems are considerably more complicated!
48. Convolutional Neural Nets
Primary ideas behind Convolutional Neural Networks:
– Let the neural network learn which kernels are most useful
– Use the same set of kernels across the entire image (translation invariance)
– Reduces number of parameters and “variance” (from a bias-variance point of view)
– Can think of kernels as “local feature detectors”
Vertical line detector:
-1  1 -1
-1  1 -1
-1  1 -1
Horizontal line detector:
-1 -1 -1
 1  1  1
-1 -1 -1
Corner detector:
-1 -1 -1
-1  1  1
-1  1  1
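To see a kernel act as a local feature detector, here is an illustrative "valid" 2D convolution (strictly, cross-correlation, as in most DL frameworks) applying the vertical line detector above to a toy image. The image and helper function are assumptions for the example, not from the deck.

```python
# Vertical line detector kernel from the slide.
KERNEL = [[-1, 1, -1],
          [-1, 1, -1],
          [-1, 1, -1]]

def conv2d_valid(image, kernel):
    """Slide the kernel over the image; no padding, stride 1."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[u][v] * image[i + u][j + v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

# 5x5 image with a bright vertical line in column 2.
image = [[0, 0, 1, 0, 0] for _ in range(5)]
for row in conv2d_valid(image, KERNEL):
    print(row)  # each output row is [-3, 3, -3]
```

The response peaks (+3) exactly where the line sits under the kernel's center column and goes strongly negative where it falls under a side column, which is how the detector localizes the feature.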
52. LeNet-5
How many total weights in the network?
Conv1: 1*6*5*5 + 6 = 156
Conv3: 6*16*5*5 + 16 = 2416
FC1: 400*120 + 120 = 48120
FC2: 120*84 + 84 = 10164
FC3: 84*10 + 10 = 850
Total: 61706
Less than a single FC layer with [1200x1200] weights!
Note that convolutional layers have relatively few weights.
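The per-layer counts above follow two small formulas, which can be sanity-checked in Python (the helper names are mine, not from the deck): a conv layer has in_channels * out_channels * k * k weights plus one bias per output channel; a fully connected layer has in * out weights plus one bias per output.

```python
def conv_params(in_ch, out_ch, k):
    """Weights of a conv layer with out_ch filters of size k x k over in_ch channels."""
    return in_ch * out_ch * k * k + out_ch

def fc_params(n_in, n_out):
    """Weights of a fully connected layer, including biases."""
    return n_in * n_out + n_out

layers = {
    "Conv1": conv_params(1, 6, 5),    # 156
    "Conv3": conv_params(6, 16, 5),   # 2416
    "FC1":   fc_params(400, 120),     # 48120
    "FC2":   fc_params(120, 84),      # 10164
    "FC3":   fc_params(84, 10),       # 850
}
print(sum(layers.values()))  # 61706, matching the slide
```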
53. Differences Between CNNs and Fully Connected Networks
Convolutional Neural Network:
– Each neuron is connected to a small set of nearby neurons in the previous layer
– Uses the same set of weights for each neuron
– Ideal for spatial feature recognition, e.g. image recognition
– Cheaper on resources due to fewer connections
Fully Connected Neural Network:
– Each neuron is connected to every neuron in the previous layer
– Every connection has a separate weight
– Not optimal for detecting features
– Computationally intensive, with heavy memory usage
54.
55. Animal ID Startup
Natural and man-made disasters create havoc and grief. Lost and abandoned pets/livestock only add to the emotional toll. How do you find your beloved dog after a flood? What happens to your daughter’s horse? Our charter is to unite pets with their families.
56. Your Job: Data Scientist
We need your help creating a way to identify animals. The initial product is focused on cat/dog breed identification. Your app will be used by rescuers and the public to document found animals and to search for lost pets. Welcome aboard!
62. Choosing the “Right” Hardware
Power/performance efficiency varies:
▪ Running the right workload on the right piece of hardware → higher efficiency
▪ Hardware acceleration is a must
▪ Heterogeneous computing?
Tradeoffs:
▪ Power/performance
▪ Price
▪ Software flexibility, portability
(Chart: vision processing efficiency vs. computation flexibility, rising roughly from 1x on CPU, through 10x on GPU, toward 100x on vision DSPs, FPGAs, and dedicated hardware.)
63. Deep Learning vs. Traditional Computer Vision
Traditional Computer Vision:
▪ Based on selection and connection of computational filters to abstract key features and correlate them to an object.
▪ Works well with well-defined objects and controlled scenes.
▪ Difficult to predict critical features in larger numbers of objects or varying scenes.
Deep Learning Computer Vision:
▪ Based on application of a large number of filters to an image to extract features.
▪ Features in the object(s) are analyzed with the goal of associating each input image with an output node for each type of object.
▪ Values are assigned to each output node representing the probability that the image is the object associated with that output node.
The OpenVINO™ toolkit has tools for an end-to-end vision pipeline:
▪ Intel® Deep Learning Deployment Toolkit: Model Optimizer (produces an IR file) and Inference Engine API, plus pre-trained optimized deep learning models.
▪ Computer vision libraries: OpenCV*/OpenVX*.
▪ Direct coding solution: custom code (new filters/algorithms or optimizations/fusing steps) with OpenCL™ C/C++ and the Intel® SDK for OpenCL™ Applications; Intel® Media SDK.
▪ Intel hardware abstraction layer across CPU, GPU, FPGA, and VPU.
IR = Intermediate Representation file. GPU = Intel CPU with integrated graphics processing unit/Intel® Processor Graphics. VPU = Intel® Movidius™ Vision Processing Unit.
OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc. OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
64. Application Development with the OpenVINO™ Toolkit
Train: train a DL model. Currently supports Caffe*, MXNet*, and TensorFlow*.
Prepare/Optimize: the Model Optimizer handles converting, optimizing, and preparing the model for inference (device-agnostic, generic optimization).
Optimize/Heterogeneous: the Inference Engine supports multiple devices for heterogeneous flows (device-level optimization): MKL-DNN on CPU (Intel® Xeon®/Intel® Core™/Intel Atom®), cl-DNN on GPU, DLA on FPGA, and the Intel® Movidius™ API on Myriad™ 2/X.
Inference: the Inference Engine provides a lightweight API to use in applications for inference.
Extend: the Inference Engine supports extensibility and allows custom kernels for various devices. Extensibility: C++ (CPU), OpenCL™ (GPU), OpenCL™/TBD (FPGA), TBD (Myriad).
65. AzureML → Edge Flow
Using Azure IoT Edge + Azure ONNX RT + the OpenVINO Execution Provider
▪ AzureML hosts MSFT’s pre-trained topologies & models and users’ custom topologies & models (ONNX, Caffe, TensorFlow, …); ONNX model converters produce an ONNX model.
▪ The ONNX model is delivered to the edge device via the Azure Container Registry and Azure IoT Hub.
▪ On the edge device (an OS with Azure IoT Edge), the users’ inference scripts call the ONNX Runtime with the OpenVINO Execution Provider, which uses the OpenVINO IE libs.
▪ Local resource access to optimized DL libraries: MKL-DNN on CPU; CLDNN and media libs on GPU. Device resource access to accelerators: DLA on FPGA; Myriad on Movidius.
The stack combines Intel, MSFT, and users’ custom components.
69. Intel® Neural Compute Stick 2: Featuring the Intel® Movidius™ Myriad™ X VPU
A self-sufficient, all-in-one processor that features the powerful Neural Compute Engine and 16 programmable SHAVE cores that deliver class-leading performance for deep neural network inference applications.
▪ Neural Compute Engine: an entirely new deep neural network (DNN) inferencing engine that offers flexible interconnect and ease of configuration for on-device DNNs and computer vision applications.
▪ 16 SHAVE programmable cores: VLIW (DSP) programmable processors optimized for complex vision & imaging workloads.
▪ CPU cluster (RT RISC): RISC processors, RTOS schedulers, pipeline managers, sensor control frameworks.
▪ System support functions: operate on frames, tiles, CODEC, compression and security; plus interfaces, LPDDR, and an always-on (AON) domain.
▪ CMX memory (2.5 MB at up to 450 GB/s bandwidth): homogeneous memory design for low power, ultra-low latency, sustained high performance, and locally stored data.
70. Intel® Neural Compute Stick 2
High Performance & Low Power for AI Inference
Powered by the Intel® Movidius™ Myriad™ X VPU, optimized by the Intel® Distribution of OpenVINO™ toolkit.
MORE CORES. MORE AI INFERENCE.
✓ Start quickly with plug-and-play simplicity
✓ Develop on common frameworks and out-of-box sample applications
✓ Prototype on any platform with a USB port
✓ Operate without cloud compute dependence
Boost productivity. Simplify prototyping. Discover efficiencies.
Order now from Mouser Electronics for $99 MSRP*: Where to buy
*MSRP is not a guarantee of final retail price. MSRP may be changed in the future based upon economic conditions.
78. Graph Optimizations: Layout Propagation
Converting to/from an optimized layout can be less expensive than operating on an un-optimized layout. All MKL-DNN operators use highly-optimized layouts for TensorFlow tensors.
Initial graph: Input and Filter feed Conv2D; Conv2D feeds ReLU, which feeds Shape.
After layout conversions: Input and Filter each pass through a Convert node into MklConv2D; its output is converted back to the standard layout, converted again into MklReLU, and the result is converted once more before Shape.
79. Graph Optimizations: Layout Propagation
Did you notice anything wrong with the previous graph? Problem: redundant conversions.
After layout conversion, MklConv2D’s output is converted out of the MKL layout and immediately converted back in for MklReLU. After layout propagation, the redundant Convert pair is removed: MklConv2D feeds MklReLU directly in the optimized layout, and a single Convert remains only before Shape, which needs the standard layout.
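The idea of layout propagation can be shown with a toy pass over a linearized graph. This is not TensorFlow's actual graph rewriter, just an illustrative sketch: it elides back-to-back Convert nodes between two MKL ops so they exchange tensors in the optimized layout directly.

```python
# Toy layout-propagation pass: drop "Convert out of MKL layout" followed
# immediately by "Convert back into MKL layout" between two Mkl* ops.

def propagate_layout(ops):
    """ops: a linear chain of node names; returns the chain with redundant
    Convert pairs between Mkl ops removed."""
    out = []
    i = 0
    while i < len(ops):
        if (i + 3 < len(ops)
                and ops[i].startswith("Mkl")
                and ops[i + 1] == "Convert" and ops[i + 2] == "Convert"
                and ops[i + 3].startswith("Mkl")):
            out.append(ops[i])  # keep the producer, skip both Converts
            i += 3              # next iteration appends the Mkl consumer
        else:
            out.append(ops[i])
            i += 1
    return out

chain = ["Convert", "MklConv2D", "Convert", "Convert", "MklReLU", "Convert", "Shape"]
print(propagate_layout(chain))
# ['Convert', 'MklConv2D', 'MklReLU', 'Convert', 'Shape']
```

As on the slide, the only Convert that survives between the MKL ops and Shape is the one feeding a consumer that genuinely needs the standard layout.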
80. System Optimizations: Load Balancing
TensorFlow graphs offer opportunities for parallel execution.
Threading model:
1. inter_op_parallelism_threads = max number of operators that can be executed in parallel
2. intra_op_parallelism_threads = max number of threads to use for executing an operator
3. OMP_NUM_THREADS = MKL-DNN equivalent of intra_op_parallelism_threads
81. Performance Guide
tf.ConfigProto is used to set the inter_op_parallelism_threads and intra_op_parallelism_threads configurations of the Session object.
>>> config = tf.ConfigProto()
>>> config.intra_op_parallelism_threads = 56
>>> config.inter_op_parallelism_threads = 2
>>> tf.Session(config=config)
https://www.tensorflow.org/performance/performance_guide#tensorflow_with_intel_mkl_dnn
82. System Optimizations: Load Balancing
Incorrect settings of the threading model parameters can lead to over- or under-subscription, leading to poor performance, e.g.:
OMP: Error #34: System unable to allocate necessary resources for OMP thread:
OMP: System error #11: Resource temporarily unavailable
OMP: Hint: Try decreasing the value of OMP_NUM_THREADS.
Solution: set these parameters for your model manually; guidelines are in the TensorFlow performance guide.
83. Performance Guide
Setting the threading model correctly: we provide the best settings for popular CNN models (https://ai.intel.com/tensorflow-optimizations-intel-xeon-scalable-processor).
Example setting MKL variables with Python os.environ:
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
os.environ["KMP_SETTINGS"] = "0"
os.environ["OMP_NUM_THREADS"] = "56"
https://www.tensorflow.org/performance/performance_guide#tensorflow_with_intel_mkl_dnn
85. 85
Summary
Convolutional Neural Network with TensorFlow
Getting Intel-optimized TensorFlow is easy.
TensorFlow performance guide is the best source on performance tips.
Intel-optimized TensorFlow improves TensorFlow CPU performance by up to 14X.
Stay tuned for updates - https://ai.intel.com/tensorflow
86.
87. Leverage the Advantages of Intel’s End-to-End AI Offerings
• Training:
• Take advantage of Intel® Xeon® Scalable processors for training deep neural networks
• Download and install Intel® Optimized Caffe*
• Download and install TensorFlow* with Intel’s optimizations (pre-built wheels for Intel Architecture)
• Inference:
• Download and install the Intel® Movidius™ Neural Compute Stick SDK
• Take advantage of AI courses and training available on the Intel® Developer Zone