Intel® OpenVINO™ Toolkit
How to Accelerate AI Vision Application Development
Internet of Things Group 2
Legal Notices & Disclaimers
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice.
Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at
intel.com, or from the OEM or retailer. No computer system can be absolutely secure.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual
performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance
and benchmark results, visit http://www.intel.com/performance.
Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect
future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.
Statements in this document that refer to Intel’s plans and expectations for the quarter, the year, and the future, are forward-looking statements that involve a
number of risks and uncertainties. A detailed discussion of the factors that could affect Intel’s results and plans is included in Intel’s SEC filings, including the
annual report on Form 10-K.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current
characterized errata are available on request.
Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as
"Spectre" and "Meltdown." Implementation of these updates may make these results inapplicable to your device or system.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm
whether referenced data are accurate.
Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes.
Any differences in your system hardware, software or configuration may affect your actual performance.
Intel, the Intel logo, Pentium, Celeron, Atom, Core, Xeon, Movidius, Saffron, OpenVINO, MediaSDK and others are trademarks of Intel Corporation in the U.S.
and/or other countries.
*Other names and brands may be claimed as the property of others.
© 2018 Intel Corporation.
Internet of Things Group 3
https://software.intel.com/en-us/openvino-toolkit
4
Artificial Intelligence is the ability of machines to learn from experience, without explicit programming, in order to perform cognitive functions associated with the human mind.
Machine Learning: algorithms whose performance improves as they are exposed to more data over time.
Deep Learning: a subset of machine learning in which multi-layered neural networks learn from vast amounts of data.
5
Deep Learning Basics
Training: forward and backward passes over lots of labeled data (human, bicycle, strawberry, ...); prediction errors are propagated back to update the model weights.
Inference: a forward pass with the trained weights classifies new input ("Bicycle"?).
Did you know? Training with a large data set AND a deep (many-layered) neural network often leads to the highest-accuracy inference (chart: accuracy vs. data set size).
Internet of Things Group 6
7
OpenVINO™ toolkit: Cross-Platform Tool to Accelerate Computer Vision & Deep Learning Inference Performance
software.intel.com/openvino-toolkit
OS Support: CentOS* 7.4 (64 bit), Ubuntu* 16.04.3 LTS (64 bit), Microsoft Windows* 10 (64 bit), Yocto Project* version Poky Jethro v2.0.3 (64 bit)
Intel® Architecture-Based Platforms Support
Intel® Deep Learning Deployment Toolkit: Model Optimizer (Convert & Optimize) and Inference Engine (Optimized Inference); IR = Intermediate Representation file
Traditional Computer Vision Tools & Libraries: OpenCV*, OpenVX*, Photography Vision, Optimized Libraries (for Intel® CPU & CPU with integrated graphics)
Increase Media/Video/Graphics Performance: Intel® Media SDK (Open Source version), OpenCL™ Drivers & Runtimes (for CPU with integrated graphics)
Optimize Intel® FPGA: FPGA RunTime Environment (from Intel® FPGA SDK for OpenCL™), Bitstreams (FPGA, Linux* only)
Code Samples & 10 Pre-trained Models
OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos
All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.
Internet of Things Group 8
Discover Intel® OpenVINO™ Toolkit Capabilities
Internet of Things Group 9
Hardware Acceleration
Internet of Things Group 10
Intel Architecture
Multicore/Multithreading/Vectorization
https://software.intel.com/en-us/articles/introduction-to-intel-advanced-vector-extensions
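The linked AVX article explains the vector registers behind the "Vectorization" bullet. The sketch below is purely illustrative (hand-written intrinsics, not OpenVINO code; the CPU plugin gets this effect for you through its optimized kernels), showing eight float additions issued as one instruction:

    // AVX vectorization sketch (compile with -mavx)
    #include <immintrin.h>
    #include <cstdio>

    int main() {
        alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        alignas(32) float c[8];

        __m256 va = _mm256_load_ps(a);      // load 8 floats into a 256-bit register
        __m256 vb = _mm256_load_ps(b);
        __m256 vc = _mm256_add_ps(va, vb);  // 8 additions in a single instruction
        _mm256_store_ps(c, vc);

        for (int i = 0; i < 8; ++i) printf("%.1f ", c[i]);
        printf("\n");
        return 0;
    }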
Internet of Things Group 11
Intel Integrated Graphics
Internet of Things Group 12
Movidius Neural Compute Stick (Myriad 2)
Internet of Things Group 13
Intel® FPGA
Intel Arria 10 GX FPGA
• High-performance, multi-gigabit SERDES transceivers up to 15 Gbps
• 1,150K logic elements available (-2L speed grade)
• 53 Mb of embedded memory
On-Board Memory
• 8 GB DDR4 memory (2 banks) with error correction code (ECC)
• 1 Gb (128 MB) flash
Interfaces
• PCIe x8 Gen3 electrical, x16 mechanical
• USB 2.0 interface for debug and programming of FPGA and flash memory
• 1x QSFP+ with 4x 10GbE or 40GbE support
• Standard height, 1/2 length
• Low-profile option upon request
Copyright © 2018, Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.
Optimization Notice
OpenVINO™ toolkit Technical Specifications
14
Target Solution Platforms (Intel® Platforms and Compatible Operating Systems):
CPU
 6th-8th generation Intel® Xeon® & Core™ processors: Ubuntu* 16.04.3 LTS (64 bit), Microsoft Windows* 10 (64 bit), CentOS* 7.4 (64 bit)
 Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics: Yocto Project* Poky Jethro v2.0.3 (64 bit)
Iris® Pro & Intel® HD Graphics
 6th-8th generation Intel® Core™ processor with Intel® Iris™ Pro graphics & Intel® HD Graphics
 6th-8th generation Intel® Xeon® processor with Intel® Iris™ Pro Graphics & Intel® HD Graphics (excluding E5 product family, which does not have graphics1)
 Operating systems: Ubuntu 16.04.3 LTS (64 bit), Windows 10 (64 bit), CentOS 7.4 (64 bit)
FPGA
 Intel® Arria® FPGA 10 GX development kit, Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
 OpenCV* & OpenVX* functions must be run against the CPU or Intel® Processor Graphics (GPU)
 Operating systems: Ubuntu 16.04.3 LTS (64 bit), CentOS 7.4 (64 bit)
VPU
 Intel® Movidius™ Neural Compute Stick
 Operating systems: Ubuntu 16.04.3 LTS (64 bit), CentOS 7.4 (64 bit), Windows 10 (64 bit)
Development Platforms
 6th-8th generation Intel® Core™ and Intel® Xeon® processors: Ubuntu* 16.04.3 LTS (64 bit), Windows® 10 (64 bit), CentOS* 7.4 (64 bit)
Additional Software Requirements
 Linux* build environment required components: OpenCV 3.4 or higher, GNU Compiler Collection (GCC) 3.4 or higher, CMake* 2.8 or higher, Python* 3.4 or higher
 Microsoft Windows* build environment required components: Intel® HD Graphics Driver (latest version)†, OpenCV 3.4 or higher, Intel® C++ Compiler 2017 Update 4, CMake 2.8 or higher, Python 3.4 or higher, Microsoft Visual Studio* 2015
External Dependencies/Additional Software: View Product Site, detailed System Requirements
1Graphics drivers are required only if you use Intel® Processor Graphics (GPU).
Internet of Things Group 15
Traditional Computer Vision
Internet of Things Group 16
Typical Deep Learning Video Workload
Internet of Things Group 17
Starting with OpenCV* & OpenVX*
 Well established, open source computer vision library
 Wide variety of algorithms and functions available
 Includes Intel® Photography Vision Library
 Intel optimized functions for faster performance on Intel hardware
 Basic building blocks to speed performance, cut development time & allow customization
 All-in-one package
 Targeted at real-time, low-power applications
 Graph-based representation, optimization & execution
 11 samples included
OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
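To make the graph model concrete, here is a minimal OpenVX sketch (illustrative only; it assumes a working OpenVX implementation and uses a single Gaussian-blur node, which is not one of the bundled samples): build a graph once, verify it so the runtime can optimize it, then execute it per frame.

    // Minimal OpenVX graph sketch (C API, also compiles as C++)
    #include <VX/vx.h>

    int main() {
        const vx_uint32 width = 640, height = 480;
        vx_context context = vxCreateContext();
        vx_graph   graph   = vxCreateGraph(context);

        vx_image input  = vxCreateImage(context, width, height, VX_DF_IMAGE_U8);
        vx_image output = vxCreateImage(context, width, height, VX_DF_IMAGE_U8);

        vxGaussian3x3Node(graph, input, output);      /* one processing node */
        if (vxVerifyGraph(graph) == VX_SUCCESS) {     /* verification enables optimization */
            vxProcessGraph(graph);                    /* execute (typically once per frame) */
        }

        vxReleaseImage(&input);
        vxReleaseImage(&output);
        vxReleaseGraph(&graph);
        vxReleaseContext(&context);
        return 0;
    }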
Internet of Things Group 18
OpenCV* Samples
• PeopleDetect: demonstrates the use of the HOG descriptor.
• Background foreground segmentation: demonstrates background segmentation.
• Dense Optical Flow: demonstrates the use of dense optical flow algorithms.
• PVL Face Detection and Recognition: demonstrates the Intel PVL face detection and recognition algorithms.
• Colorization: demonstrates recoloring grayscale images with a DNN.
• OpenCL Custom Kernel: demonstrates running custom OpenCL kernels by means of the OpenCV T-API interface.
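As a flavor of what the PeopleDetect sample demonstrates, its core is OpenCV's HOG descriptor with the built-in people detector. The sketch below is a stripped-down illustration (the input file name is made up), not the bundled sample itself:

    // Minimal HOG people detection with OpenCV (C++)
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat img = cv::imread("people.jpg");   // hypothetical input image
        if (img.empty()) return 1;

        cv::HOGDescriptor hog;
        hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

        std::vector<cv::Rect> found;
        hog.detectMultiScale(img, found);         // multi-scale HOG + linear SVM

        for (const auto& r : found)
            cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);
        cv::imwrite("people_out.jpg", img);       // boxes drawn around detections
        return 0;
    }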
Internet of Things Group 19
OpenVX™ Samples: GStreamer Interoperability, Video Stabilization, Hetero Basic, Camera Tampering, Census Transform, Custom OpenCL™ Kernel, Face Detection, Auto-Contrast, Lane Detection, Motion Detection, Color Copy Pipeline
Internet of Things Group 20
Deep Learning Computer Vision
21
Trained models from Caffe*, TensorFlow* and MxNet* go through the Model Optimizer (Convert & Optimize) to produce IR files (IR = Intermediate Representation format), converted & optimized to fit all targets.
The Inference Engine loads the IR and runs inference through device plugins:
CPU Plugin (extendibility: C++)
GPU Plugin (extendibility: OpenCL™)
FPGA Plugin (extendibility: OpenCL/TBD)
Myriad Plugin (extendibility: TBD)
Model Optimizer
 What it is: Preparation step -> imports trained models
 Why important: Optimizes for performance/space with
conservative topology transformations; biggest boost is
from conversion to data types matching hardware.
Inference Engine
 What it is: High-level inference API
 Why important: Interface is implemented as dynamically
loaded plugins for each hardware type. Delivers best
performance for each type without requiring users to
implement and maintain multiple code pathways.
Trained
Model
Inference
Engine
Common API
(C++)
Optimized cross-
platform inference
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos
GPU = Intel CPU with integrated graphics processing unit/Intel® Processor Graphics
Intel® Deep Learning Deployment Toolkit (DLDT): Take Full Advantage of the Power of Intel® Architecture for Deep Learning
All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.
Internet of Things Group 22
Deep Learning Inference Workflow
Each target has a matching plugin/kernel library: CPU (Xeon/Core/Atom) uses MKL-DNN, GPU uses clDNN, FPGA uses DLA, and the VPU uses the Myriad plugin.
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
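In the 2018-era Inference Engine API this mapping is exposed simply as the target device you request when loading a plugin; the sketch below changes only that one argument (enum values other than eGPU, which appears in the API example later in this deck, are assumptions from the same TargetDevice enum, and pluginDirs is the plugin search-path list used there):

    // Selecting the hardware plugin (2018-era Inference Engine API sketch)
    using namespace InferenceEngine;
    // CPU (MKL-DNN), GPU (clDNN), FPGA (DLA) or VPU (Myriad): same code,
    // different TargetDevice value (names other than eGPU are assumed).
    auto engine_ptr = PluginDispatcher(pluginDirs).getSuitablePlugin(TargetDevice::eCPU);
    InferencePlugin plugin(engine_ptr);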
Internet of Things Group 23
Public Models
Image Classification
Object Detection
Semantic Segmentation
Neural Style Transfer
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
Intel Confidential 24
Supported TensorFlow* Models
Converting Your TensorFlow* Model: https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow

Slim classification models:
Inception v1: inception_v1_2016_08_28.tar.gz
Inception v2: inception_v2_2016_08_28.tar.gz
Inception v3: inception_v3_2016_08_28.tar.gz
Inception v4: inception_v4_2016_09_09.tar.gz
Inception ResNet v2: inception_resnet_v2_2016_08_30.tar.gz
MobileNet v1 128: mobilenet_v1_0.25_128.tgz
MobileNet v1 160: mobilenet_v1_0.5_160.tgz
MobileNet v1 224: mobilenet_v1_1.0_224.tgz
MobileNet v2 1.4 224: mobilenet_v2_1.4_224.tgz
MobileNet v2 1.0 224: mobilenet_v2_1.0_224.tgz
NasNet Large: nasnet-a_large_04_10_2017.tar.gz
NasNet Mobile: nasnet-a_mobile_04_10_2017.tar.gz
ResidualNet-50 v1: resnet_v1_50_2016_08_28.tar.gz
ResidualNet-50 v2: resnet_v2_50_2017_04_14.tar.gz
ResidualNet-101 v1: resnet_v1_101_2016_08_28.tar.gz
ResidualNet-101 v2: resnet_v2_101_2017_04_14.tar.gz
ResidualNet-152 v1: resnet_v1_152_2016_08_28.tar.gz
ResidualNet-152 v2: resnet_v2_152_2017_04_14.tar.gz
VGG-16: vgg_16_2016_08_28.tar.gz
VGG-19: vgg_19_2016_08_28.tar.gz

TensorFlow Object Detection API models (frozen):
SSD MobileNet V1: ssd_mobilenet_v1_coco_2018_01_28.tar.gz
SSD MobileNet V1 0.75 Depth: ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz
SSD MobileNet V1 PPN: ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.tar.gz
SSD MobileNet V1 FPN: ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz
SSD ResNet50 FPN: ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz
SSD MobileNet V2: ssd_mobilenet_v2_coco_2018_03_29.tar.gz
SSD Inception V2: ssd_inception_v2_coco_2018_01_28.tar.gz
Faster R-CNN Inception V2: faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
Faster R-CNN ResNet 50: faster_rcnn_resnet50_coco_2018_01_28.tar.gz
Faster R-CNN ResNet 50 Low Proposals: faster_rcnn_resnet50_lowproposals_coco_2018_01_28.tar.gz
Faster R-CNN ResNet 101: faster_rcnn_resnet101_coco_2018_01_28.tar.gz
Faster R-CNN ResNet 101 Low Proposals: faster_rcnn_resnet101_lowproposals_coco_2018_01_28.tar.gz
Faster R-CNN Inception ResNet V2: faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz
Faster R-CNN Inception ResNet V2 Low Proposals: faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco_2018_01_28.tar.gz
Faster R-CNN NasNet: faster_rcnn_nas_coco_2018_01_28.tar.gz
Faster R-CNN NasNet Low Proposals: faster_rcnn_nas_lowproposals_coco_2018_01_28.tar.gz
Mask R-CNN Inception ResNet V2: mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz
Mask R-CNN Inception V2: mask_rcnn_inception_v2_coco_2018_01_28.tar.gz
Mask R-CNN ResNet 101: mask_rcnn_resnet101_atrous_coco_2018_01_28.tar.gz
Mask R-CNN ResNet 50: mask_rcnn_resnet50_atrous_coco_2018_01_28.tar.gz
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
Internet of Things Group 25
Run Model Optimizer
python mo.py
--input_model alexnet.caffemodel
--input data
--input_shape [1,3,227,227]
--data_type FP32
--log_level DEBUG
Output: alexnet.xml, alexnet.bin
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
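The Model Optimizer can also emit a half-precision IR, which the Myriad (VPU) plugin expects and which is usually the better fit for integrated graphics. A hedged variant of the command above (--output_dir is an existing mo.py option; the directory name is arbitrary):

    python mo.py --input_model alexnet.caffemodel --input data --input_shape [1,3,227,227] --data_type FP16 --output_dir FP16/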
Intel Confidential 26
python mo.py --input_model inception_v3_frozen.pb --input_shape [1,299,299,3] --mean_values [127.5,127.5,127.5] --scale_values [127.5,127.5,127.5]
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: TF_inception_v3/inception_v3_frozen.pb
- Path for generated IR: TF_inception_v3/FP32
- IR output name: inception_v3_frozen
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,299,299,3]
- Mean values: [127.5,127.5,127.5]
- Scale values: [127.5,127.5,127.5]
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- Update the configuration file with input/output node names: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 1.2.110.59f62983
[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: TF_inception_v3/FP32/inception_v3_frozen.xml
[ SUCCESS ] BIN file: TF_inception_v3/FP32/inception_v3_frozen.bin
[ SUCCESS ] Total execution time: 4.97 seconds.
TensorFlow* Inception V3 Model
classification_sample.exe -m inception_v3_frozen.xml -i .democar.png
[ INFO ] InferenceEngine:
API version ............ 1.1
Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.1
Build .................. win_20180511
Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
inception_v3_frozen.xml
inception_v3_frozen.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (224, 224)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Average running time of one iteration: 91.8117 ms
[ INFO ] Processing output blobs
Top 10 results:
Image democar.png
752 0.2250938 label racer, race car, racing car
818 0.1749035 label sports car, sport car
512 0.1381157 label convertible
628 0.0980638 label limousine, limo
437 0.0751940 label beach wagon, station wagon, wagon, estate car, beach waggon,
station waggon, waggon
480 0.0682043 label car wheel
582 0.0508757 label grille, radiator grille
706 0.0157829 label passenger car, coach, carriage
469 0.0073989 label cab, hack, taxi, taxicab
865 0.0056826 label tow truck, tow car, wrecker
[ INFO ] Execution successful
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
Internet of Things Group 27
Pretrained Models
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
Internet of Things Group 28
person-detection-retail-0001
This model is for a pedestrian detector used for Retail scenarios.
The model is based on a backbone with hyper-feature + R-FCN.
Metric Value
AP 80.91%
Pose coverage Standing upright, parallel to the
image plane
Support of occluded pedestrians YES
Occlusion coverage <50%
Min pedestrian height 80 pixels (on 1080p)
Max objects to detect 200
GFlops 12.584487
MParams 3.243982
Source framework Caffe*
Performance (FPS) and System Configuration:
Caffe* CPU (Intel® MKL): 3.80
Inference Engine (Intel® MKL-DNN): 17.45
Inference Engine (clDNN FP16): 15.25
Inference Engine (clDNN FP32): 10.29
OpenCV* CPU: 9.27
Average Precision (AP) is defined as the area under the precision/recall curve.
The validation dataset consists of about 50,000 images from about 100 scenes.
System: Ubuntu* 16.04, Intel® Core™ i5-6500 CPU @ 2.90GHz (fixed), GPU GT2 @ 1.00GHz (fixed), DDR4 PC17000/2133MHz
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
Internet of Things Group 29
Steps: read the network description + weights from the IR, create an Inference Engine instance with a hardware plugin, load the network, then load input blobs and infer.
// Read the IR: network description (.xml) + weights (.bin)
CNNNetReader network_reader;
network_reader.ReadNetwork("Model.xml");
network_reader.ReadWeights("Model.bin");

// Create an engine instance with a hardware plugin (GPU here);
// pluginDirs is the list of directories to search for plugins
InferenceEngine::InferenceEnginePluginPtr engine_ptr =
    InferenceEngine::PluginDispatcher(pluginDirs).getSuitablePlugin(TargetDevice::eGPU);
InferencePlugin plugin(engine_ptr);

// Load the network onto the device and create an inference request
auto network = network_reader.getNetwork();
auto executable_network = plugin.LoadNetwork(network, {});
auto infer_request = executable_network.CreateInferRequest();

// Fill the input blobs, then run inference
auto inputInfo = network.getInputsInfo();
for (auto & item : inputInfo) {
    auto input_name = item.first;
    auto input = infer_request.GetBlob(input_name);
    // ... copy preprocessed image data into `input` here ...
}
infer_request.Infer();
Using the Inference Engine API
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
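The snippet above stops at Infer(). A hedged continuation in the same 2018-era API (reusing the network and infer_request objects created above, and taking the network's first output as the output name) would read the result back like this:

    // Read back the output blob after Infer() (continuation sketch)
    auto outputInfo = network.getOutputsInfo();          // name -> DataPtr map
    auto output_name = outputInfo.begin()->first;        // first (only) output
    auto output = infer_request.GetBlob(output_name);
    float* scores = output->buffer().as<float*>();       // raw class scores
    size_t num_classes = output->size();                 // batch 1 assumed
    // argmax over scores[0 .. num_classes-1] gives the top-1 class id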
Internet of Things Group 30
Deep Learning Object Classification
Output is an array of category possibilities
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
928-0.543935-ice cream, icecream
// Read Network, Create Engine and Load Network
Mat frame,frame2;
for (;;) {
cap >> frame; //OpenCV video capture
//resize to expected size (in IR .xml)
resize(frame,frame2,Size(227,227));
//run inference
long unsigned framesize= frame2.rows*frame2.step1();
ConvertImageToInput(frame2.data, framesize, *input);
sts = _plugin->Infer(*input, *output, &dsc);
//get top classifier label
int blobsize=output->size();
float *data=output->data();
float max=0;
int maxidx=0;
for (int i1=0; i1<blobsize; i1++) {
if (data[i1]>max) {
max=data[i1];
maxidx=i1;
}
}
// do something with classification data
imshow( "frame", frame2 );
if (waitKey(30) >= 0) break;
}
Internet of Things Group 31
Pipeline: Load Input Image(s)
Run Inference 1: model vehicle-license-plate-detection-barrier-0007 (detects vehicles)
Run Inference 2: model vehicle-attributes-recognition-barrier-0010 (classifies vehicle attributes)
Run Inference 3: model license-plate-recognition-barrier-0001 (detects license plates)
Display Results
Asynchronous and Heterogeneous
Workflow: 1. Train -> 2. Prepare model -> 3. Inference
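The "Asynchronous and Heterogeneous" note maps to two API features. A hedged sketch of the asynchronous part, using the same 2018-era Inference Engine API as the earlier example (exec_net stands for an ExecutableNetwork loaded as shown there):

    // Asynchronous execution: start a request, do other work, then wait
    auto request = exec_net.CreateInferRequest();
    request.StartAsync();                 // returns immediately
    // ... decode/preprocess the next frame, or drive another model, here ...
    request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);

For heterogeneous execution, that API family lets you request a HETERO device (for example the device string "HETERO:FPGA,CPU") so layers an accelerator cannot run fall back to the CPU; the exact plugin-loading call is version dependent, so treat it as an assumption to verify against the local IE documentation.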
Internet of Things Group 32
OpenVINO Documentation
33
OpenVINO Reference Documents
IE document (local)
<openvino>/deployment_tools/documentation/index.html
OpenVINO online documents
https://software.intel.com/en-us/openvino-toolkit/documentation/featured
OpenVINO Get Started Guide
https://software.intel.com/en-us/openvino-toolkit/documentation/get-started
34
Model / Layer Support information
• Open IE document (local)
• Go to “Converting Your XXXX Model”
35
Inference Engine API Reference Manual
Open IE document (local)
Enter keywords in the search box and incremental search results will be displayed
Copyright © 2018, Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.
Optimization Notice
36
Benefits of the OpenVINO™ toolkit
Harness the Power of Intel® Processors: CPU, CPU with Integrated Graphics, FPGA, VPU
Speed development: Reduce time using a library of optimized OpenCV* & OpenVX* functions & 15+ samples. Develop once, deploy for current & future Intel-based devices, using a common API, pre-trained models & computer vision algorithms.
Innovate & customize: Use the increasing repository of OpenCL™ starting points in OpenCV* to add your own unique code.
Accelerate performance: Access Intel computer vision accelerators. Speed code performance. Supports heterogeneous processing & asynchronous execution.
Integrate deep learning: Unleash convolutional neural network (CNN) based deep learning inference, with up to a 19.9x increase1.
OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos
1Performance increase comparing certain standard framework models vs. Intel-optimized models in the Intel® Deep Learning Deployment Toolkit. Performance results are based on testing as of June 13, 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Testing by Intel as of June 13, 2018. See Benchmark slide 14 for configuration details.
Placeholder Footer Copy / BU Logo or Name Goes Here
