In this comprehensive workshop, learn how to use TensorFlow, how to build data pipelines, and how to implement a simple deep learning model using TensorFlow Keras. Enhance your knowledge and skills by gaining a better understanding of TensorFlow with all the resources we have available for you!
2. AGENDA
1. Optimized GPU systems for TensorFlow
Build your personal TensorFlow workstation
Grow from a single system to large clusters seamlessly
Manage denser deployments (containers)
2. Overview of OpenCV and TensorFlow 2.0
Comparative Analysis of OpenCV and TensorFlow
How we can leverage OpenCV in the model training process by performing Data Augmentation
3. Highlights of the Important changes from TF 1.4
Highlights of the core changes made in TensorFlow 2.0 that simplify the deep learning model building steps
4. Core Modules and API of TensorFlow 2.0
TensorFlow 2.0 key modules: why they are important and where we can use them in the deep learning model building steps
5. How to build Data pipelines
Understand why a data pipeline is an essential component of the deep learning model building steps
6. How to implement a Simple Deep learning model using tf.Keras
Walkthrough of the steps involved in implementing a fully connected deep learning model using TensorFlow Keras
Walkthrough of the steps involved in leveraging transfer learning to build a robust deep learning model
4. Solutions that span the entire Data Center
Product Portfolio
Solution areas: Cloud | Big Data | HPC | Virtualization | AI / Deep Learning

SERVER
• HPC Servers
• Mission Critical x86
• Storage Servers
• High-Density Servers
• GPU Servers

WORKSTATIONS
• GPU Workstations
• Tower | Rack
• Liquid Cooling

STORAGE
• Unified Storage
• Storage Array
• Archival
• JBOD
• Ceph Storage

NETWORKING
• InfiniBand
• Omni-Path Architecture

SOLUTIONS
• Tyrone Kubernetes Platform
• HPC Cluster
• GPU-Optimised Supercomputer
• HPC on Cloud
• SMP Solutions
• Management Tools
• Analytics / Data Insights
• HPC cluster parallel file systems
• Inferencing
• Hyper-converged / Virtual SAN
• Mixed Workloads
• GPU Systems
5. Tyrone AI Ready GPU System Portfolio
(Ratios shown as GPU:CPU)

GPU-Optimized, Tower / Rack – 1U/2U/4U
• DS400TG-48R – 4:2 (4U)
• DS400TOG-424RT – 10:2 (4U), Single Root
• DS400TQV-12RT – 4:2 (1U)
• DS400TG-12RT – 4:2 (1U)
• DS400TGH-28R – 6:2 (2U)
• DS400TG-14R – 3:2 (1U)
• SS400TG-16T / SS400TG-13T – 2:1 (1U)
• DS400TG-424RT – 20:2 (4U) – NEW MODEL!!

Rack – 4U/10U
• DS400TOG-424RT – 8:2 (4U), Dual Root
• DS400TQV-416RT – 8:2 (4U), NVLink
• DS400NG16-1016RT – 16:2 (10U) – NEW MODEL!!

Personal Workstations
• SS400TR-54R – 5U
6. Your Personal AI Supercomputer
Delivers 4X FASTER TRAINING than other GPU-based systems
• Power on to deep learning in minutes
• Pre-installed with powerful deep learning software
• Extend workloads from your desk to the cloud in minutes
8. Tyrone Container Runtime Engine
Stack: Infrastructure → Host Operating System → Tyrone Container Runtime Engine → containers (each with its own Bins/Libs and App)
• From cloud to your system in minutes
• Repository of 50 containerized applications
• 100s of containers: run multiple applications at the same time
• Scales out to a cluster
9. Tyrone AI Oriented 8 & 10 GPU Systems

Tyrone 8 GPU – Dual Root System (DS400TOG-424RT | 8:2, 4U)
• Max CPU: 56 cores
• With 8x 2080 Ti: 100+ TFLOPS single precision
• With 8x Tesla V100 32GB: 100+ TFLOPS single precision

Tyrone 10 GPU – Single Root System (DS400TOG-424RT | 10:2, 4U)
• Max CPU: 56 cores
• With 10x 2080 Ti: 130+ TFLOPS single precision
• With 10x Tesla V100 32GB: 140+ TFLOPS single precision

Tyrone 8 GPU – NVLink System (DS400TQV-416RT | 8:2, 4U)
• Max CPU: 56 cores
• With 8x Tesla V100 32GB NVLink: 125+ TFLOPS single precision
10. Tyrone AI Oriented 20 GPU Inference System
NEW MODEL!! DS400TG-424RT | 20:2 (4U)

• Processor Support: Dual Xeon Scalable Processors; 3 UPI
• Memory Capacity: 24 DIMMs ECC DDR4 2666 MHz
• Expansion Slots: 20x PCIe 3.0 x16 for single-wide GPU cards; 1x PCIe 3.0 x8 (FHFL x16 slot)
• I/O Ports: 1x VGA, 2x 10GBase-T LAN, 4x USB 3.0, 1x IPMI dedicated LAN port, 1x M.2 NVMe
• System Management: onboard BMC (Baseboard Management Controller) supporting IPMI 2.0
• Drive Bays: 24 hot-swap 3.5" drive bays
• System Cooling: 8 heavy-duty fans optimized to support 8 GPU cards
• Power Supply: 4x 2000W (2+2) Titanium Level efficiency redundant power supplies

[System diagram: dual CPUs linked by 3 UPI, with LOM and four PCIe switches fanning out to the numbered GPU slots]

Turing T4 GPU configuration
• Max CPU: 56 cores
• With 20x T4 GPUs: 160+ TFLOPS single precision; 1300 TFLOPS FP16/FP32 mixed precision
11. Tyrone AI Oriented 16 GPU System
NEW MODEL!! DS400NG16-1016RT | 16:2 (10U)

• Processor Support: Dual Xeon Scalable Processors; 3 UPI
• GPUs: 16x Tesla V100 32GB SXM3
• Memory Capacity: 24 DIMMs ECC DDR4 2666 MHz
• Expansion Slots: 16x PCIe 3.0 x16 LP (via RDMA for IB EDR); 2x PCIe 3.0 x16 LP
• I/O Ports: 1x VGA, 2x 10GBase-T LAN, 3x USB 3.0, 1x IPMI dedicated LAN port
• Drives: 16x NVMe U.2 2.5" drive bays, 6x SATA 2.5" drive bays, 2x M.2 NVMe
• System Cooling: 14 heavy-duty fans
• Power Supply: 6x 3000W Titanium Level efficiency power supplies
• NVLink + NVSwitch based high-performance GPU interconnect
• 10U system; includes CPU head node

[System diagram: X-2 boards layout]

Performance
• Max CPU: 56 cores
• With 16x Tesla V100 32GB NVLink: 250+ TFLOPS single precision
13. Topics Covered in Session 1
• Overview of OpenCV and TensorFlow 2.0
• Highlights of the important changes from TF 1.4
• Core modules and API of TensorFlow 2.0
• How to build data pipelines
• How to implement a simple deep learning model using tf.Keras
14. AGENDA (same as slide 2)
15. Overview of OpenCV and TensorFlow 2.0
In this topic we will highlight some of the core functionality of OpenCV and how it compares with TensorFlow 2.0:
• How we can leverage the OpenCV framework along with TensorFlow for building deep learning models
16. OpenCV Functionalities in a Nutshell
The OpenCV library consists of various implementations of algorithms used for feature detection:
• Feature matching
• Corner detection algorithms
• Face feature detection
• Image transformation
• Object detection
17. Leverage OpenCV Feature Detection Algorithms
Pipeline: OpenCV performs the feature detection and extraction process, the resulting tensors feed TensorFlow model training, and the output is a trained model.
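The augmentation idea from the agenda can be sketched in a few lines. This illustration uses plain NumPy so it runs without OpenCV installed; in OpenCV itself, `cv2.flip(image, 1)` performs the same horizontal flip. The image values below are made up for illustration.

```python
import numpy as np

def augment_flip(image):
    """Horizontal flip: the same transform as cv2.flip(image, 1)."""
    return image[:, ::-1]

def augment_brightness(image, delta):
    """Add a brightness offset, clipping to the valid 8-bit range."""
    return np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)

# A tiny 2x3 "image": each augmented copy becomes an extra training sample.
img = np.array([[10, 20, 30],
                [40, 50, 60]], dtype=np.uint8)
flipped = augment_flip(img)           # columns reversed
brighter = augment_brightness(img, 200)  # 60 + 200 clips to 255
```

Applying a handful of such transforms per image multiplies the effective size of the training set without collecting new data.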
19. Highlights of the Important Changes from TF 1.4
• Eager Execution: TF 2.0's default execution mode is eager execution, which eliminates manual graph compilation.
• Keras Fully Integrated: the Keras API is fully integrated with TF 2.0 for quick model building and implementation, including support for building data pipelines, estimators, and eager execution.
• Consolidation of Modules: APIs are consolidated and redundant ones removed, making TensorFlow more productive, easier to use, and faster to implement with.
• Data Pipelines: the Data API helps us build complex input pipelines from various sources and handle large volumes of data in many formats.
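A minimal sketch of what eager execution means in practice: the matrix multiply below returns a concrete value immediately, with no separate graph construction and `session.run()` step as TF 1.x required.

```python
import tensorflow as tf

# TF 2.0 runs eagerly by default: operations execute as they are called.
a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
c = tf.matmul(a, b)               # evaluated immediately, no session needed
result = float(c.numpy()[0, 0])   # 1*3 + 2*4 = 11.0
```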
20. Core Modules and API of TensorFlow 2.0
Core TensorFlow modules and APIs for end-to-end deep learning model building support:
• Keras: easy-to-use TensorFlow API for quick prototyping of a deep learning model; with the latest integrations and modifications it can be used for production as well.
• Data: the Data API helps users build a data pipeline for training a deep learning model.
• Accelerators: build a distribution strategy for model training using TensorFlow accelerators; train on multiple GPUs.
• TF Hub: leverage transfer learning using TF Hub, a library of reusable machine learning modules.
• TF Functions: construct a TensorFlow graph using the Functions module.
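As a sketch of the Functions module listed above: decorating a Python function with `tf.function` traces it into a TensorFlow graph, so repeated calls run the compiled graph rather than individual eager ops. The function name, shapes, and values here are illustrative assumptions.

```python
import tensorflow as tf

@tf.function
def dense_step(x, w, b):
    # Traced into a graph on first call; subsequent calls reuse the graph.
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.ones([1, 3])
w = tf.ones([3, 2])
b = tf.zeros([2])
out = dense_step(x, w, b)   # each element: relu(1 + 1 + 1) = 3.0
```

The eager and graph versions compute the same values; `tf.function` simply lets TensorFlow optimize and reuse the traced computation.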
21. Deep Learning Architectures

Fully Connected Networks
A fully connected neural network consists of a series of fully connected layers. Each output dimension depends on each input dimension:

y_i = σ(w_{i,1} x_1 + ⋯ + w_{i,m} x_m)

Here, σ is a nonlinear function (for now, think of σ as the sigmoid function introduced in the previous chapter), and the w_{i,j} are learnable parameters in the network. The full output y is then

y = [σ(w_{1,1} x_1 + ⋯ + w_{1,m} x_m), …, σ(w_{n,1} x_1 + ⋯ + w_{n,m} x_m)]

Convolutional Neural Networks
A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm that can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and differentiate one from the other. The pre-processing required in a ConvNet is much lower than for other classification algorithms: while in primitive methods filters are hand-engineered, with enough training ConvNets can learn these filters/characteristics themselves.

Recurrent Neural Networks
• The idea behind RNNs is to make use of sequential information. In a traditional neural network we assume that all inputs (and outputs) are independent of each other.
• If you want to predict the next word in a sentence, you had better know which words came before it. RNNs are called recurrent because they perform the same task for every element of a sequence, with the output depending on the previous computations.
• Another way to think about RNNs is that they have a "memory" which captures information about what has been calculated so far.

A deep architecture has an advantage over shallow architectures when dealing with complex learning problems.
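The fully connected layer equation above can be sketched in a few lines of NumPy; the weight matrix and input values here are arbitrary illustrations, not parameters from a trained network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fully_connected(x, W):
    """y_i = sigmoid(w_{i,1} x_1 + ... + w_{i,m} x_m) for each output i."""
    return sigmoid(W @ x)

W = np.array([[0.5, -0.5],     # n = 2 outputs, m = 2 inputs
              [1.0,  1.0]])
x = np.array([2.0, 2.0])
y = fully_connected(x, W)      # y[0] = sigmoid(1 - 1) = sigmoid(0) = 0.5
```

In a deep network, layers like this are stacked, with each layer's output y becoming the next layer's input x.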
22. Data Pipelines using TensorFlow 2.0
Flow: Data → Data Pipeline → Model Training
• Build the data pipeline using tf.data
• Design complex input data pipelines and incorporate transformation functions as part of them
• Data source: a data source constructs a Dataset from data stored in memory or in one or more files.
• The API introduces a tf.data.Dataset abstraction that represents a sequence of elements, in which each element consists of one or more components.
• The transformed data is used to train the model by sending the data in batches.
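The source → transformation → batching flow above can be sketched with `tf.data`; the data values and the doubling transform are illustrative only.

```python
import tensorflow as tf

# Data source: an in-memory tensor (could equally be files on disk).
features = tf.range(8, dtype=tf.float32)

# Pipeline: source -> transformation -> batching.
dataset = (tf.data.Dataset.from_tensor_slices(features)
           .map(lambda x: x * 2.0)     # transformation function
           .batch(4))                  # model receives data in batches

batches = [b.numpy().tolist() for b in dataset]
```

In training, the batched dataset is passed straight to `model.fit(dataset)`, and tf.data overlaps the input processing with the training computation.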
23. TensorFlow Keras
Workflow: Model → Layers → Compile → Training
• Model: construct the overall model structure, a framework for the model.
• Layers: configure layers based on the deep learning model type: convolutional layers, recurrent layers, dense layers, etc.
• Compile: assign an optimizer and loss function, then compile the model.
• Training: train the model for the number of epochs provided.
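The four steps above map directly onto tf.Keras calls. A minimal sketch on synthetic data; the layer sizes, optimizer, and epoch count are arbitrary choices for illustration.

```python
import numpy as np
import tensorflow as tf

# Model + Layers: a small fully connected binary classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Compile: assign an optimizer and a loss function.
model.compile(optimizer="adam", loss="binary_crossentropy")

# Training: synthetic data, trained for the number of epochs provided.
x = np.random.rand(32, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")
history = model.fit(x, y, epochs=2, batch_size=8, verbose=0)
```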
24. Real World Case Study using TensorFlow – Automated Gas Chamber Control
Inputs:
• Input air is sent through two different inlets, as primary input air and secondary input air.
• The input air contains oxygen that helps in burning the biomass.
• The temperature and carbon monoxide content have to be maintained at the proper standard in order to obtain a high-quality output flue gas.

Today a human operator maintains the output temperature and carbon monoxide content of the flue gas by adjusting the temperature and pressure each time there is a difference at the output. This is where a deep learning algorithm comes in.

Solution:
• A TensorFlow-based RNN LSTM model analyses the sequence data and provides a mechanism for automated gas chamber control.
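A sketch of what such an RNN LSTM model could look like in tf.Keras. The window length, number of sensor channels, and layer sizes below are assumptions for illustration, not details from the actual deployment.

```python
import tensorflow as tf

# Hypothetical input: windows of 30 time steps x 4 sensor readings
# (e.g. temperature, pressure, oxygen, CO), predicting the next CO level.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(30, 4)),
    tf.keras.layers.LSTM(16),     # summarizes the sensor sequence
    tf.keras.layers.Dense(1),     # regression output
])
model.compile(optimizer="adam", loss="mse")

dummy = tf.zeros([2, 30, 4])          # two placeholder sensor windows
pred = model.predict(dummy, verbose=0)
```

The LSTM's internal state acts as the "memory" described on the architectures slide, letting the model condition its control output on the recent sensor history.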
26. Contact Us
Get in touch with our AI experts
Write to us at ai@netwebtech.com
India North Hirdey - hirdey.vikram@netwebindia.com
India West Navin - navin@netwebindia.com
India South Niraj - niraj@netwebindia.com
India East Vivek - vivek@netwebindia.com
Singapore Anupriya - Anupriya@netwebtech.com
Indonesia Agam - agam@netwebtech.com
UAE Arun - arun@netwebtech.com