Applying your
Convolutional Neural Networks
Logistics
• We can’t hear you…
• Recording will be available…
• Slides will be available…
• Code samples and notebooks will be available…
• Queue up Questions…
Accelerate innovation by unifying data science, engineering
and business
• Founded by the original creators of Apache Spark
• Contributes 75% of the open source code, 10x more than
any other company
• Trained 100k+ Spark users on the Databricks platform
VISION
WHO WE ARE
Unified Analytics Platform powered by Apache Spark™PRODUCT
About our speaker
Denny Lee
Technical Product Marketing Manager
Former:
• Senior Director of Data Sciences Engineering at SAP Concur
• Principal Program Manager at Microsoft
• Azure Cosmos DB Engineering Spark and Graph Initiatives
• Isotope Incubation Team (currently known as HDInsight)
• Bing’s Audience Insights Team
• Yahoo!’s 24TB Analysis Services cube
Deep Learning Fundamentals Series
This is a three-part series:
• Introduction to Neural Networks
• Training Neural Networks
• Applying your Convolutional Neural Network
This series makes use of Keras (TensorFlow backend), but as it is a
fundamentals series, we are focusing primarily on the concepts.
Current Session: Applying Neural Networks
• Diving further into CNNs
• CNN Architectures
• Convolutions at Work!
A quick review
Overfitting and underfitting
Cost function
Source: https://bit.ly/2IoAGzL
For this linear regression example, to determine the best slope p for
the line y = x ⋅ p, we can calculate a cost function, such as Mean
Square Error, Mean Absolute Error, Mean Bias Error, SVM Loss, etc.
For this example, we’ll use the sum of squared differences:

cost = ∑ |t − y|²
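To make this concrete, here is a minimal NumPy sketch (the data values below are hypothetical) that evaluates this cost for a candidate slope p:

```python
import numpy as np

# Hypothetical toy data drawn from y = x * p with a true slope of 2.0
x = np.array([1.0, 2.0, 3.0, 4.0])
t = 2.0 * x  # targets

def cost(p):
    """Sum of squared differences between targets t and predictions y = x * p."""
    y = x * p
    return np.sum((t - y) ** 2)

print(cost(1.5))  # poor slope -> cost of 7.5
print(cost(2.0))  # true slope -> cost of 0.0
```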
Gradient Descent Optimization
Source: https://bit.ly/2IoAGzL
Small Learning Rate
Source: https://bit.ly/2IoAGzL
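These frames step the parameter downhill one update at a time. A minimal gradient descent sketch for the same toy problem (values hypothetical); changing learning_rate shows both the slow-convergence and overshoot behaviors called out later:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
t = 2.0 * x  # targets from the true slope p = 2.0

def grad(p):
    # Derivative of cost(p) = sum((t - x*p)^2) with respect to p
    return -2.0 * np.sum(x * (t - x * p))

p = 0.0
learning_rate = 0.01  # much smaller -> slow convergence; ~0.1 here -> overshoots and diverges
for step in range(50):
    p = p - learning_rate * grad(p)
print(round(p, 4))  # approaches the true slope 2.0
```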
Hyperparameters: Activation Functions?
• Good starting point: ReLU
• ReLU appears in many neural network samples: Keras MNIST, TensorFlow CIFAR10 Pruning, etc.
• Note that each activation function has its own strengths and weaknesses.  A good quote
on activation functions from CS231N summarizes the choice well:
“What neuron type should I use?” Use the ReLU non-linearity, be careful with your learning rates
and possibly monitor the fraction of “dead” units in a network. If this concerns you, give Leaky
ReLU or Maxout a try. Never use sigmoid. Try tanh, but expect it to work worse than ReLU/Maxout.
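A minimal sketch of this advice in the series’ Keras (TensorFlow backend) setting; the layer sizes are hypothetical, with ReLU as the default and LeakyReLU as the drop-in alternative:

```python
from tensorflow import keras
from tensorflow.keras import layers

# ReLU as the default hidden activation, per the CS231n advice
model = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    # Alternative if many units die: layers.Dense(128), layers.LeakyReLU(alpha=0.01),
    layers.Dense(10, activation="softmax"),
])
model.summary()
```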
Simplified Two-Layer ANN
Input Hidden Output
x1 → Après-Ski’er
x2 → Shredder
h1 → weather
h2 → powder
h3 → driving
Do I snowboard this weekend?
Simplified Two-Layer ANN
[Diagram: inputs x1 = 1, x2 = 1 feed the hidden layer through weights 0.8, 0.2, 0.7 (from x1) and 0.6, 0.9, 0.1 (from x2)]
h1 = σ(1 × 0.8 + 1 × 0.6) = 0.80
h2 = σ(1 × 0.2 + 1 × 0.9) = 0.75
h3 = σ(1 × 0.7 + 1 × 0.1) = 0.69
Simplified Two-Layer ANN
[Diagram: hidden activations h1 = 0.80, h2 = 0.75, h3 = 0.69 feed the output through weights 0.2, 0.8, 0.5]
out = σ(0.2 × 0.8 + 0.8 × 0.75 + 0.5 × 0.69) = σ(1.105) = 0.75
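This forward pass can be reproduced in a few lines of NumPy, using the weights from the diagrams above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 1.0])               # inputs x1, x2
W_hidden = np.array([[0.8, 0.2, 0.7],  # weights x1 -> h1, h2, h3
                     [0.6, 0.9, 0.1]]) # weights x2 -> h1, h2, h3
w_out = np.array([0.2, 0.8, 0.5])      # weights h1, h2, h3 -> output

h = sigmoid(x @ W_hidden)              # [0.80, 0.75, 0.69]
out = sigmoid(h @ w_out)               # sigmoid(~1.105) ~= 0.75
print(np.round(h, 2), round(float(out), 2))
```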
Backpropagation
[Diagram: Input → Hidden → Output; the forward pass produces 0.75 while the target is 0.85, leaving an error of 0.10 that is distributed back through the weights]
• Backpropagation: calculate the gradient of the cost function in a neural network
• Used by the gradient descent optimization algorithm to adjust the weights of neurons
• Also known as backward propagation of errors, as the error is calculated and distributed back through the network of layers
Sigmoid function (continued)
Output is not zero-centered: if all of a neuron’s inputs are positive,
then during backpropagation the gradients on its weights will be all
positive or all negative, creating zig-zagging dynamics during
gradient descent.
Source: https://bit.ly/2IoAGzL
Learning Rate Callouts
• Too small: it may take too long to reach the minimum
• Too large: it may skip over the minimum altogether
Which Optimizer?
“In practice Adam is currently recommended as
the default algorithm to use, and often works
slightly better than RMSProp. However, it is often
also worth trying SGD+Nesterov Momentum as
an alternative.”
 Andrej Karpathy, et al, CS231n
Source: https://goo.gl/2da4WY
Comparison of Adam to Other Optimization Algorithms Training a Multilayer Perceptron. Taken from Adam: A Method for Stochastic Optimization, 2015.
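In Keras terms, a sketch of the two recommended choices (the learning rates shown are common defaults, not prescriptions):

```python
from tensorflow import keras

# Adam as the default optimizer, per the CS231n recommendation
adam = keras.optimizers.Adam(learning_rate=0.001)

# SGD + Nesterov momentum as the alternative worth trying
sgd_nesterov = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)

# Hypothetical usage on a compiled model:
# model.compile(optimizer=adam, loss="categorical_crossentropy", metrics=["accuracy"])
```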
Optimization on loss surface contours
Adaptive algorithms converge quickly and find the right direction for the parameters.
In comparison, SGD is slow, and momentum-based methods overshoot.
Source: http://cs231n.github.io/neural-networks-3/#hyper
Image credit: Alec Radford
Optimization on saddle point
Notice how SGD gets stuck near the top, while the adaptive techniques optimize the fastest.
Source: http://cs231n.github.io/neural-networks-3/#hyper
Image credit: Alec Radford
Good References
• Suki Lau's Learning Rate Schedules and Adaptive Learning Rate Methods for
Deep Learning
• CS231n Convolutional Neural Networks for Visual Recognition
• Fundamentals of Deep Learning
• ADADELTA: An Adaptive Learning Rate Method
• Gentle Introduction to the Adam Optimization Algorithm for Deep Learning
Convolutional Networks
Convolutional Neural Networks
• Similar to Artificial Neural Networks, but CNNs (or ConvNets) make the explicit
assumption that the inputs are images
• Regular neural networks do not scale well against images
• E.g. CIFAR-10 images are 32x32x3 (32 width, 32 height, 3 color channels) =
3072 weights – somewhat manageable
• A larger image of 200x200x3 = 120,000 weights
• CNNs have neurons arranged in 3D: width, height, depth.
• Neurons in a layer will only be connected to a small region of the layer before
it, i.e. NOT all of the neurons in a fully-connected manner.
• Final output layer for CIFAR-10 is 1x1x10 as we will reduce the full image into
a single vector of class scores, arranged along the depth dimension
CNNs/ConvNets
Regular 3-layer neural
network
• ConvNet arranges neurons in 3 dimensions
• 3D input results in 3D output
Source: https://cs231n.github.io/convolutional-networks/
Convolutional Neural Networks
[Pipeline diagram: input → Convolution (32 filters, 28 x 28) → Convolution (64 filters, 28 x 28) → Subsampling, stride (2,2) (14 x 14) → Dropout → Fully Connected → Dropout → class scores 0–9; the convolution and subsampling stages perform Feature Extraction, the fully connected stages perform Classification]
Convolutional Neural Networks
[Same pipeline diagram, highlighting the input]
Input
Pixel values of 32x32x3: 32 width, 32 height, 3 color channels (RGB)
Convolutional Neural Networks
[Same pipeline diagram, highlighting the convolution layers]
Convolutions
Compute the output of neurons connected to a small local region (a dot product between their weights and that region). If we use 32 filters of size 5x5, the output volume is 28x28x32.
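A quick Keras sketch (TensorFlow backend assumed) confirming the shape arithmetic: 32 filters of size 5x5 over a 32x32x3 input with no padding produce a 28x28x32 output volume:

```python
from tensorflow import keras
from tensorflow.keras import layers

# "valid" padding: output spatial size = 32 - 5 + 1 = 28
inputs = keras.Input(shape=(32, 32, 3))
conv = layers.Conv2D(filters=32, kernel_size=(5, 5), padding="valid")(inputs)
print(conv.shape)  # (None, 28, 28, 32)
```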
Convolution: Kernel = Filter
During the forward pass, we slide (i.e. convolve) each filter across
the width and height of the image, computing dot products.
Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
kernel = filter = feature detector
Convolution: Local Connectivity
Connect neurons to only a
small local region as it is
impractical to connect to all
neurons. Depth of filter =
depth of input volume.
Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
Convolution: Local Connectivity
Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
Source: https://cs231n.github.io/convolutional-networks/
The neurons still compute a dot product of their weights with the input followed by a non-linearity,
but their connectivity is now restricted to be local spatially.
Convolution Goal
The goal is to learn an entire set of filters in each CONV layer, with
each filter producing a 2D activation map (e.g. 6 maps for 6 filters)
Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
Convolution Example with 8 filters
Source: https://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html
Convolution: 32x32 to 28x28 using 5x5 filter
Taking a stride…
[Diagram: a 3 x 3 filter sliding over a 7 x 7 input with stride 1, stride 2, and stride 3; stride 3 does not fit!]
Output Size: (N – F) / stride + 1
e.g. N = 7, F = 3:
• Stride 1 → (7 – 3)/1 + 1 = 5
• Stride 2 → (7 – 3)/2 + 1 = 3
• Stride 3 → (7 – 3)/3 + 1 = 2.33 (doh!)
Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
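A small helper function (hypothetical, not from the slides) that applies this formula and flags strides that do not fit:

```python
def conv_output_size(n, f, stride):
    """Spatial output size for an n x n input and f x f filter: (N - F) / stride + 1."""
    out = (n - f) / stride + 1
    if out != int(out):
        raise ValueError(f"stride {stride} does not fit: output would be {out:.2f}")
    return int(out)

print(conv_output_size(7, 3, 1))  # 5
print(conv_output_size(7, 3, 2))  # 3
print(conv_output_size(7, 3, 3))  # ValueError: output would be 2.33
```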
Padding
[Diagram: a 7 x 7 input with a border of zeros, one pixel wide for the 3 x 3 filter and two pixels wide for the 5 x 5 filter]
Zero pad the image border to preserve size spatially:
• Input: 7 x 7, Filter: 3 x 3, Stride = 1, Zero Pad = 1 → Output: 7 x 7
• Input: 7 x 7, Filter: 5 x 5, Stride = 1, Zero Pad = 2 → Output: 7 x 7
Common Practice:
• Stride = 1
• Filter size: F x F
• Zero Padding: (F – 1)/2
Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
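In Keras this common practice corresponds to padding="same" (padding="valid" applies no zero padding); a minimal sketch:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Zero padding of (F - 1)/2 with stride 1 preserves spatial size
inputs = keras.Input(shape=(7, 7, 1))
same = layers.Conv2D(1, (3, 3), strides=1, padding="same")(inputs)
print(same.shape)  # (None, 7, 7, 1) -- size preserved
```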
Calculate Output Size
[Diagram: a 32 x 32 input grid]
Input: 32 x 32 x 3
5 5x5 filters
Stride = 1
Padding = 2

Output Volume Size:
(Input + 2 × Padding − Filter) / Stride + 1 = (32 + 2 × 2 − 5) / 1 + 1 = 32
→ 32 x 32 x 5

Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
Calculate Output Size
[Diagram: a 32 x 32 input grid]
Input: 32 x 32 x 3
Stride = 1
Padding = 2

Number of Parameters:
(Filter × Filter × Depth + 1) × (# of Filters) = (5 × 5 × 3 + 1) × 5 = 380

Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
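A Keras sketch that verifies this parameter count:

```python
from tensorflow import keras
from tensorflow.keras import layers

# (5*5*3 weights + 1 bias) per filter, times 5 filters = 380 parameters
model = keras.Sequential([
    layers.Conv2D(5, (5, 5), padding="same", input_shape=(32, 32, 3)),
])
print(model.count_params())  # 380
```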
Convolutional Neural Networks
[Same pipeline diagram, highlighting the activation step]
Activation Function (ReLU)
Apply an elementwise non-linearity (ReLU) to the convolution output; this leaves the volume size unchanged (e.g. still 28x28x32 for 32 5x5 filters).
ReLU Step (Local Connectivity)
Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
Source: https://cs231n.github.io/convolutional-networks/
The neurons still compute a dot product of their weights with the input followed by a non-linearity,
but their connectivity is now restricted to be local spatially.
Convolutional Neural Networks
[Same pipeline diagram, highlighting the subsampling layer]
Pooling
Perform a downsampling operation along the spatial dimensions (width, height), resulting in a reduced volume, e.g. 14x14x64.
Pooling = subsampling = reduce image size
Source: https://cs231n.github.io/convolutional-networks/#pool
• Commonly insert pooling layers between successive convolution layers
• Reduces size spatially
• Reduces the number of parameters
• Minimizes likelihood of overfitting
Pooling common practices
• Input: W1 x H1 x D1
• Output: W2 x H2 x D2, where
  W2 = (W1 − F)/S + 1, H2 = (H1 − F)/S + 1, D2 = D1
• Typically {F = 2, S = 2} or {F = 3, S = 2} (overlap pooling)
• Pooling with larger receptive fields (F) is too destructive
Source: https://cs231n.github.io/convolutional-networks/#pool
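A Keras sketch of the common {F = 2, S = 2} setting, which halves width and height while leaving depth unchanged:

```python
from tensorflow import keras
from tensorflow.keras import layers

# (28 - 2)/2 + 1 = 14 in each spatial dimension; depth D2 = D1
inputs = keras.Input(shape=(28, 28, 64))
pooled = layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(inputs)
print(pooled.shape)  # (None, 14, 14, 64)
```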
Stride, Pooling, Oh my!
• Smaller strides = larger output
• Larger strides = smaller output (fewer overlaps)
  • Less memory required (i.e. smaller volume)
  • Minimizes overfitting
• Potentially use larger strides in convolution layers to reduce the number of pooling layers
  • Larger strides = smaller output, reducing spatial size à la pooling
  • ImageNet 2015 winner ResNet has only two pooling layers
Convolutional Neural Networks
[Same pipeline diagram, highlighting the fully connected layers]
Fully Connected
Neurons in a fully connected layer have full connections to all activations in the previous layer, as seen in regular Neural Networks.
Convolutional Neural Networks Layers
[Full pipeline diagram once more: Convolution (32 filters, 28 x 28) → Convolution (64 filters, 28 x 28) → Subsampling, stride (2,2) (14 x 14) → Dropout → Fully Connected → Dropout → class scores 0–9]
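Putting the whole pipeline together, here is a minimal Keras (TensorFlow backend) sketch in the spirit of this diagram; the dropout rates and dense-layer width are hypothetical, since the slides do not specify them:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, (5, 5), padding="same", activation="relu",
                  input_shape=(28, 28, 1)),                      # -> 28x28x32
    layers.Conv2D(64, (5, 5), padding="same", activation="relu"),  # -> 28x28x64
    layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),       # -> 14x14x64
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),                      # class scores 0-9
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```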
https://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html
ConvNetJS MNIST Demo
CNN Architectures
LeNet-5
• Introduced by Yann LeCun: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
• Useful for recognizing single object
images
• Used for handwritten digits
recognition
AlexNet
• Introduced by Krizhevsky, Sutskever and Hinton
• 1000-class object recognition
• 60 million parameters, 650k neurons, 1.1 billion computation units in a forward pass
https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
Inception
https://arxiv.org/pdf/1409.4842.pdf
• Inception takes its name from the movie’s title
• Built from many repeated structures called inception modules
• Winner of the 1000-class ILSVRC 2014 classification challenge
VGG
• Introduced by Simonyan and Zisserman in 2014
• Named VGG after the Visual Geometry Group at Oxford
• 3x3 convolution (conv3) layers stacked on top of each other
• The deepest variant: VGG19 – 19 layers in the network
ResNet
• Introduced by He, Zhang, Ren, and Sun from Microsoft Research
• Uses residual learning as a building block, where the identity is propagated through the network along with detected features
• Won ILSVRC 2015
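As a sketch of that idea, a basic residual block in Keras (illustrative only, not ResNet’s exact architecture; it assumes the input depth matches the filter count so the shapes line up):

```python
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Basic residual block: the identity (shortcut) is added back to the
    detected features, so the block learns a residual F(x) + x."""
    shortcut = x
    y = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, (3, 3), padding="same")(y)
    y = layers.Add()([y, shortcut])  # propagate the identity
    return layers.Activation("relu")(y)

inputs = keras.Input(shape=(32, 32, 64))
outputs = residual_block(inputs)
print(outputs.shape)  # (None, 32, 32, 64)
```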
Neurons … Activate!
DEMO
I’d like to thank…
Great References
• Andrej Karpathy’s ConvNetJS MNIST Demo
• What is back propagation in neural networks?
• CS231n: Convolutional Neural Networks for Visual Recognition
• Syllabus and Slides | Course Notes | YouTube
• With particular focus on CS231n Lecture 7: Convolutional Neural Networks
• Neural Networks and Deep Learning
• TensorFlow
Great References
• Deep Visualization Toolbox
• Back Propagation with TensorFlow
• TensorFrames: Google TensorFlow with Apache Spark
• Integrating deep learning libraries with Apache Spark
• Build, Scale, and Deploy Deep Learning Pipelines with Ease
Attribution
Tomek Drabas
Brooke Wenig
Timothee Hunter
Cyrielle Simeone
Q&A
What’s next?
State Of The Art Deep Learning On Apache Spark™
October 31, 2018 | 09:00 PDT
https://dbricks.co/2NqogIp
More Related Content

What's hot

Convolution Neural Network (CNN)
Convolution Neural Network (CNN)Convolution Neural Network (CNN)
Convolution Neural Network (CNN)
Suraj Aavula
 
Overview of Convolutional Neural Networks
Overview of Convolutional Neural NetworksOverview of Convolutional Neural Networks
Overview of Convolutional Neural Networks
ananth
 
Introduction to Recurrent Neural Network
Introduction to Recurrent Neural NetworkIntroduction to Recurrent Neural Network
Introduction to Recurrent Neural Network
Knoldus Inc.
 
Deep Learning - Overview of my work II
Deep Learning - Overview of my work IIDeep Learning - Overview of my work II
Deep Learning - Overview of my work II
Mohamed Loey
 
Quantum neural network
Quantum neural networkQuantum neural network
Quantum neural network
surat murthy
 
Understanding Convolutional Neural Networks
Understanding Convolutional Neural NetworksUnderstanding Convolutional Neural Networks
Understanding Convolutional Neural Networks
Jeremy Nixon
 
Introduction to Convolutional Neural Networks
Introduction to Convolutional Neural NetworksIntroduction to Convolutional Neural Networks
Introduction to Convolutional Neural Networks
Hannes Hapke
 
Convolutional Neural Network and Its Applications
Convolutional Neural Network and Its ApplicationsConvolutional Neural Network and Its Applications
Convolutional Neural Network and Its Applications
Kasun Chinthaka Piyarathna
 
Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep ...
Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep ...Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep ...
Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep ...
Simplilearn
 
Activation function
Activation functionActivation function
Activation function
RakshithGowdakodihal
 
Deep neural networks
Deep neural networksDeep neural networks
Deep neural networks
Si Haem
 
Neural networks and deep learning
Neural networks and deep learningNeural networks and deep learning
Neural networks and deep learning
Jörgen Sandig
 
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Edureka!
 
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Simplilearn
 
Resnet
ResnetResnet
Cnn
CnnCnn
Convolutional Neural Networks
Convolutional Neural NetworksConvolutional Neural Networks
Convolutional Neural Networks
Ashray Bhandare
 
PyTorch Introduction
PyTorch IntroductionPyTorch Introduction
PyTorch Introduction
Yash Kawdiya
 
Python for Image Understanding: Deep Learning with Convolutional Neural Nets
Python for Image Understanding: Deep Learning with Convolutional Neural NetsPython for Image Understanding: Deep Learning with Convolutional Neural Nets
Python for Image Understanding: Deep Learning with Convolutional Neural Nets
Roelof Pieters
 

What's hot (20)

Convolution Neural Network (CNN)
Convolution Neural Network (CNN)Convolution Neural Network (CNN)
Convolution Neural Network (CNN)
 
Overview of Convolutional Neural Networks
Overview of Convolutional Neural NetworksOverview of Convolutional Neural Networks
Overview of Convolutional Neural Networks
 
Deep Learning
Deep LearningDeep Learning
Deep Learning
 
Introduction to Recurrent Neural Network
Introduction to Recurrent Neural NetworkIntroduction to Recurrent Neural Network
Introduction to Recurrent Neural Network
 
Deep Learning - Overview of my work II
Deep Learning - Overview of my work IIDeep Learning - Overview of my work II
Deep Learning - Overview of my work II
 
Quantum neural network
Quantum neural networkQuantum neural network
Quantum neural network
 
Understanding Convolutional Neural Networks
Understanding Convolutional Neural NetworksUnderstanding Convolutional Neural Networks
Understanding Convolutional Neural Networks
 
Introduction to Convolutional Neural Networks
Introduction to Convolutional Neural NetworksIntroduction to Convolutional Neural Networks
Introduction to Convolutional Neural Networks
 
Convolutional Neural Network and Its Applications
Convolutional Neural Network and Its ApplicationsConvolutional Neural Network and Its Applications
Convolutional Neural Network and Its Applications
 
Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep ...
Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep ...Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep ...
Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep ...
 
Activation function
Activation functionActivation function
Activation function
 
Deep neural networks
Deep neural networksDeep neural networks
Deep neural networks
 
Neural networks and deep learning
Neural networks and deep learningNeural networks and deep learning
Neural networks and deep learning
 
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
 
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
 
Resnet
ResnetResnet
Resnet
 
Cnn
CnnCnn
Cnn
 
Convolutional Neural Networks
Convolutional Neural NetworksConvolutional Neural Networks
Convolutional Neural Networks
 
PyTorch Introduction
PyTorch IntroductionPyTorch Introduction
PyTorch Introduction
 
Python for Image Understanding: Deep Learning with Convolutional Neural Nets
Python for Image Understanding: Deep Learning with Convolutional Neural NetsPython for Image Understanding: Deep Learning with Convolutional Neural Nets
Python for Image Understanding: Deep Learning with Convolutional Neural Nets
 

Similar to Applying your Convolutional Neural Networks

DL (v2).pptx
DL (v2).pptxDL (v2).pptx
DL (v2).pptx
FKKBWITTAINAN
 
Matrix Factorization
Matrix FactorizationMatrix Factorization
Matrix Factorization
Yusuke Yamamoto
 
Training Neural Networks
Training Neural NetworksTraining Neural Networks
Training Neural Networks
Databricks
 
Eye deep
Eye deepEye deep
Eye deep
sveitser
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
Databricks
 
Paris data-geeks-2013-03-28
Paris data-geeks-2013-03-28Paris data-geeks-2013-03-28
Paris data-geeks-2013-03-28
Ted Dunning
 
Challenging Web-Scale Graph Analytics with Apache Spark with Xiangrui Meng
Challenging Web-Scale Graph Analytics with Apache Spark with Xiangrui MengChallenging Web-Scale Graph Analytics with Apache Spark with Xiangrui Meng
Challenging Web-Scale Graph Analytics with Apache Spark with Xiangrui Meng
Databricks
 
Challenging Web-Scale Graph Analytics with Apache Spark
Challenging Web-Scale Graph Analytics with Apache SparkChallenging Web-Scale Graph Analytics with Apache Spark
Challenging Web-Scale Graph Analytics with Apache Spark
Databricks
 
ACM 2013-02-25
ACM 2013-02-25ACM 2013-02-25
ACM 2013-02-25
Ted Dunning
 
CS 354 More Graphics Pipeline
CS 354 More Graphics PipelineCS 354 More Graphics Pipeline
CS 354 More Graphics Pipeline
Mark Kilgard
 
(CMP305) Deep Learning on AWS Made EasyCmp305
(CMP305) Deep Learning on AWS Made EasyCmp305(CMP305) Deep Learning on AWS Made EasyCmp305
(CMP305) Deep Learning on AWS Made EasyCmp305
Amazon Web Services
 
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflowNVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA Taiwan
 
Structured Forests for Fast Edge Detection [Paper Presentation]
Structured Forests for Fast Edge Detection [Paper Presentation]Structured Forests for Fast Edge Detection [Paper Presentation]
Structured Forests for Fast Edge Detection [Paper Presentation]
Mohammad Shaker
 
Tutorial on Theory and Application of Generative Adversarial Networks
Tutorial on Theory and Application of Generative Adversarial NetworksTutorial on Theory and Application of Generative Adversarial Networks
Tutorial on Theory and Application of Generative Adversarial Networks
MLReview
 
Machine Learning, Deep Learning and Data Analysis Introduction
Machine Learning, Deep Learning and Data Analysis IntroductionMachine Learning, Deep Learning and Data Analysis Introduction
Machine Learning, Deep Learning and Data Analysis Introduction
Te-Yen Liu
 
Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo...
Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo...Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo...
Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo...
MLconf
 
Image Classification (20230411)
Image Classification (20230411)Image Classification (20230411)
Image Classification (20230411)
FEG
 
Panoramic Video in Environmental Monitoring Software Development and Applica...
Panoramic Video in Environmental Monitoring Software Development and Applica...Panoramic Video in Environmental Monitoring Software Development and Applica...
Panoramic Video in Environmental Monitoring Software Development and Applica...
pycontw
 
Deep Learning meetup
Deep Learning meetupDeep Learning meetup
Deep Learning meetup
Ivan Goloskokovic
 
Introduction to Applied Machine Learning
Introduction to Applied Machine LearningIntroduction to Applied Machine Learning
Introduction to Applied Machine Learning
SheilaJimenezMorejon
 

Similar to Applying your Convolutional Neural Networks (20)

DL (v2).pptx
DL (v2).pptxDL (v2).pptx
DL (v2).pptx
 
Matrix Factorization
Matrix FactorizationMatrix Factorization
Matrix Factorization
 
Training Neural Networks
Training Neural NetworksTraining Neural Networks
Training Neural Networks
 
Eye deep
Eye deepEye deep
Eye deep
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
 
Paris data-geeks-2013-03-28
Paris data-geeks-2013-03-28Paris data-geeks-2013-03-28
Paris data-geeks-2013-03-28
 
Challenging Web-Scale Graph Analytics with Apache Spark with Xiangrui Meng
Challenging Web-Scale Graph Analytics with Apache Spark with Xiangrui MengChallenging Web-Scale Graph Analytics with Apache Spark with Xiangrui Meng
Challenging Web-Scale Graph Analytics with Apache Spark with Xiangrui Meng
 
Challenging Web-Scale Graph Analytics with Apache Spark
Challenging Web-Scale Graph Analytics with Apache SparkChallenging Web-Scale Graph Analytics with Apache Spark
Challenging Web-Scale Graph Analytics with Apache Spark
 
ACM 2013-02-25
ACM 2013-02-25ACM 2013-02-25
ACM 2013-02-25
 
CS 354 More Graphics Pipeline
CS 354 More Graphics PipelineCS 354 More Graphics Pipeline
CS 354 More Graphics Pipeline
 
(CMP305) Deep Learning on AWS Made EasyCmp305
(CMP305) Deep Learning on AWS Made EasyCmp305(CMP305) Deep Learning on AWS Made EasyCmp305
(CMP305) Deep Learning on AWS Made EasyCmp305
 
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflowNVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
 
Structured Forests for Fast Edge Detection [Paper Presentation]
Structured Forests for Fast Edge Detection [Paper Presentation]Structured Forests for Fast Edge Detection [Paper Presentation]
Structured Forests for Fast Edge Detection [Paper Presentation]
 
Tutorial on Theory and Application of Generative Adversarial Networks
Tutorial on Theory and Application of Generative Adversarial NetworksTutorial on Theory and Application of Generative Adversarial Networks
Tutorial on Theory and Application of Generative Adversarial Networks
 
Machine Learning, Deep Learning and Data Analysis Introduction
Machine Learning, Deep Learning and Data Analysis IntroductionMachine Learning, Deep Learning and Data Analysis Introduction
Machine Learning, Deep Learning and Data Analysis Introduction
 
Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo...
Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo...Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo...
Hanjun Dai, PhD Student, School of Computational Science and Engineering, Geo...
 
Image Classification (20230411)
Image Classification (20230411)Image Classification (20230411)
Image Classification (20230411)
 
Panoramic Video in Environmental Monitoring Software Development and Applica...
Panoramic Video in Environmental Monitoring Software Development and Applica...Panoramic Video in Environmental Monitoring Software Development and Applica...
Panoramic Video in Environmental Monitoring Software Development and Applica...
 
Deep Learning meetup
Deep Learning meetupDeep Learning meetup
Deep Learning meetup
 
Introduction to Applied Machine Learning
Introduction to Applied Machine LearningIntroduction to Applied Machine Learning
Introduction to Applied Machine Learning
 

More from Databricks

DW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptx
Databricks
 
Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1
Databricks
 
Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2
Databricks
 
Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2
Databricks
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4
Databricks
 
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
Databricks
 
Democratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized PlatformDemocratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized Platform
Databricks
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data Science
Databricks
 
Why APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML MonitoringWhy APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML Monitoring
Databricks
 
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch FixThe Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
Databricks
 
Stage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI IntegrationStage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI Integration
Databricks
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorchSimplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Databricks
 
Scaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on KubernetesScaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on Kubernetes
Databricks
 
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark PipelinesScaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Databricks
 
Sawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature AggregationsSawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature Aggregations
Databricks
 
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen SinkRedis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Databricks
 
Re-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and SparkRe-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and Spark
Databricks
 
Raven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction QueriesRaven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction Queries
Databricks
 
Processing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache SparkProcessing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache Spark
Databricks
 
Massive Data Processing in Adobe Using Delta Lake
Massive Data Processing in Adobe Using Delta LakeMassive Data Processing in Adobe Using Delta Lake
Massive Data Processing in Adobe Using Delta Lake
Databricks
 

More from Databricks (20)

DW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptx
 
Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1
 
Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2
 
Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4
 
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
 
Democratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized PlatformDemocratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized Platform
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data Science
 
Why APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML MonitoringWhy APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML Monitoring
 
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch FixThe Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
 
Stage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI IntegrationStage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI Integration
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorchSimplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorch
 
Scaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on KubernetesScaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on Kubernetes
 
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark PipelinesScaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
 
Sawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature AggregationsSawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature Aggregations
 
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen SinkRedis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
 
Re-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and SparkRe-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and Spark
 
Raven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction QueriesRaven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction Queries
 
Processing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache SparkProcessing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache Spark
 
Massive Data Processing in Adobe Using Delta Lake
Massive Data Processing in Adobe Using Delta LakeMassive Data Processing in Adobe Using Delta Lake
Massive Data Processing in Adobe Using Delta Lake
 

Recently uploaded

一比一原版(RUG毕业证)格罗宁根大学毕业证成绩单
一比一原版(RUG毕业证)格罗宁根大学毕业证成绩单一比一原版(RUG毕业证)格罗宁根大学毕业证成绩单
一比一原版(RUG毕业证)格罗宁根大学毕业证成绩单
vcaxypu
 
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Subhajit Sahu
 
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
axoqas
 
Tabula.io Cheatsheet: automate your data workflows
Tabula.io Cheatsheet: automate your data workflowsTabula.io Cheatsheet: automate your data workflows
Tabula.io Cheatsheet: automate your data workflows
alex933524
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
ukgaet
 
Ch03-Managing the Object-Oriented Information Systems Project a.pdf
Ch03-Managing the Object-Oriented Information Systems Project a.pdfCh03-Managing the Object-Oriented Information Systems Project a.pdf
Ch03-Managing the Object-Oriented Information Systems Project a.pdf
haila53
 
Best best suvichar in gujarati english meaning of this sentence as Silk road ...
Best best suvichar in gujarati english meaning of this sentence as Silk road ...Best best suvichar in gujarati english meaning of this sentence as Silk road ...
Best best suvichar in gujarati english meaning of this sentence as Silk road ...
AbhimanyuSinha9
 
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Subhajit Sahu
 
一比一原版(NYU毕业证)纽约大学毕业证成绩单
一比一原版(NYU毕业证)纽约大学毕业证成绩单一比一原版(NYU毕业证)纽约大学毕业证成绩单
一比一原版(NYU毕业证)纽约大学毕业证成绩单
ewymefz
 
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project PresentationPredicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Boston Institute of Analytics
 
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
nscud
 
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
John Andrews
 
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单
ewymefz
 
Empowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptxEmpowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptx
benishzehra469
 
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
yhkoc
 
Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
Opendatabay
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
ewymefz
 
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
ewymefz
 
Q1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year ReboundQ1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year Rebound
Oppotus
 
一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单
ewymefz
 

Recently uploaded (20)

一比一原版(RUG毕业证)格罗宁根大学毕业证成绩单
一比一原版(RUG毕业证)格罗宁根大学毕业证成绩单一比一原版(RUG毕业证)格罗宁根大学毕业证成绩单
一比一原版(RUG毕业证)格罗宁根大学毕业证成绩单
 
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
 
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
 
Tabula.io Cheatsheet: automate your data workflows
Tabula.io Cheatsheet: automate your data workflowsTabula.io Cheatsheet: automate your data workflows
Tabula.io Cheatsheet: automate your data workflows
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
 
Ch03-Managing the Object-Oriented Information Systems Project a.pdf
Ch03-Managing the Object-Oriented Information Systems Project a.pdfCh03-Managing the Object-Oriented Information Systems Project a.pdf
Ch03-Managing the Object-Oriented Information Systems Project a.pdf
 
Best best suvichar in gujarati english meaning of this sentence as Silk road ...
Best best suvichar in gujarati english meaning of this sentence as Silk road ...Best best suvichar in gujarati english meaning of this sentence as Silk road ...
Best best suvichar in gujarati english meaning of this sentence as Silk road ...
 
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
 
一比一原版(NYU毕业证)纽约大学毕业证成绩单
一比一原版(NYU毕业证)纽约大学毕业证成绩单一比一原版(NYU毕业证)纽约大学毕业证成绩单
一比一原版(NYU毕业证)纽约大学毕业证成绩单
 
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project PresentationPredicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
 
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
 
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
 
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单
 
Empowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptxEmpowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptx
 
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
 
Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
 
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
 
Q1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year ReboundQ1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year Rebound
 
一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单
 

Applying your Convolutional Neural Networks

  • 2. Logistics • We can’t hear you… • Recording will be available… • Slides will be available… • Code samples and notebooks will be available… • Queue up Questions…
  • 3. Accelerate innovation by unifying data science, engineering and business • Founded by the original creators of Apache Spark • Contributes 75% of the open source code, 10x more than any other company • Trained 100k+ Spark users on the Databricks platform VISION WHO WE ARE Unified Analytics Platform powered by Apache Spark™PRODUCT
  • 4. Aboutourspeaker Denny Lee Technical Product Marketing Manager Former: • Senior Director of Data Sciences Engineering at SAP Concur • Principal Program Manager at Microsoft • Azure Cosmos DB Engineering Spark and Graph Initiatives • Isotope Incubation Team (currently known as HDInsight) • Bing’s Audience Insights Team • Yahoo!’s 24TB Analysis Services cube
  • 5. DeepLearningFundamentalsSeries This is a three-part series: • Introduction to Neural Networks • Training Neural Networks • Applying your Convolutional Neural Network This series will be make use of Keras (TensorFlow backend) but as it is a fundamentals series, we are focusing primarily on the concepts.
  • 6. CurrentSession:ApplyingNeuralNetworks • Diving further into CNNs • CNN Architectures • Convolutions at Work!
  • 9. Cost function Source: https://bit.ly/2IoAGzL For this linear regression example, to determine the best (slope of the line) for we can calculate the cost function, such as Mean Square Error, Mean absolute error, Mean bias error, SVM Loss, etc. For this example, we’ll use sum of squared absolute differences y = x ⋅ p p cost = ∑ |t − y|2
  • 10. Gradient Descent Optimization Source: https://bit.ly/2IoAGzL
  • 11. Small Learning Rate Source: https://bit.ly/2IoAGzL
  • 12. Small Learning Rate Source: https://bit.ly/2IoAGzL
  • 13. Small Learning Rate Source: https://bit.ly/2IoAGzL
  • 14. Small Learning Rate Source: https://bit.ly/2IoAGzL
  • 15. Hyperparameters:ActivationFunctions? • Good starting point: ReLU • Note many neural networks samples: Keras MNIST,  TensorFlow CIFAR10 Pruning, etc. • Note that each activation function has its own strengths and weaknesses.  A good quote on activation functions from CS231N summarizes the choice well: “What neuron type should I use?” Use the ReLU non-linearity, be careful with your learning rates and possibly monitor the fraction of “dead” units in a network. If this concerns you, give Leaky ReLU or Maxout a try. Never use sigmoid. Try tanh, but expect it to work worse than ReLU/ Maxout.
  • 16. SimplifiedTwo-LayerANN Input Hidden Output x1 → Apres Ski′er x2 → Shredder h1 → weather h2 → powder h3 → driving Do I snowboard this weekend?
  • 17. SimplifiedTwo-LayerANN 1 1 0.8 0.2 0.7 0.6 0.9 0.1 0.8 0.75 0.69 h1 = 𝜎(1𝑥0.8 + 1𝑥0.6) = 0.80 h2 = 𝜎(1𝑥0.2 + 1𝑥0.9) = 0.75 h3 = 𝜎(1𝑥0.7 + 1𝑥0.1) = 0.69
  • 20. Backpropagation Input Hidden Output 0.85 • Backpropagation: calculate the gradient of the cost function in a neural network • Used by gradient descent optimization algorithm to adjust weight of neurons • Also known as backward propagation of errors as the error is calculated and distributed back through the network of layers 0.10
  • 21. Sigmoidfunction(continued) Output is not zero-centered: During gradient descent, if all values are positive then during backpropagation the weights will become all positive or all negative creating zig zagging dynamics. Source: https://bit.ly/2IoAGzL
  • 22. LearningRateCallouts • Too small, it may take too long to get minima • Too large, it may skip the minima altogether
  • 23. WhichOptimizer? “In practice Adam is currently recommended as the default algorithm to use, and often works slightly better than RMSProp. However, it is often also worth trying SGD+Nesterov Momentum as an alternative..”  Andrej Karpathy, et al, CS231n Source: https://goo.gl/2da4WY Comparison of Adam to Other Optimization Algorithms Training a Multilayer Perceptron
 Taken from Adam: A Method for Stochastic Optimization, 2015.
  • 24. Optimizationonlosssurfacecontours Adaptive algorithms converge quickly and find the right direction for the parameters. In comparison, SGD is slow Momentum-based methods overshoot Source: http://cs231n.github.io/neural-networks-3/#hyper Image credit: Alec Radford
  • 25. Optimizationonsaddlepoint Notice how SGD gets stuck near the top Meanwhile adaptive techniques optimize the fastest Source: http://cs231n.github.io/neural-networks-3/#hyper Image credit: Alec Radford
  • 26. GoodReferences • Suki Lau's Learning Rate Schedules and Adaptive Learning Rate Methods for Deep Learning • CS23n Convolutional Neural Networks for Visual Recognition • Fundamentals of Deep Learning • ADADELTA: An Adaptive Learning Rate Method • Gentle Introduction to the Adam Optimization Algorithm for Deep Learning
  • 28. ConvolutionalNeuralNetworks • Similar to Artificial Neural Networks but CNNs (or ConvNets) make explicit assumptions that the input are images • Regular neural networks do not scale well against images • E.g. CIFAR-10 images are 32x32x3 (32 width, 32 height, 3 color channels) = 3072 weights – somewhat manageable • A larger image of 200x200x3 = 120,000 weights • CNNs have neurons arranged in 3D: width, height, depth. • Neurons in a layer will only be connected to a small region of the layer before it, i.e. NOT all of the neurons in a fully-connected manner. • Final output layer for CIFAR-10 is 1x1x10 as we will reduce the full image into a single vector of class scores, arranged along the depth dimension
  • 29. CNNs/ConvNets Regular 3-layer neural network • ConvNet arranges neurons in 3 dimensions • 3D input results in 3D output Source: https://cs231n.github.io/convolutional-networks/
  • 30. Convolutional Neural Networks 28 x 28 28 x 28 14 x 14 Convolution 32 filters Convolution 64 filters Subsampling Stride (2,2) Feature Extraction Classification 0 1 8 9 FullyConnected Dropout Dropout
  • 31. Convolutional Neural Networks 28 x 28 28 x 28 14 x 14 Convolution 32 filters Convolution 64 filters Subsampling Stride (2,2) Feature Extraction Classification 0 1 8 9 FullyConnected Dropout Dropout Input Pixel value of 32x32x3: 32 width, 32 height, 3 color channels (RGB)
  • 32. Convolutional Neural Networks 28 x 28 28 x 28 14 x 14 Convolution 32 filters Convolution 64 filters Subsampling Stride (2,2) Feature Extraction Classification 0 1 8 9 FullyConnected Dropout Dropout Convolutions Compute output of neurons (dot product between their weights) connected to a small local region. If we use 32 filters, then the output is 28x28x32 (using 5x5 filter)
  • 33. Convolution:Kernel=Filter During forward pass, we slide over the image spatially (i.e. convolve) each filter across the width and height, computing dot products. Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution kernel = filter = feature detector
  • 34. Convolution:LocalConnectivity Connect neurons to only a small local region as it is impractical to connect to all neurons. Depth of filter = depth of input volume. Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
  • 35. Convolution:LocalConnectivity Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution Source: https://cs231n.github.io/convolutional-networks/ The neurons still compute a dot product of their weights with the input followed by a non-linearity, but their connectivity is now restricted to be local spatially.
  • 36. ConvolutionGoal Goal is to create an entire set of filters in each CONV layer and produce 2D activation maps (e.g. for 6 filters) Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
  • 39. Takingastride… [Diagram: a 3x3 filter sliding over a 7x7 input at stride 1, stride 2, and stride 3 – at stride 3 it does not fit!] Output size: (N − F) / stride + 1. E.g. N = 7, F = 3: stride 1 → 5; stride 2 → 3; stride 3 → 2.33 (doh!) Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
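The output-size rule is easy to sanity-check in a few lines of Python; this small helper (an illustrative sketch with assumed names) reproduces the N = 7, F = 3 example above, including the stride-3 misfit.

    def conv_output_size(n, f, stride):
        """Output size along one spatial dimension: (N - F) / stride + 1."""
        out = (n - f) / float(stride) + 1
        if out != int(out):
            raise ValueError("filter does not fit: got %.2f" % out)
        return int(out)

    print(conv_output_size(7, 3, 1))  # 5
    print(conv_output_size(7, 3, 2))  # 3
    print(conv_output_size(7, 3, 3))  # raises ValueError: got 2.33 (doh!)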
  • 40. Padding [Diagram: a 7x7 input bordered with zeros] Zero-pad the image border to preserve the spatial size: • Input: 7 x 7, filter: 3 x 3, stride = 1, zero pad = 1 → output: 7 x 7 • Input: 7 x 7, filter: 5 x 5, stride = 1, zero pad = 2 → output: 7 x 7 Common practice: • Stride = 1 • Filter size: F x F • Zero padding: (F − 1)/2 Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
  • 41. CalculateOutputSize [Diagram: a 32x32 input grid] Input: 32 x 32 x 3, five 5x5 filters, stride = 1, padding = 2. Output volume size: (Input + 2 x Padding − Filter) / Stride + 1 = (32 + 2 x 2 − 5) / 1 + 1 = 32, i.e. an output volume of 32 x 32 x 5 Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
  • 42. CalculateOutputSize (continued) Input: 32 x 32 x 3, five 5x5 filters, stride = 1, padding = 2. Number of parameters: (Filter x Filter x Depth + 1) x (# of filters) = (5 x 5 x 3 + 1) x 5 = 380 Source: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture5.pdf
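Both formulas can be wrapped into one helper; this illustrative Python sketch (the function name is an assumption) reproduces the 32 x 32 x 5 output volume and the 380-parameter count from the two slides above.

    def conv_layer_shapes(n, depth, f, num_filters, stride=1, pad=0):
        """Output size: (N + 2*Pad - F) / Stride + 1;
        parameters: (F * F * Depth + 1) * num_filters (the +1 is the bias)."""
        out = (n + 2 * pad - f) // stride + 1
        params = (f * f * depth + 1) * num_filters
        return (out, out, num_filters), params

    # The slides' example: 32x32x3 input, five 5x5 filters, stride 1, padding 2
    print(conv_layer_shapes(32, 3, 5, 5, stride=1, pad=2))
    # ((32, 32, 5), 380)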
  • 43. Convolutional Neural Networks (same architecture diagram) Activation function (ReLU): compute the output of neurons connected to a small local region (a dot product between their weights and that region), then apply the non-linearity. With 32 5x5 filters, the output is 28x28x32.
  • 44. ReLUStep(LocalConnectivity) The neurons still compute a dot product of their weights with the input, followed by a non-linearity (here, ReLU); their connectivity remains restricted to be local spatially. Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution Source: https://cs231n.github.io/convolutional-networks/
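ReLU itself is a one-liner applied elementwise to the activation map; a minimal NumPy sketch with made-up values:

    import numpy as np

    activation_map = np.array([[-1.2,  0.4],
                               [ 2.3, -0.7]])
    relu_output = np.maximum(0.0, activation_map)  # zero out negative responses
    print(relu_output)  # [[0.  0.4] [2.3 0. ]]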
  • 45. Convolutional Neural Networks (same architecture diagram) Pooling: perform a down-sampling operation along the spatial dimensions (w, h), resulting in a reduced volume, e.g. 14x14x64.
  • 46. Pooling=subsampling=reduceimagesize • Pooling layers are commonly inserted between successive convolution layers • Reduces the spatial size • Reduces the number of parameters • Minimizes the likelihood of overfitting Source: https://cs231n.github.io/convolutional-networks/#pool
  • 47. Poolingcommonpractices • Input: W1 x H1 x D1 • Output: W2 x H2 x D2, where W2 = (W1 − F)/S + 1, H2 = (H1 − F)/S + 1, D2 = D1 • Typically {F = 2, S = 2} or {F = 3, S = 2} (overlapping pooling) • Pooling with larger receptive fields (F) is too destructive Source: https://cs231n.github.io/convolutional-networks/#pool
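Applying those formulas in a small Python helper (an illustrative sketch) shows how the common settings shrink a 28x28x64 volume:

    def pool_output_shape(w1, h1, d1, f=2, s=2):
        """W2 = (W1 - F)/S + 1, H2 = (H1 - F)/S + 1, D2 = D1."""
        return (w1 - f) // s + 1, (h1 - f) // s + 1, d1

    print(pool_output_shape(28, 28, 64))            # (14, 14, 64) with {F = 2, S = 2}
    print(pool_output_shape(28, 28, 64, f=3, s=2))  # (13, 13, 64) overlapping pooling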
  • 48. Stride,Pooling,Ohmy! • Smaller strides = larger output • Larger strides = smaller output (fewer overlaps) • Less memory required (i.e. a smaller volume) • Minimizes overfitting • Potentially use larger strides in a convolution layer to reduce the number of pooling layers, since larger strides shrink the spatial size much like pooling does (see the sketch below) • The ImageNet 2015 winner, ResNet, has only two pooling layers
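A Keras sketch of that trade-off (illustrative, not from the slides): both layer stacks below halve the spatial size, one with a separate pooling layer and one with a strided convolution alone.

    from keras.layers import Conv2D, MaxPooling2D

    # Option 1: convolution followed by a separate pooling layer
    conv_then_pool = [
        Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu'),
        MaxPooling2D(pool_size=(2, 2)),
    ]

    # Option 2: a strided convolution halves the spatial size by itself,
    # removing the need for a pooling layer (the ResNet-style approach)
    strided_conv = [
        Conv2D(64, (3, 3), strides=(2, 2), padding='same', activation='relu'),
    ]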
  • 49. Convolutional Neural Networks (same architecture diagram) Fully connected: neurons in a fully connected layer have full connections to all activations in the previous layer, as in regular neural networks.
  • 50. Convolutional Neural Networks Layers (the complete pipeline once more: Convolution (32 filters) → Convolution (64 filters) → Subsampling, stride (2,2) → Dropout → Fully Connected → Dropout → Classification of digits 0–9)
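Putting the pieces together, here is a minimal Keras sketch of the depicted pipeline, assuming an MNIST-style 28x28x1 input; the filter counts and the (2,2) subsampling stride follow the diagram, while the kernel sizes, dropout rates, and dense-layer width are illustrative assumptions.

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

    model = Sequential([
        # Feature extraction
        Conv2D(32, (5, 5), padding='same', activation='relu',
               input_shape=(28, 28, 1)),                       # -> 28x28x32
        Conv2D(64, (5, 5), padding='same', activation='relu'), # -> 28x28x64
        MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),        # -> 14x14x64
        Dropout(0.25),
        # Classification
        Flatten(),
        Dense(128, activation='relu'),
        Dropout(0.5),
        Dense(10, activation='softmax'),                       # digits 0-9
    ])

    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])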
  • 53. LeNet-5 • Introduced by Yann LeCun: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf • Useful for recognizing single-object images • Used for handwritten digit recognition
  • 54. AlexNet • Introduced by Krizhevsky, Sutskever and Hinton • 1000-class object recognition • 60 million parameters, 650k neurons, 1.1 billion computation units in a forward pass https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
  • 55. Inception https://arxiv.org/pdf/1409.4842.pdf • Takes its name from the movie Inception • Built from many repeated structures called inception modules • Winner of the 1000-class ILSVRC 2014 competition
  • 56. VGG • Introduced by Simonyan and Zisserman in 2014 • Named VGG after the Visual Geometry Group at Oxford • 3x3 convolution (conv3) layers stacked on top of each other • The latest: VGG19 – 19 layers in the network
  • 57. ResNet • Introduced by He, Zhang, Ren, and Sun from Microsoft Research • Uses residual learning as a building block, where the identity is propagated through the network along with the detected features (see the sketch below) • Won ILSVRC 2015
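A minimal Keras sketch of a ResNet-style residual block (illustrative; the layer sizes and names are assumptions): the identity shortcut is added back to the block's output, so the stacked layers only need to learn a residual on top of it.

    from keras.layers import Input, Conv2D, BatchNormalization, Activation, add
    from keras.models import Model

    def residual_block(x, filters=64):
        shortcut = x                                    # the identity path
        y = Conv2D(filters, (3, 3), padding='same')(x)
        y = BatchNormalization()(y)
        y = Activation('relu')(y)
        y = Conv2D(filters, (3, 3), padding='same')(y)
        y = BatchNormalization()(y)
        y = add([y, shortcut])                          # propagate the identity
        return Activation('relu')(y)

    inputs = Input(shape=(32, 32, 64))
    model = Model(inputs, residual_block(inputs))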
  • 59. I’d like to thank…
  • 60. GreatReferences • Andrej Karpathy's ConvNetJS MNIST Demo • What is back propagation in neural networks? • CS231n: Convolutional Neural Networks for Visual Recognition • Syllabus and Slides | Course Notes | YouTube • With particular focus on CS231n Lecture 7: Convolutional Neural Networks • Neural Networks and Deep Learning • TensorFlow
  • 61. GreatReferences • Deep Visualization Toolbox • Back Propagation with TensorFlow • TensorFrames: Google TensorFlow with Apache Spark • Integrating deep learning libraries with Apache Spark • Build, Scale, and Deploy Deep Learning Pipelines with Ease
  • 63. Q&A
  • 64. What’s next? State Of The Art Deep Learning On Apache Spark™ October 31, 2018 | 09:00 PDT https://dbricks.co/2NqogIp