Deep Learning with Apache Spark
An Introduction
2
(credits: XKCD)
3
4 (credits: Google Research)
5
6 (credits: Google Research)
7
8
9
Machine Learning
Machine learning is a method of data analysis that automates analytical model
building. Using algorithms that iteratively learn from data, machine learning allows
computers to find hidden insights without being explicitly programmed where to look.
10
Deep Learning
● A family of Machine Learning algorithms
● They have been observed to outperform standard ML algorithms (regression, SVM, …) on problems such as image recognition, audio recognition, and NLP
11
Agenda
● Introduction
● A little bit of theory
● Distributed Neural Network with Apache Spark
● Example & Code
12
Perceptrons
● Invented in 1957 at the Cornell Aeronautical
Laboratory by Frank Rosenblatt
● An algorithm for supervised learning of
binary classifiers
● Takes n binary inputs and produces a single
binary output (the decision rule is sketched below)
● For each input xi there is a weight wi that
determines how relevant the input xi is to the
output
● b is the bias and defines the activation
threshold (credits for image: The Project Spot)
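A minimal Scala sketch of the decision rule described above; the names Perceptron, fire, inputs, weights and bias are illustrative and do not come from the slides:

object Perceptron {
  // Fires (outputs 1) when the weighted sum of the inputs plus the bias is positive.
  def fire(inputs: Seq[Double], weights: Seq[Double], bias: Double): Int = {
    val weightedSum = inputs.zip(weights).map { case (x, w) => x * w }.sum + bias
    if (weightedSum > 0) 1 else 0
  }
}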
13
Perceptrons
● Problem: a single perceptron can learn only linearly separable problems!
● XOR is not linearly separable
14
Towards Neural Networks: Perceptron layers
Solution: introduce more layers of perceptrons!
This 2-layer perceptron network actually solves XOR! (a concrete construction is sketched below)
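To make this concrete, here is a hedged sketch (reusing the hypothetical Perceptron.fire helper from above) in which an OR-like and a NAND-like perceptron feed an AND-like one, which is exactly the XOR function:

object XorNetwork {
  import Perceptron.fire
  // Layer 1 computes OR and NAND of the inputs; layer 2 ANDs the two results, yielding XOR.
  def xor(x1: Int, x2: Int): Int = {
    val inputs = Seq(x1.toDouble, x2.toDouble)
    val or   = fire(inputs, Seq(1.0, 1.0), bias = -0.5)
    val nand = fire(inputs, Seq(-1.0, -1.0), bias = 1.5)
    fire(Seq(or.toDouble, nand.toDouble), Seq(1.0, 1.0), bias = -1.5)
  }
}

For example, xor(1, 0) and xor(0, 1) return 1, while xor(0, 0) and xor(1, 1) return 0.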
15
Supervised learning
● Input: a training dataset with (manually) labeled data
● Output: a model that can predict new examples based on the existing
observations
● Cost function: a function that represents the error, i.e. how closely the prediction
of your NN approximates the expected output of the classification
● Many supervised learning algorithms are trained by minimizing the cost function
16
Training Neural Networks
We want the neural network that minimizes the cost (or error) function,
where a = a(w, b, x) is the output of the NN when x is the input (a common choice of cost function is shown below).
The challenge of training NNs is finding an algorithm for minimizing the cost function
that is computationally efficient.
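The cost function itself appears as an image on the original slide and is not reproduced here; a common choice (an assumption, not confirmed by the slides) is the quadratic cost, in LaTeX notation:

C(w, b) = \frac{1}{2n} \sum_{x} \lVert y(x) - a(w, b, x) \rVert^{2}

where y(x) is the desired output for training input x and n is the number of training examples.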
17
Training Neural Networks: Gradient Descent
An iterative optimization algorithm for finding the minimum of a function C(w)
● Initialize w
● Calculate the gradient ∇C(w)
● Update: w ← w - η∇C(w)
● Stop when a minimum is reached
η is the learning rate of the gradient descent (a minimal sketch of this loop follows below)
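A minimal Scala sketch of the loop above for a generic cost C(w); gradient, learningRate and tolerance are illustrative names that do not come from the slides:

// Plain (batch) gradient descent: step against the gradient until the update becomes negligible.
def gradientDescent(
    gradient: Vector[Double] => Vector[Double],   // computes the gradient of C at w
    init: Vector[Double],
    learningRate: Double = 0.1,                   // the learning rate eta
    tolerance: Double = 1e-6,
    maxIterations: Int = 10000): Vector[Double] = {
  var w = init
  var iterations = 0
  var converged = false
  while (!converged && iterations < maxIterations) {
    val g = gradient(w)
    val next = w.zip(g).map { case (wi, gi) => wi - learningRate * gi } // w <- w - eta * grad C(w)
    converged = next.zip(w).map { case (a, b) => math.abs(a - b) }.max < tolerance
    w = next
    iterations += 1
  }
  w
}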
18
Training Neural Networks: Gradient Descent
19
Training Neural Networks: Gradient Descent
● Problem: we have to calculate the gradients over the whole dataset for just one
update step!
● This is infeasible for high-dimensional problems, because there would be too
many partial derivatives to compute
● Complex Neural Networks can have billions of parameters!
20
Training Neural Networks: SGD
We introduce an improved version: mini-batch Stochastic Gradient Descent (a sketch follows below).
● Initialize w
● For each epoch, repeat the following steps:
● Randomly shuffle the training dataset
● For each mini-batch of size n of the training dataset, calculate the gradient w.r.t. the
mini-batch and perform the update
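A minimal Scala sketch of the procedure above, under the same assumptions as the gradient-descent sketch; gradientOn(batch, w) stands for the gradient of the cost restricted to one mini-batch (illustrative names only):

import scala.util.Random

def miniBatchSgd[A](
    data: Seq[A],                                            // the training dataset
    gradientOn: (Seq[A], Vector[Double]) => Vector[Double],  // gradient of the cost on one mini-batch
    init: Vector[Double],
    batchSize: Int = 32,
    learningRate: Double = 0.01,
    epochs: Int = 10): Vector[Double] = {
  var w = init
  for (_ <- 1 to epochs) {
    // Randomly shuffle the training dataset, then walk through it one mini-batch at a time.
    for (batch <- Random.shuffle(data).grouped(batchSize)) {
      val g = gradientOn(batch, w)
      w = w.zip(g).map { case (wi, gi) => wi - learningRate * gi }
    }
  }
  w
}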
21
Training Neural Networks: SGD
● The mini-batch SGD is the algorithm of choice for training most NNs, because it is
significantly faster than the vanilla gradient descent.
● It is also more memory efficient! (it does not require the gradients for the whole dataset to
be in memory)
22
Convolutional neural networks
● You can combine several layers to obtain complex neural networks
● Convolution layers work on windows of the image instead of pixels and help to
detect features
● Pooling layers offer translation invariance
(credits: Clarifai)
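To make the distinction concrete, here is a hedged sketch of how a convolution/pooling pair might look with the Deeplearning4j builders used later in these slides (the layer sizes are illustrative, not the talk's exact configuration):

// import org.deeplearning4j.nn.conf.layers.{ConvolutionLayer, SubsamplingLayer}
// A 5x5 convolution that learns local features, followed by 2x2 max pooling for translation invariance.
val convolution = new ConvolutionLayer.Builder(5, 5)
  .nIn(1)              // one input channel, e.g. a grayscale image
  .nOut(20)            // 20 feature maps
  .stride(1, 1)
  .activation("relu")
  .build()

val pooling = new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
  .kernelSize(2, 2)    // pool over 2x2 windows
  .stride(2, 2)
  .build()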
23
Issues with single-node training of NNs
● Increasing the number of training examples, model parameters, or both, can
drastically improve model accuracy
● Yet the training speed-up is small when the model does not fit in GPU memory
● Hence, the size of the data or parameters is typically reduced
● Wouldn’t it be wonderful to use large-scale clusters of machines to distribute the
training of deep neural networks?
24
Distributed Neural Networks
● In 2012, Google published Large Scale Distributed Deep Networks
● In that paper, they described the main concepts behind their Deep Learning
infrastructure:
• Downpour SGD
• Model Replicas on Distributed (Partitioned) Data
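A conceptual, heavily simplified sketch of the Downpour SGD idea: every model replica repeatedly fetches the current parameters from the parameter server, computes a gradient on its own data shard, and pushes the update back asynchronously. All names below are illustrative; this is not the paper's implementation:

// Conceptual sketch only: a shared parameter server plus independent model replicas.
class ParameterServer(initial: Vector[Double]) {
  private var params = initial
  def fetch(): Vector[Double] = synchronized { params }
  def push(gradient: Vector[Double], learningRate: Double): Unit = synchronized {
    params = params.zip(gradient).map { case (p, g) => p - learningRate * g }
  }
}

def modelReplica[A](
    server: ParameterServer,
    dataShard: Seq[A],                                       // this replica's partition of the data
    gradientOn: (Seq[A], Vector[Double]) => Vector[Double],  // gradient of the cost on the shard
    steps: Int,
    learningRate: Double = 0.01): Unit = {
  for (_ <- 1 to steps) {
    val w = server.fetch()            // read the latest shared parameters
    val g = gradientOn(dataShard, w)  // compute a gradient on the local shard
    server.push(g, learningRate)      // send the update back, independently of the other replicas
  }
}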
25
(diagram: a Parameter Server coordinating several Model Replicas, each training on its own Data Shard)
26
How can we easily distribute computation?
27
What is Apache Spark?
A fast and general engine for large-scale distributed data processing,
with built-in modules for streaming, SQL, machine learning, and graph
processing
28
Advantages of using Spark for Deep Learning
● Distributed computing out of the box
● Take advantage of the Hadoop ecosystem (Hive, HDFS, ...)
● Take advantage of the Spark ecosystem (Spark Streaming, Spark SQL, ...)
29
(diagram: the Spark architecture, with a Driver Program and a Cluster Manager coordinating several Worker Nodes)
30
31
(diagram, repeated: the Parameter Server with Model Replicas and Data Shards)
32
33
(diagram: the parameter-server architecture mapped onto Spark; the Driver Program plays the role of the Parameter Server, and each Worker Node holds a Data Shard and a Model Replica, coordinated by the Cluster Manager)
34
35
Deeplearning4j
Works as a job within Spark (the parameter-averaging step is sketched after this list):
1. Shards of the input dataset are distributed over all cores
2. Workers process data synchronously in parallel
3. A model is trained on each shard of the input dataset
4. Workers send the transformed parameters of their models back to the master
5. The master averages the parameters
6. The parameters are used to update the model on each worker’s core
7. When the error ceases to shrink, the Spark job ends
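A hedged sketch of the averaging in steps 4 to 6, expressed directly with Spark RDDs; Vector[Double] stands in for a flattened parameter vector, and none of these names come from the Deeplearning4j API:

import org.apache.spark.rdd.RDD

// Each worker reports its locally trained parameters; the master averages them element-wise,
// and the averaged parameters become the shared model for the next round.
def averageParameters(workerParams: RDD[Vector[Double]]): Vector[Double] = {
  val workers = workerParams.count()
  val summed = workerParams.reduce { (a, b) =>
    a.zip(b).map { case (x, y) => x + y }
  }
  summed.map(_ / workers)
}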
36
37
A practical example: handwriting recognition (MNIST)
What is MNIST:
● A dataset of handwritten digits
● A subset of a larger set available from NIST (National Institute of Standards and
Technology)
● The digits have been size-normalized and centered in a fixed-size image
● Has a training set of 60,000 examples,
● Has a test set of 10,000 examples
38
39
Create the Spark Context
val sc = new SparkContext(new SparkConf()
  .setMaster(masterNode)
  .setAppName("Spark-Deep-Learning")
  .set(SparkDl4jMultiLayer.AVERAGE_EACH_ITERATION, "true"))
(diagram: the Driver Program acts as the Parameter Server)
40
Load MNIST dataset into Apache Spark
// split data
val (trainingSet, testSet) = new MnistDataSetIterator(1, numSamples, true)
  .splitDatasetAt(numForTraining)
// distribute training data over the cluster
val sparkTrainingData = sc.parallelize(trainingSet).cache()
(diagram: each Worker Node holds a Data Shard and a Model Replica)
41
Prepare the Neural Network
// prepare the neural network
val neuralNetwork = prepareNeuralNetwork(height, width, numChannels)
// create Spark multi layer network
val sparkNeuralNetwork = new SparkDl4jMultiLayer(sc, neuralNetwork)
42
The Neural Network MultiLayer
new NeuralNetConfiguration.Builder()
  .seed(seed)               // Random number generator seed.
  .iterations(iterations)   // Number of optimization iterations.
  .regularization(true)     // Whether to use regularization.
  .l2(0.0005)               // L2 regularization coefficient.
  .learningRate(0.1)        // Learning rate.
  .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
  .updater(Updater.ADAGRAD) // Gradient updater.
  .list(6)                  // Number of layers (not including the input layer).
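The slide stops at .list(6); for context, here is a hedged sketch of how such a configuration is typically completed in the DL4J 0.4-era API these slides appear to use (the layer choices and the builder name are illustrative, not the talk's exact code):

val conf = builder                      // the NeuralNetConfiguration.Builder chain shown above
  .layer(0, new ConvolutionLayer.Builder(5, 5)
    .nIn(numChannels).nOut(20).stride(1, 1).activation("relu").build())
  // ... layers 1 to 4: pooling, convolution, pooling, dense ...
  .layer(5, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
    .nOut(10).activation("softmax").build())
  .backprop(true).pretrain(false)
  .build()
val multiLayerNetwork = new MultiLayerNetwork(conf)
multiLayerNetwork.init()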
43
Each layer will be similar to this one
new ConvolutionLayer.Builder(5, 5)
  .nIn(numChannels)              // Number of input neurons (channels).
  .stride(1, 1)
  .nOut(20)                      // Number of output neurons.
  .weightInit(WeightInit.XAVIER) // Weight initialization scheme.
  .activation("relu")            // Activation function: rectified linear.
  .build()
44
Train the Convolutional Neural Network
// Run the training epochs; keep the most recently fitted network for evaluation on the next slide.
var network = neuralNetwork
(0 to epochs).foreach { _ =>
  network = sparkNeuralNetwork
    .fitDataSet(sparkTrainingData, numCores * batchSize)
}
45
What can we do now? Evaluate our DNN
network.evaluateOn(testSet).stats
Examples labeled as 0 classified by model as 0: 978 times
...
Examples labeled as 4 classified by model as 9: 27 times
...
Examples labeled as 7 classified by model as 9: 26 times
...
Examples labeled as 9 classified by model as 9: 884 times
=====================Scores=====================
Accuracy: 0.9565
Precision: 0.9563
Recall: 0.9563
F1 Score: 0.9563
Produced with 100 epochs.
46
http://bit.ly/1Yq8Tzu
