Deep Learning
1
In the name of God
Mehrnaz Faraz
Faculty of Electrical Engineering
K. N. Toosi University of Technology
Milad Abbasi
Faculty of Electrical Engineering
Sharif University of Technology
AI, ML and DL
2
Why Deep Learning?
• Performs better at classification than other traditional
Machine Learning methods, because:
– Deep learning methods use multi-layer processing to
achieve better accuracy in less time.
3
Deep Learning Applications
4
Image recognition
Speech recognition
Robots and self-driving cars
Healthcare
Portfolio management
Weather forecast
Strengths and Challenges
• Strengths:
– No need for feature engineering
– Best results with unstructured data
– No need for labeling of data
– Efficient at delivering high-quality results
– Hardware and software support
• Challenges:
– The need for lots of data
– Neural networks at the core of deep learning are black
boxes
– Overfitting the model
– Lack of flexibility
5
Deep Learning Algorithms
• Unsupervised Learning
– Auto Encoders (AE)
– Generative Adversarial Networks (GAN)
• Supervised Learning
– Recurrent Neural Networks (RNN)
– Convolutional Neural Networks (CNN)
• Semi-Supervised Learning
• Reinforcement Learning
6
Unsupervised Learning
• Find structure or patterns in the unlabeled data
– Clustering
– Compression
– Feature & Representation learning
– Dimensionality reduction
– Generative models
7
Supervised Learning
• Learn a mapping function f where: y = f(x)
– Classification
– Regression
8
Data: (x,y)
Deep Neural Network
9
Overfitting
• The model is fit too well to the training set, but does not
perform as well on the test set
• Occurs in complex networks trained on small data sets
10
Overfitting
• Steps for reducing overfitting:
– Add more data
– Use data augmentation
– Use architectures that generalize well
– Add regularization (mostly dropout, L1/L2 regularization
are also possible)
– Reduce architecture complexity
11
Training Neural Network
• Data preparation
• Choosing a network architecture
• Training algorithm and optimization
• Improving training algorithm
– Improve convergence rate
12
Data preparation
• Data need to be made adequate for a given method
• Data in the real world is dirty
– Incomplete: Lacking attribute values, lacking certain
attributes of interest
– Noisy: Containing errors or outliers
13
More data = Better training
Removing incomplete and ruined data
Data preparation
• Data pre-processing:
– Normalization: Helps prevent attributes with large
ranges from outweighing attributes with small ranges
• Min-max normalization: x_new = (x_old - min(x_old)) / (max(x_old) - min(x_old)) × (new_max - new_min) + new_min
• Z-score normalization: x_new = (x_old - mean(x_old)) / std(x_old)
– Doesn’t eliminate outliers
– Mean: 0 , Std: 1
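A rough NumPy illustration of both normalizations (the target range new_min/new_max and the example data are assumptions for illustration, not from the slides):

```python
import numpy as np

def min_max_normalize(x, new_min=0.0, new_max=1.0):
    """Rescale x linearly into the range [new_min, new_max]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

def z_score_normalize(x):
    """Shift and scale x to mean 0 and standard deviation 1."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

data = np.array([2.0, 4.0, 6.0, 100.0])   # the outlier 100 is kept by both methods
print(min_max_normalize(data))            # [0.     0.0204 0.0408 1.    ]
print(z_score_normalize(data))            # mean ~0, std ~1
```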
14
Data preparation
• Histogram equalization:
– Is a technique for adjusting image intensities to enhance
contrast
15
Before histogram equalization / After histogram equalization
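A minimal NumPy sketch of histogram equalization for an 8-bit grayscale image (the uint8 input and the random low-contrast test image are assumptions for illustration):

```python
import numpy as np

def equalize_histogram(img):
    """Spread the intensity histogram of a uint8 grayscale image over [0, 255]."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()                               # cumulative distribution of intensities
    cdf_nonzero = np.ma.masked_equal(cdf, 0)          # ignore empty bins
    cdf_scaled = (cdf_nonzero - cdf_nonzero.min()) * 255.0 / (cdf_nonzero.max() - cdf_nonzero.min())
    lookup = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
    return lookup[img]                                # map every pixel through the lookup table

img = np.random.randint(100, 141, size=(64, 64), dtype=np.uint8)     # low-contrast image
print(img.min(), img.max())                                          # 100 140
print(equalize_histogram(img).min(), equalize_histogram(img).max())  # spread toward 0 255
```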
Data preparation
• Data augmentation
– Means increasing the number of data points; for images,
this means increasing the number of images in the dataset.
– Popular augmentation techniques (in terms of image):
• Flip
• Rotation
• Scale
• Crop
• Translation
• Gaussian noise
• Conditional GANs
16
Flip
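A small NumPy sketch of several of these augmentations applied to one image array (illustrative only; real pipelines usually rely on a library such as torchvision or albumentations):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))                 # stand-in for an RGB image with values in [0, 1]

flipped    = np.fliplr(img)                   # horizontal flip
rotated    = np.rot90(img, k=1)               # 90-degree rotation
top, left  = rng.integers(0, 9, size=2)
cropped    = img[top:top + 24, left:left + 24]             # random 24x24 crop
translated = np.roll(img, shift=(4, -2), axis=(0, 1))      # shift 4 px down, 2 px left
noisy      = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)   # additive Gaussian noise

print(flipped.shape, rotated.shape, cropped.shape, translated.shape, noisy.shape)
```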
Choosing a network architecture
• Layer’s type
– 1 dimensional data (signal, vector): FC, AE
– N dimensional data (image, tensor): CNN
– Time series data (speech, video, text): RNN
• Number of parameters
– Number of layers
– Number of neurons
• Start with a minimum of hidden layers and nodes, and
increase the number of hidden layers and nodes until you
get good performance.
17
Trial and error
Training algorithm and optimization
• Training:
• Error calculation:
– Cost/ Loss function:
• Mean squared
• Cross-entropy
• Softmax
18
Training loop: Input → Feed forward → Error calculation → Back propagation → Updating parameters
Mean squared error: L(y, ŷ) = (1/N) Σ_i (y_i - ŷ_i)²
Training algorithm and optimization
• Optimization:
– Modify and update the weights step by step to reach the
lowest error
– Uses the derivative (gradient) of the cost function
– Goal: reach the global minimum
19
Loss surface
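To make the cycle of feed forward, error calculation, back propagation, and parameter update concrete, here is a minimal PyTorch sketch of one optimization step with a mean-squared-error loss; the network shape and random data are made up for illustration:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()                                  # L = (1/N) * sum_i (y_i - y_hat_i)^2
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 10)                                 # a batch of 64 inputs
y = torch.randn(64, 1)                                  # matching targets

y_hat = model(x)                                        # feed forward
loss = loss_fn(y_hat, y)                                # error calculation
optimizer.zero_grad()
loss.backward()                                         # back propagation
optimizer.step()                                        # updating parameters
print(loss.item())
```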
Training algorithm and optimization
– Gradient descent variants:
• Stochastic gradient descent
• Batch gradient descent
• Mini-batch gradient descent
– Gradient descent optimization algorithm:
• Momentum
• Adagrad
• Adadelta
• Adam
• RMSprop
• …
20
Stochastic Gradient Descent
• Updates the parameters for each training example x_i and
its label y_i
• Avoids the redundant computations that batch gradient
descent performs on large datasets
• Performs frequent updates with a high variance
(fluctuation)
21
Batch Gradient Descent
• Calculates the gradients for the whole dataset to perform
just one update
• Can be very slow
• The learning rate (η) determines how big an update we
perform
θ_new = θ_old - η ∇_θ J(θ)
22
Mini-batch Gradient Descent
• Performs an update for every mini-batch of n training
examples: θ_new = θ_old - η ∇_θ J(θ; x^(i:i+n), y^(i:i+n))
– Reduces the variance of the parameter updates
– Can make use of highly optimized matrix operations
– Common mini-batch sizes range between 50 and 256 (see the sketch below)
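A NumPy sketch of this update loop on a linear-regression loss; batch_size=1 recovers stochastic gradient descent and batch_size=len(X) recovers batch gradient descent (the data, model, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=1000)

def minibatch_gd(X, y, lr=0.1, batch_size=64, epochs=20):
    """Mini-batch gradient descent for least squares: theta_new = theta_old - lr * grad."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = 2.0 / len(idx) * Xb.T @ (Xb @ theta - yb)   # gradient of MSE on the mini-batch
            theta = theta - lr * grad                          # parameter update
    return theta

print(minibatch_gd(X, y))   # approaches the true weights [2, -1, 0.5]
```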
23
Improving Training Algorithms
• Batch normalization
• Regularization
• Dropout
• Transfer learning
24
Batch Normalization
• Is a normalization method/layer for neural networks
• Helps prevent overfitting
• Reduces the network’s dependence on weight initialization
• Improves the gradient flow through the network
• Improves the training speed, performance, and stability of the network
• Allows higher learning rates
25
Batch Normalization
• How does batch normalization prevent overfitting?
– It has a slight regularization effect
– Similar to dropout, it adds some noise to each hidden
layer’s activations.
26
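A minimal NumPy sketch of what a batch-normalization layer computes at training time; gamma and beta are the learnable scale and shift, and the running statistics used at test time are omitted:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """x: (batch, features). Normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)    # zero mean, unit variance per feature
    return gamma * x_hat + beta                # learnable rescaling

x = np.random.default_rng(0).normal(loc=5.0, scale=3.0, size=(128, 4))
out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))   # ~0 and ~1 per feature
```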
Regularization
• The process of introducing additional information to the
loss function: a regularization term
• Adds a penalty for exploring certain regions of the function
space
• V is the loss function
• λ is the importance factor of the regularization term
27
min_w Σ_{i=1}^{n} V(h(x_i), y_i) + λ R(w)
Regularization
• How does regularization prevent overfitting?
• L1 & L2 regularization:
– L1: Lasso Regression adds “absolute value of magnitude”
of coefficient as penalty term to the loss function.
– L2: Ridge regression adds “squared magnitude” of
coefficient as penalty term to the loss function.
28
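A sketch of how these penalty terms enter the loss; the linear model, the data, and λ are illustrative, not from the slides:

```python
import numpy as np

def regularized_loss(w, X, y, lam=0.1, kind="l2"):
    """MSE data term plus an L1 (lasso) or L2 (ridge) penalty on the weights."""
    residual = X @ w - y
    data_term = np.mean(residual ** 2)                 # V(h(x), y)
    if kind == "l1":
        penalty = lam * np.sum(np.abs(w))              # absolute value of magnitude
    else:
        penalty = lam * np.sum(w ** 2)                 # squared magnitude
    return data_term + penalty

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
w = np.array([1.0, -2.0, 0.5])
print(regularized_loss(w, X, y, kind="l1"), regularized_loss(w, X, y, kind="l2"))
```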
Regularization
• f(x) = w^T x + b
• min_w Σ_{i=1}^{n} V(h(x_i), y_i) + λ R(w)
• Increasing λ → decreasing R(w) → decreasing w
29
Under-fitting / Appropriate fitting / Over-fitting
Dropout
• Neurons chosen at random are ignored during training.
• These neurons are not considered during a particular forward or
backward pass.
• A dropout probability of 0.5 usually works well
• Not used on the output layer
30
Standard neural network vs. after applying dropout
Dropout
• How does dropout prevent overfitting?
– Dropout is a fast regularization method
– It prevents a layer’s "over-reliance" on a few of its inputs
– The network becomes less sensitive to the specific weights
of neurons
– Better generalization
31
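A NumPy sketch of (inverted) dropout applied to one layer's activations during training; at test time the activations are passed through unchanged:

```python
import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=np.random.default_rng(0)):
    """Randomly zero a fraction p_drop of the activations and rescale the survivors."""
    if not training:
        return activations
    keep_mask = rng.random(activations.shape) >= p_drop
    return activations * keep_mask / (1.0 - p_drop)    # inverted dropout keeps the expected value

h = np.ones((4, 8))            # pretend hidden-layer activations
print(dropout(h, p_drop=0.5))  # roughly half the units are zeroed, the rest scaled by 2
```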
Transfer Learning
• A model trained on one task is re-purposed on a second
related task.
• We first train a base network on a base (big) dataset and
task
• Repurpose the learned features, or transfer them, to a
second target network to be trained on a target dataset
and task
32
Transfer Learning
33
Training with IMAGENET (big data): a stack of Conv and Pool layers followed by FC and Soft Max layers.
Transfer the weights to the same architecture for our data (low data): freeze the transferred Conv/Pool layers and train the remaining FC and Soft Max layers.
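A common PyTorch pattern for the scheme above, assuming torchvision's ImageNet-pretrained ResNet-18 as the base network; the 10-class target head is an assumption for illustration:

```python
import torch.nn as nn
from torchvision import models

# Base network trained on ImageNet (the "big data" task)
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the transferred feature-extraction layers
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head and train only it on our (low data) target task
model.fc = nn.Linear(model.fc.in_features, 10)   # 10 target classes, assumed

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)   # only the new fc layer's weight and bias
```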
Hyper Parameter Optimization
• Hyper parameter:
– A parameter whose value is set before the learning
process begins
– Initial weights
– Number of layers
– Number of neurons
– Learning rate
– Convolution kernel width,…
34
Hyper Parameter Optimization
• Manual tuning
– Monitor and visualize the loss curve/ accuracy
• Automatic optimization
– Random search
– Grid search
– Bayesian Hyper parameter optimization
35
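A minimal random-search sketch; train_and_evaluate is a hypothetical stand-in for one full training run that returns a validation score:

```python
import random

def train_and_evaluate(learning_rate, num_layers, num_neurons):
    # Hypothetical placeholder: train a network with these hyperparameters
    # and return its validation accuracy.
    return random.random()

random.seed(0)
best_score, best_config = -1.0, None
for _ in range(20):                                   # 20 random trials
    config = {
        "learning_rate": 10 ** random.uniform(-4, -1),
        "num_layers": random.randint(1, 4),
        "num_neurons": random.choice([32, 64, 128, 256]),
    }
    score = train_and_evaluate(**config)
    if score > best_score:
        best_score, best_config = score, config
print(best_config, best_score)
```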
Hyper Parameter Optimization
• Bayesian Hyper parameter optimization:
– Build a probability model of the objective function and
use it to select the most promising hyperparameters to
evaluate on the true objective function
36
Cross Validation
• K-fold cross validation:
– A model is trained using (k-1) of the folds as training data
– The resulting model is validated on the remaining part of
the data
37
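A NumPy sketch of k-fold cross-validation; the least-squares fit and negative-MSE score below are placeholders standing in for training and validating a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

def fit(X_train, y_train):
    return np.linalg.lstsq(X_train, y_train, rcond=None)[0]   # placeholder model

def score(w, X_val, y_val):
    return -np.mean((X_val @ w - y_val) ** 2)                 # negative MSE, higher is better

k = 5
folds = np.array_split(rng.permutation(len(X)), k)
scores = []
for i in range(k):
    val_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    model = fit(X[train_idx], y[train_idx])                   # train on k-1 folds
    scores.append(score(model, X[val_idx], y[val_idx]))       # validate on the held-out fold
print(np.mean(scores))
```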
Weight initialization
• Zero initialization:
– Fully connected, no asymmetry
– In a layer, every neuron has the same output
– In a layer, all the weights update in the same way
38
Weight initialization
• Small random numbers:
– Symmetry breaks
– Causes “Vanishing Gradient” flowing backward through
the network
• A concern for deep networks
39
∂E/∂w11^(1) = (∂E/∂e1) · (∂e1/∂o1^(2)) · (∂o1^(2)/∂net1^(2)) · (∂net1^(2)/∂o1^(1)) · (∂o1^(1)/∂net1^(1)) · (∂net1^(1)/∂w11^(1)) = e1 · f'^(2) · w11^(2) · f'^(1) · x1
Weight initialization
• Large random numbers:
– Symmetry breaks
– Causes saturation
– Causes “Exploding Gradient ” flowing backward through
the network
40
∂E/∂w11^(1) = (∂E/∂e1) · (∂e1/∂o1^(2)) · (∂o1^(2)/∂net1^(2)) · (∂net1^(2)/∂o1^(1)) · (∂o1^(1)/∂net1^(1)) · (∂net1^(1)/∂w11^(1)) = e1 · f'^(2) · w11^(2) · f'^(1) · x1
f(x) = w^T x + b
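A NumPy sketch of why the scale of the initial weights matters: pushing the same batch through several tanh layers with small versus large random weights makes the activations shrink toward zero or saturate (layer sizes and scales are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 100))          # a batch of 100-dimensional inputs

for scale in (0.01, 1.0):               # small vs. large random initial weights
    h = x
    for _ in range(10):                 # 10 tanh layers, all initialized at the same scale
        W = rng.normal(0, scale, size=(100, 100))
        h = np.tanh(h @ W)
    print(f"init std {scale}: activation std {h.std():.4f}")
# Small weights: activations shrink layer by layer, so backward gradients vanish.
# Large weights: pre-activations blow up and tanh saturates near +/-1.
```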
Weight initialization
• Train with multiple small random numbers
• Measure the errors
• Select the initial weights that produce the smallest
errors
41
Candidate initial weights w1, w2, w3, … and their resulting errors E1, …, En
Feature Selection
• Automatic
• Manual (by hand)
– Forward selection
– Backward elimination
42
Forward Selection
• Begins with an empty model and adds in variables one by
one
• Adds the one variable that gives the single best
improvement to our model
43
Train with each candidate feature x1, x2, x3 separately, giving errors e1, e2, e3.
min(e1, e2, e3): the feature with minimum e is selected
Forward Selection
• Suppose that x1 is selected
• x1 and x2 → e12
• x1 and x3 → e13
44
min(e12, e13): the features with minimum e are selected
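A sketch of greedy forward selection; the evaluate function below is a placeholder least-squares fit that returns the error e for a candidate feature subset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=200)   # only features 0 and 2 matter

def evaluate(feature_idx):
    """Placeholder: fit least squares on the chosen columns and return the error e (MSE)."""
    Xs = X[:, feature_idx]
    w = np.linalg.lstsq(Xs, y, rcond=None)[0]
    return np.mean((Xs @ w - y) ** 2)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(2):                                    # add two features, one at a time
    errors = {f: evaluate(selected + [f]) for f in remaining}
    best = min(errors, key=errors.get)                # the feature with minimum e is selected
    selected.append(best)
    remaining.remove(best)
    print(f"selected {selected}, error {errors[best]:.4f}")
```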
Backward Elimination
• Removes the least significant feature at each iteration
• Steps:
– Train the model with all the independent variables
– Eliminate independent variables with no improvement
on performance
– Repeat training until no improvement is observed on
removal of features
45