Course Code | Course Title | L T P U
CSE431 | DEEP LEARNING | 3 0 0 3
Course Objectives: The student learns various state-of-the-art deep learning algorithms and their applications
to solve real-world problems. The student develops skills to design neural network architectures and training
procedures using various deep learning platforms and software libraries.
Course Learning Outcomes:
On completing this course, the student will be able to:
CO1: describe feedforward and deep networks.
CO2: design single- and multi-layer feedforward deep networks and tune various hyperparameters.
CO3: analyze the performance of deep networks.
Unit-I
Introduction to machine learning: linear models (SVMs, perceptrons, logistic regression). Introduction to neural nets: what a shallow network computes. Training a network: loss functions, backpropagation and stochastic gradient descent. Neural networks as universal function approximators.
Unit-II
Historical context and motivation for deep learning; basic supervised classification task; optimizing a logistic classifier using gradient descent, stochastic gradient descent, momentum, and adaptive subgradient methods. Feedforward neural networks, deep networks, regularizing a deep network, model exploration, and hyperparameter tuning.
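A minimal illustrative sketch (not prescribed course material), assuming NumPy and placeholder data: optimizing a logistic classifier with plain gradient descent, gradient descent with momentum, and an AdaGrad-style adaptive (sub)gradient update, so the three update rules can be compared side by side.

```python
# Illustrative sketch: gradient descent, momentum, and AdaGrad on logistic loss.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=500) > 0).astype(float)

def grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted probabilities
    return X.T @ (p - y) / len(y)          # gradient of the logistic loss

w_gd = np.zeros(3)                         # plain gradient descent
w_mom, v = np.zeros(3), np.zeros(3)        # momentum variant
w_ada, g2 = np.zeros(3), np.zeros(3)       # AdaGrad accumulator
lr, beta, eps = 0.5, 0.9, 1e-8

for step in range(500):
    w_gd -= lr * grad(w_gd)                              # vanilla update
    v = beta * v - lr * grad(w_mom); w_mom += v          # momentum update
    g = grad(w_ada); g2 += g ** 2                        # accumulate squared grads
    w_ada -= lr * g / (np.sqrt(g2) + eps)                # adaptive step size
```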
Unit-III
Convolutional Neural Networks: introduction to convolutional neural networks; stacking, striding and pooling; applications such as image and text classification.
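A minimal illustrative sketch (not prescribed course material), written directly in NumPy with an arbitrary image and kernel, showing the two operations that convolutional layers stack: a strided 2-D convolution followed by max pooling.

```python
# Illustrative sketch: strided 2-D convolution and max pooling in NumPy.
import numpy as np

def conv2d(img, kernel, stride=1):
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)      # dot product with the filter
    return out

def max_pool(fmap, size=2):
    oh, ow = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

image = np.random.default_rng(2).normal(size=(8, 8))
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # toy vertical-edge filter
features = max_pool(conv2d(image, edge_kernel, stride=1))
print(features.shape)  # (3, 3): 7x7 feature map pooled with a 2x2 window
```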
Unit-IV
Sequence Modeling: Recurrent Nets: unfolding computational graphs, recurrent neural networks (RNNs), bidirectional RNNs, encoder-decoder sequence-to-sequence architectures, deep recurrent networks; generative adversarial networks (GANs).
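A minimal illustrative sketch (not prescribed course material), assuming NumPy and arbitrary dimensions and weights: a vanilla RNN unfolded over a short sequence, i.e. the computational graph that the recurrence defines when unrolled in time.

```python
# Illustrative sketch: unfolding a vanilla RNN over a sequence (NumPy only).
import numpy as np

rng = np.random.default_rng(3)
input_dim, hidden_dim, seq_len = 4, 6, 5
W_xh = rng.normal(scale=0.3, size=(input_dim, hidden_dim))   # input-to-hidden
W_hh = rng.normal(scale=0.3, size=(hidden_dim, hidden_dim))  # hidden-to-hidden
b_h = np.zeros(hidden_dim)

xs = rng.normal(size=(seq_len, input_dim))   # one input sequence
h = np.zeros(hidden_dim)                     # initial hidden state
states = []
for t in range(seq_len):                     # unfolding the recurrence in time
    h = np.tanh(xs[t] @ W_xh + h @ W_hh + b_h)
    states.append(h)
print(len(states), states[-1].shape)         # 5 time steps, (6,) hidden state
```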
Unit-V
Autoencoders: undercomplete autoencoders, regularized autoencoders, sparse autoencoders, denoising autoencoders; representational power, layer size, and depth of autoencoders; stochastic encoders and decoders. Long short-term memory (LSTM) networks.
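A minimal illustrative sketch (not prescribed course material), assuming NumPy and placeholder data and sizes: an undercomplete autoencoder (bottleneck smaller than the input) trained as a denoising autoencoder, i.e. it reconstructs the clean input from a noise-corrupted copy.

```python
# Illustrative sketch: undercomplete denoising autoencoder trained with SGD.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(256, 10))                 # clean data
enc_W = rng.normal(scale=0.1, size=(10, 3))    # 10 -> 3 bottleneck (undercomplete)
dec_W = rng.normal(scale=0.1, size=(3, 10))
lr = 0.05

for epoch in range(500):
    noisy = X + 0.3 * rng.normal(size=X.shape) # corrupt the input
    code = np.tanh(noisy @ enc_W)              # encoder
    recon = code @ dec_W                       # linear decoder
    err = recon - X                            # reconstruct the *clean* data
    # gradients (up to a constant factor) of the mean squared reconstruction error
    d_dec = code.T @ err / len(X)
    d_code = err @ dec_W.T * (1 - code ** 2)
    d_enc = noisy.T @ d_code / len(X)
    dec_W -= lr * d_dec
    enc_W -= lr * d_enc
```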
Text book(s):
T1. Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016.
T2. Jeff Heaton, Deep Learning and Neural Networks, Heaton Research Inc, 2015.
Reference Book:
R1. Mindy L. Hall, Deep Learning, VDM Verlag, 2011.
R2. Li Deng and Dong Yu, Deep Learning: Methods and Applications (Foundations and Trends in Signal Processing), Now Publishers Inc, 2014.
Lecture-wise plan:
Lectures | Learning objective / Topics to be covered | Reference (pages of Text/Ref Books)
1-2 | Introduction to machine learning (different ML methods) | T1: pp. 1-8
3-10 | Introduction to neural nets: what a shallow network computes; training a network: loss functions, backpropagation and stochastic gradient descent; neural networks as universal function approximators | R1: pp. 30-42; T2: pp. 20-35
11-15 | Historical context and motivation for deep learning; basic supervised classification task; optimizing a logistic classifier using gradient descent | T1: pp. 22-34
15-19 | Momentum and adaptive subgradient methods | T2: pp. 51-62
20-29 | Convolutional neural networks: stacking, striding and pooling; applications such as image and text classification | R2: pp. 153-167; R1: pp. 46-65
30-35 | Sequence modeling: recurrent nets: unfolding computational graphs, recurrent neural networks (RNNs) | T1: pp. 37-49
36-39 | Bidirectional RNNs, encoder-decoder sequence-to-sequence architectures, deep recurrent networks | T1: pp. 70-89
40-45 | Autoencoders: undercomplete, regularized, sparse, and denoising autoencoders; representational power, layer size, and depth of autoencoders; stochastic encoders and decoders | T1: pp. 95-120
Evaluation Scheme:
Component | Duration | Weightage (%) | Remarks
Internal I | - | 25 | -
Mid Term Exam | 2 hours | 20 | Closed Book
Internal II | - | 25 | -
Comprehensive Exam | 3 hours | 30 | Closed Book
1. Attendance Policy: A student must normally maintain a minimum of 75% attendance in the course, without which he/she shall be disqualified from appearing in the respective examination.
2. Make-up Policy: A student who misses any component of evaluation for genuine reasons must immediately approach the instructor with a request for a make-up examination, stating the reasons. The decision of the instructor in all matters of make-up shall be final.
3. Chamber Consultation Hours: During the Chamber Consultation Hours, the students can consult the
respective faculty in his/her chamber without prior appointment.
