What is meant by deep learning?
Deep learning is a subset of machine learning that is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain (albeit far from matching its ability), allowing them to "learn" from large amounts of data.
Deep Learning Report
1. NAME : ABHILESH KUMAR SINGH
UNIVERSITY ROLL NO: 12500119177
COMPUTER SCIENCE AND ENGINEERING
NEURAL NETWORKS AND DEEP LEARNING
2. Artificial Neural Network
This Artificial Neural Network tutorial provides basic and advanced concepts of
ANNs, and is developed for beginners as well as professionals.
The term "artificial neural network" refers to a biologically inspired sub-field of
artificial intelligence modeled after the brain. An artificial neural network is
usually a computational network based on the biological neural networks that
form the structure of the human brain. Just as the human brain has
neurons interconnected with one another, artificial neural networks also have
neurons that are linked to each other in the various layers of the network. These
neurons are known as nodes.
This tutorial covers all aspects of artificial neural networks. We will
discuss ANNs, Adaptive Resonance Theory, Kohonen self-organizing maps,
building blocks, unsupervised learning, genetic algorithms, etc.
3. Multi-Layer Artificial Neural Networks
As with individual perceptrons, multi-layer networks can be used for
learning tasks. However, the learning algorithm that we look at (the
backpropagation routine) is derived mathematically, using differential
calculus. The derivation relies on having a differentiable threshold function,
which effectively rules out using perceptron units if we want to be sure that
backpropagation works correctly: the step function in perceptrons is not
continuous, and hence not differentiable.
Example Multi-layer ANN with Sigmoid Units
We will concern ourselves here with ANNs containing only one hidden layer,
as this makes describing the backpropagation routine easier. Note that
networks where you can feed in the input on the left and propagate it
forward to get an output are called feed-forward networks.
For the units themselves, the differentiable sigmoid function was
chosen, as it has similar properties to the step function in perceptron units.
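To make the routine concrete, here is a minimal sketch (not from the slides themselves) of a one-hidden-layer feed-forward network with sigmoid units, trained by backpropagation on the XOR problem; the layer sizes, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    # Differentiable threshold function: sigma(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

# Toy XOR data: 4 examples, 2 inputs each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights (4 hidden units assumed)
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 0.5                       # assumed learning rate

for epoch in range(10000):
    # Forward pass: feed the input in and propagate it forward to the output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the sigmoid's derivative is s * (1 - s),
    # which is exactly why a differentiable unit is required
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```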
4. Fuzzy Relations
• Fuzzy relations also map elements of one universe, say X, to those of
another universe, say Y, through the Cartesian product of the two universes.
• The "strength" of the relation between ordered pairs of the two universes is
measured with a membership function rather than a characteristic function,
expressing various "degrees" of strength of the relation on the unit interval
[0, 1].
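As an illustration (not from the slides), a finite fuzzy relation R on X × Y can be stored as a matrix of membership grades mu_R(x, y) in [0, 1]; the universes and grades below are made-up example values.

```python
import numpy as np

# Hypothetical universes (illustrative labels only)
X = ["x1", "x2"]
Y = ["y1", "y2", "y3"]

# R[i, j] = mu_R(X[i], Y[j]): the degree to which X[i] relates to Y[j].
# A crisp relation would restrict every entry to {0, 1};
# a fuzzy relation allows the whole unit interval [0, 1].
R = np.array([[0.8, 0.3, 0.0],
              [0.1, 0.9, 0.5]])

for i, x in enumerate(X):
    for j, yv in enumerate(Y):
        print(f"mu_R({x}, {yv}) = {R[i, j]}")
```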
5. Properties of Fuzzy Relations
• Just as for crisp relations, the properties of commutativity, associativity,
distributivity, involution, and idempotency all hold for fuzzy relations.
• Moreover, De Morgan's principles hold for fuzzy relations just as they do for
crisp (classical) relations.
• The null relation, O, and the complete relation, E, are analogous to the null
set and the whole set in set-theoretic form, respectively.
• Fuzzy relations are not constrained, as is the case for fuzzy sets in general,
by the excluded middle axioms. Since a fuzzy relation R is also a fuzzy set,
there is overlap between a relation and its complement.
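A quick numerical check of the last point (an illustrative sketch with made-up membership values): under the standard max/min operators, R united with its complement need not equal the complete relation E, and R intersected with its complement need not equal the null relation O.

```python
import numpy as np

R = np.array([[0.8, 0.3],
              [0.1, 0.9]])      # illustrative fuzzy relation
R_comp = 1.0 - R                # standard complement: mu = 1 - mu_R

union = np.maximum(R, R_comp)         # mu of R union R': max(mu_R, mu_R')
intersection = np.minimum(R, R_comp)  # mu of R intersect R': min(mu_R, mu_R')

E = np.ones_like(R)   # complete relation (all grades 1)
O = np.zeros_like(R)  # null relation (all grades 0)

print(np.array_equal(union, E))         # False: excluded middle fails
print(np.array_equal(intersection, O))  # False: law of contradiction fails
print(intersection)                     # the overlap of R with its complement
```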
6. Three Learning Paradigms For The Future Development Of Deep Learning
Blended learning
Semi-supervised learning is becoming increasingly popular in the
field of machine learning because it can perform exceptionally well on
supervised problems with little labeled data. For example, a well-designed
semi-supervised generative adversarial network (GAN) achieves an accuracy
of over 90% on the MNIST dataset using only 25 labeled training samples.
Semi-supervised learning is designed for data sets that have a large number of
unlabeled samples and a small number of labeled samples. Traditionally,
supervised learning uses the labeled part of the data set, while unsupervised
learning uses the unlabeled part. A semi-supervised
learning model can combine the labeled data with the information extracted
from the unlabeled data set.
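As a simple illustration of this idea (a hedged sketch of pseudo-labeling, not the GAN from the example above): train a model on the few labeled samples, then add its most confident predictions on the unlabeled samples as extra training data. The classifier, data, and confidence threshold below are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.95):
    """One round of pseudo-labeling with an assumed confidence threshold."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_lab, y_lab)                      # supervised step on labeled data

    probs = model.predict_proba(X_unlab)         # beliefs about unlabeled data
    confident = probs.max(axis=1) >= threshold   # keep high-confidence guesses

    X_new = np.vstack([X_lab, X_unlab[confident]])
    y_new = np.concatenate(
        [y_lab, model.classes_[probs[confident].argmax(axis=1)]])

    model.fit(X_new, y_new)  # retrain on labeled + pseudo-labeled data
    return model

# Tiny made-up data set: 4 labeled points, 200 unlabeled points
rng = np.random.default_rng(0)
X_lab = np.array([[-2.0, 0.0], [-1.5, 0.5], [2.0, 0.0], [1.5, -0.5]])
y_lab = np.array([0, 0, 1, 1])
X_unlab = np.vstack([rng.normal(-2, 0.5, (100, 2)),
                     rng.normal(2, 0.5, (100, 2))])

model = pseudo_label(X_lab, y_lab, X_unlab)
print(model.predict([[-2.0, 0.1], [2.0, -0.1]]))  # expect [0 1]
```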
7. Component learning
Component learning uses not only the knowledge of one model but the
knowledge of multiple models. The idea is that, through a unique combination of
information or inputs (both static and dynamic), deep learning can reach a deeper
understanding and better performance than a single model.
Transfer learning is a very clear example of component learning. Based on this idea,
model weights pre-trained on a similar problem can be fine-tuned for a specific
problem. For example, a model pre-trained for image classification, such as Inception
or VGG-16, can be reused to distinguish different categories of images.
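A minimal transfer-learning sketch along these lines, assuming TensorFlow/Keras is available and that the new task has 10 image classes (an arbitrary choice); the frozen-base-plus-new-head structure is the standard fine-tuning recipe, not anything specific from the slides.

```python
import tensorflow as tf

NUM_CLASSES = 10  # assumed number of categories in the new task

# Load VGG-16 pre-trained on ImageNet, without its original classifier head
base = tf.keras.applications.VGG16(weights="imagenet",
                                   include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained weights; only the new head trains

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=5)  # fine-tune on the new data
```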
8. Simplify learning
Logically speaking, models like GPT-3 are very convincing, but history has
repeatedly shown that "successful science" is the science that has the greatest
impact on mankind, and the academic world can drift too far from reality and become
too vague. At the end of the 20th century, neural networks were forgotten for a
period of time because too little data was available; the idea, however clever, was
useless in practice.
This shows, directly or indirectly, that much of deep learning research is
about reducing the number of necessary parameters, which is closely related to
improving generalization ability and performance. For example, the introduction of
convolutional layers greatly reduces the number of parameters neural
networks need to process images. Recurrent layers integrate the idea of time while
reusing the same weights, so that neural networks can process sequences with
fewer parameters.
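To put numbers on the convolution example (a back-of-the-envelope sketch with assumed layer sizes, not figures from the slides): a fully connected layer over a small RGB image needs orders of magnitude more parameters than a convolutional layer producing a comparable number of feature channels.

```python
# Assumed sizes for illustration: a 224x224 RGB image, 64 output units/channels
H, W, C = 224, 224, 3
K, FILTERS = 3, 64  # 3x3 convolution kernels

# Fully connected: every input pixel connects to every output unit
dense_params = (H * W * C) * 64 + 64

# Convolutional: each filter reuses the same KxK weights at every position
conv_params = (K * K * C) * FILTERS + FILTERS

print(f"dense layer: {dense_params:,} parameters")  # 9,633,856 (~9.6 million)
print(f"conv layer:  {conv_params:,} parameters")   # 1,792
```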