Neural Networks
Priyanka Kasture
Founder: Machine Learning India
pkasture2010@gmail.com
“Artificial Intelligence leaves no doubt that it wants
its audiences to enter a realm of pure fantasy.”
Fantasy?
Say no more!
Artificial Intelligence, today, is a very plausible reality,
affecting and impacting our lives in ways so subtle that
we often don’t even realize it.
Google’s search predictions, voice recognition, DeepMind’s
AlphaGo, Microsoft’s Tay, IBM’s Watson, and Apple’s Siri are all
examples, and the list seems to have no end.
What these technologies have in common are deep machine
learning algorithms that let them learn from the data they
collect, and react and respond in ways that are remarkably
human-like.
The field of AI attempts not just to understand intelligent
entities but also to build them.
Machines exhibit AI.
In computer science, an ideal intelligent machine is a
flexible agent that humans can interact with, and which
can learn and rationally solve problems.
The very core of AI is the machine’s ability to
continuously learn from the data it collects, analyze it
through carefully crafted algorithms, and become
progressively better at a particular goal.
Capabilities currently classified as AI include understanding
human speech, competing at high-level strategy games,
driving autonomous vehicles, interpreting complex data,
proving mathematical theorems and theories in physics,
making predictions, etc.
AI research is divided into subfields that focus on
reasoning, planning, learning, natural language
processing, perception, ability to move and manipulate
objects, and achieving general purpose intelligence.
Approaches include statistical methods, computational
intelligence and soft computing (e.g. machine learning),
probabilistic systems, mathematical optimization,
artificial neural networks, and much more.
Artificial Neural Networks (ANNs)
An Artificial Neural Network (ANN) is an
information processing paradigm that is inspired
by the way biological nervous systems, such as
the brain, process information.
The neuron is the fundamental functional unit of all
nervous-system tissue, including the brain. Each neuron
consists of a cell body (soma), a nucleus, dendrites, an axon
and synapses.
Note:
I. Not all synaptic links are of the same
strength.
II. The strength of a signal passing through a
synapse is proportional to the strength of
the synaptic link.
Human brain vs. Computer
Characteristics and capabilities of
artificial neural networks:
I. Non-linearity.
II. Input-Output mapping.
III. Adaptability.
IV. Evidential response.
V. Fault tolerance.
VI. VLSI implementability.
VII. Neurobiological analogy.
Electrical equivalent of the biological neuron:
BNN vs. ANN:
An Artificial Neural Network:
Layers in artificial neural networks:
I. The input layer.
 a. Introduces input values into the network.
 b. Applies no activation function or other processing.
II. The hidden layer(s).
 a. Perform classification of features.
 b. Two hidden layers are sufficient for most problems;
 deeper networks can model more complex feature mappings.
III. The output layer.
 a. Functionally similar to the hidden layers.
 b. Passes the outputs on to the world outside the neural
 network.
Activation Functions:
The activation function of a node defines the output of
that node or neuron, given an input or a set of inputs.
Specifically, in an ANN we compute the sum of products
of the inputs (X) and their corresponding weights (W),
apply an activation function f to the result to get the
output of that layer, and feed it as the input to the
next layer.
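As a minimal sketch of this computation in Python/NumPy (the input values, weights and the choice of sigmoid are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def layer_output(X, W, b, activation):
    """Apply an activation to the weighted sum of inputs: f(X.W + b)."""
    z = np.dot(X, W) + b      # sum of products of inputs and weights, plus bias
    return activation(z)      # activation applied element-wise

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([0.5, -1.2, 3.0])         # three inputs (illustrative values)
W = np.full((3, 2), 0.1)               # weights for a layer of two neurons
b = np.zeros(2)                        # biases
print(layer_output(X, W, b, sigmoid))  # this output feeds the next layer
```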
The need for Activation Functions:
If we do not apply an activation function, the output
signal is simply a linear function. A linear function is
just a polynomial of degree one.
Y = f(X) = X1·W1 + X2·W2 + … + Xn·Wn
A linear equation is easy to solve, but linear models are
limited in their complexity and have less power to learn
complex functional mappings from data.
We want our neural network to learn and compute not just a
linear function, but something more complicated than that.
Also, without the activation function, our neural network will
not be able to learn and model other complicated kinds of data
such as images, video, audio, etc.
Y = f(X) = X1·W1 + X2·W2 + … + Xn·Wn
(Output of the linear summer.)
A(Y) = Actual output.
(Activation function applied over the output of the linear
summer.)
Types of Activation Functions:
Choosing the right activation function:
Depending upon the properties of the problem under
consideration, we can make a better choice of an
activation function, for easy and quicker convergence
of the network.
I. Sigmoid functions and their combinations generally work better in
the case of classifiers.
II. Sigmoid and tanh functions are sometimes avoided due to the
vanishing-gradient problem.
III. The ReLU function is a good general-purpose activation and is used
in most cases these days.
IV. If we encounter dead neurons in our network, the leaky ReLU
function is the best choice.
V. Keep in mind that the ReLU function should only be used in the
hidden layers.
VI. As a rule of thumb, begin with the ReLU function and then move to
other activation functions if ReLU doesn’t give optimal results
(the common choices are sketched below).
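A minimal sketch of these functions in Python/NumPy (the leak factor alpha = 0.01 is a common default, assumed here rather than taken from the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # squashes to (0, 1); can saturate

def tanh(z):
    return np.tanh(z)                     # squashes to (-1, 1); zero-centred

def relu(z):
    return np.maximum(0.0, z)             # cheap to compute; neurons can "die"

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)  # small slope for z < 0 avoids dead neurons
```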
Artificial Neuron Models and Linear Regression:
The McCulloch–Pitts model, or the threshold function:
The earliest model of an artificial neuron was introduced by Warren McCulloch
and Walter Pitts in 1943. The McCulloch–Pitts neural model is also known as a
linear threshold gate. Its output is hard-limited, and the response of this
model is strictly binary in nature.
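A minimal sketch of such a linear threshold gate (the AND-gate weights and threshold are an illustrative assumption):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Linear threshold gate: output is 1 iff the weighted sum reaches the threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# With weights (1, 1) and threshold 2 the gate behaves as a logical AND.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # -> 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # -> 0
```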
The linear neuron model:
The linear neuron model was created to obtain the output V as-is over a
particular range and domain. In this model, the bias neuron isn’t considered
separately but as one of the inputs. In practice we hard-limit the function,
since signals can’t be processed outside a particular range.
Error functions:
The Gradient Descent Algorithm:
Gradient descent is a first-order iterative optimization algorithm. To find
a local minimum of a function using gradient descent, one takes steps
proportional to the negative of the gradient (or of the approximate
gradient) of the function at the current point. If instead one takes steps
proportional to the positive of the gradient, one approaches a local
maximum of that function; the procedure is then known as gradient
ascent.
Gradient descent is also known as steepest descent, or the method of
steepest descent. Gradient descent should not be confused with
the method of steepest descent for approximating integrals.
At a theoretical level, gradient descent is an algorithm that minimizes
functions. Given a function defined by a set of parameters, gradient
descent starts with an initial set of parameter values and iteratively moves
toward a set of parameter values that minimize the function. This iterative
minimization is achieved using calculus, taking steps in the negative
direction of the function gradient.
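A minimal sketch of the procedure on a toy one-dimensional function (the objective, learning rate and step count are illustrative assumptions):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to approach a local minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step in the negative gradient direction
    return x

# Toy objective f(x) = (x - 3)^2 with gradient 2(x - 3); the minimum is at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # ~3.0
```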
Learning rate of a Neural Network:
In order for gradient descent to work, we must set the λ
(learning rate) to an appropriate value. This parameter
determines how fast or slow we move towards the optimal
weights. If λ is very large we will overshoot the optimal
solution; if it is too small we will need too many iterations to
converge to the best values. So using a good λ is crucial.
Another good technique is to adapt the value of λ in each
iteration. The idea behind this is that the farther you are from
optimal values the faster you should move towards the solution
and thus the value of λ should be larger. The closer you get to
the solution the smaller its value should be.
Unfortunately, since you don’t know the actual optimal values,
you also don’t know how close you are to them at each step. To
resolve this you can use a technique called Bold Driver.
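A minimal sketch of the Bold Driver heuristic (the growth and shrink factors of 1.05 and 0.5 are common choices, assumed here rather than taken from the slides):

```python
def bold_driver(loss, grad, w, lam=0.1, grow=1.05, shrink=0.5, steps=100):
    """Grow the learning rate after a successful step; shrink it after a failed one."""
    prev = loss(w)
    for _ in range(steps):
        candidate = w - lam * grad(w)
        if loss(candidate) < prev:  # error went down: accept the step, speed up
            w = candidate
            prev = loss(w)
            lam *= grow
        else:                       # error went up: reject the step, slow down
            lam *= shrink
    return w

# Same toy objective as before: f(w) = (w - 3)^2.
print(bold_driver(lambda w: (w - 3) ** 2, lambda w: 2 * (w - 3), w=0.0))  # ~3.0
```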
Learning Mechanisms in
Artificial Neural Networks
A learning process is a method or mathematical rule that
improves the artificial neural network's performance; the rule
is usually applied repeatedly over the network.
Learning is done by updating the weights and bias levels of a
network as the network is simulated in a specific data
environment.
A learning rule takes the existing state (weights and biases) of
the network, compares the network's expected and actual results,
and produces new, improved values for the weights and biases.
Depending on how the network is trained, there are three main
paradigms of machine learning:
I. Unsupervised Learning
II. Supervised Learning
III. Reinforcement Learning
Five basic ways of learning in
neural networks:
I. Error-correction learning.
II. Memory-based learning.
III. Hebbian learning.
IV. Competitive learning.
V. Boltzmann learning.
Error Correction Learning:
Error-Correction Learning, used with supervised learning, is the
technique of comparing the system output to the desired
output value, and using that error to direct the training.
In the most direct route, the error values can be used to
directly adjust the tap weights, using an algorithm such as the
backpropagation algorithm.
If the system output is y, and the desired system output is
known to be d, the error signal can be defined as:
e = d-y
Error correction learning algorithms attempt to minimize this
error signal at each training iteration. The most popular
learning algorithm for use with error-correction learning is the
back-propagation algorithm.
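A minimal sketch of one error-correction update for a single linear unit, using the delta rule rather than full backpropagation (the learning rate, inputs and target are illustrative assumptions):

```python
def delta_rule_step(w, x, d, eta=0.1):
    """One error-correction update for a linear unit: w <- w + eta * e * x."""
    y = sum(wi * xi for wi, xi in zip(w, x))  # system output y
    e = d - y                                 # error signal e = d - y
    return [wi + eta * e * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(50):                           # iterate to shrink the error signal
    w = delta_rule_step(w, x=[1.0, 2.0], d=1.0)
print(w)                                      # weights now map x close to d
```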
Memory Based or Instance Based Learning:
In machine learning, instance-based learning (sometimes
called memory-based learning) is a family of learning
algorithms that, instead of performing explicit generalization,
compares new problem instances with instances seen in
training, which have been stored in memory.
It is called instance-based because it constructs hypotheses
directly from the training instances themselves. This means
that the hypothesis complexity can grow with the data: in the
worst case, a hypothesis is a list of n training items and the
computational complexity of classifying a single new instance
is O(n).
One advantage that instance-based learning has over other
methods of machine learning is its ability to adapt its model
to previously unseen data.
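A minimal sketch of the idea as a nearest-neighbour classifier (the stored instances and query point are illustrative assumptions):

```python
def nearest_neighbour(query, memory):
    """Classify by comparing against every stored instance: O(n) per query."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(memory, key=lambda pair: dist2(query, pair[0]))
    return best[1]  # label of the closest stored training instance

memory = [((0, 0), "A"), ((1, 1), "B"), ((0, 1), "A")]  # instances kept in memory
print(nearest_neighbour((0.2, 0.1), memory))            # -> "A"
```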
Hebbian Learning:
Hebbian learning is one of the oldest learning algorithms, and
is based in large part on the dynamics of biological systems.
A synapse between two neurons is strengthened when the
neurons on either side of the synapse (input and output) have
highly correlated outputs. In essence, when an input neuron
fires, if it frequently leads to the firing of the output neuron,
the synapse is strengthened.
Following the analogy to an artificial system, the tap weight is
increased when there is high correlation between two sequential
neurons.
Mathematical formulation:
W_ij[n+1] = W_ij[n] + η · x_i[n] · x_j[n]
Here η is the learning rate.
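A minimal sketch of this update rule (the learning rate and activity vector are illustrative assumptions):

```python
def hebbian_update(W, x, eta=0.01):
    """W[i][j] <- W[i][j] + eta * x[i] * x[j]: correlated activity strengthens links."""
    n = len(x)
    return [[W[i][j] + eta * x[i] * x[j] for j in range(n)]
            for i in range(n)]

W = [[0.0, 0.0], [0.0, 0.0]]
print(hebbian_update(W, x=[1.0, 1.0]))  # both units active together: weights grow
```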
Competitive Learning:
Competitive learning is a form of unsupervised
learning in artificial neural networks, in which nodes
compete for the right to respond to a subset of the input
data. There are three basic elements to a competitive
learning rule.
1. A set of neurons that are all the same except for some
randomly distributed synaptic weights, and which therefore
respond differently to a given set of input patterns.
2. A limit imposed on the "strength" of each neuron.
3. A mechanism that permits the neurons to compete for
the right to respond to a given subset of inputs, such that
only one output neuron (or only one neuron per group) is
active (i.e. "on") at a time. The neuron that wins the
competition is called a "winner-take-all" neuron (sketched below).
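A minimal sketch of a winner-take-all update (the distance measure, network size and learning rate are illustrative assumptions):

```python
import numpy as np

def competitive_step(W, x, eta=0.1):
    """Winner-take-all update: only the neuron closest to the input learns."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # each row = one neuron's weights
    W[winner] += eta * (x - W[winner])                 # move the winner towards x
    return winner

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))  # three competing neurons with 2-D weight vectors
print(competitive_step(W, np.array([1.0, 1.0])))
```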
Boltzmann Learning:
Boltzmann learning is statistical in nature, and is derived from
the field of thermodynamics. It is similar to error-correction
learning and is used during supervised training. In this
algorithm, the state of each individual neuron, in addition to
the system output, is taken into account.
In this respect, the Boltzmann learning rule is significantly
slower than the error-correction learning rule. Neural
networks that use Boltzmann learning are called Boltzmann
machines.
Boltzmann learning is similar to an error-correction
learning rule, in that an error signal is used to train the
system in each iteration.
However, instead of a direct difference between the result
value and the desired value, we take the difference
between the probability distributions of the system.
The Perceptron:
Layered feed-forward networks were first studied in the late
1950s under the name perceptrons.
Although networks of all sizes and topologies were
considered, the only effective learning element at the time
was for single-layered networks, so that is where most of the
effort was spent. Today, the name perceptron is used as a
synonym for a single-layer, feed-forward network.
In machine learning, the perceptron is an algorithm
for supervised learning of binary classifiers (functions that
can decide whether an input, represented by a vector of
numbers, belongs to some specific class or not).
It is a type of linear classifier, i.e. a classification algorithm
that makes its predictions based on a linear predictor
function combining a set of weights with the feature vector.
A diagram showing a perceptron updating its linear boundary as
more training examples are added.
Perceptron Convergence:
The Perceptron convergence theorem states that for any
data set which is linearly separable the Perceptron learning
rule is guaranteed to find a solution in a finite number of
steps.
In other words, the Perceptron learning rule is guaranteed
to converge to a weight vector that correctly classifies the
examples provided the training examples are linearly
separable.
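A minimal sketch of the perceptron learning rule on linearly separable data (the toy OR data set and epoch limit are illustrative assumptions):

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Perceptron learning rule; halts once every example is classified correctly."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b >= 0 else 0
            if pred != target:  # misclassified: nudge the boundary
                w += (target - pred) * xi
                b += (target - pred)
                errors += 1
        if errors == 0:         # no mistakes left: converged, as the theorem guarantees
            break
    return w, b

# Linearly separable toy data: logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])
print(train_perceptron(X, y))
```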
Feed Forward Networks:
A feedforward neural network is an artificial neural
network wherein connections between the units do not form
a cycle. As such, it is different from recurrent neural networks.
The feedforward neural network was the first and simplest
type of artificial neural network devised. In this network, the
information moves in only one direction, forward, from the
input nodes, through the hidden nodes (if any) and to the
output nodes. There are no cycles or loops in the network.
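A minimal sketch of a forward pass through such a network (the layer sizes, tanh activation and random weights are illustrative assumptions):

```python
import numpy as np

def forward(x, layers):
    """One forward pass: information flows strictly input -> hidden -> output."""
    a = x
    for W, b in layers:
        a = np.tanh(a @ W + b)  # each layer: weighted sum, then activation
    return a

rng = np.random.default_rng(0)
layers = [(0.5 * rng.normal(size=(3, 4)), np.zeros(4)),  # input (3) -> hidden (4)
          (0.5 * rng.normal(size=(4, 2)), np.zeros(2))]  # hidden (4) -> output (2)
print(forward(np.array([1.0, -0.5, 0.2]), layers))
```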
A Feed-Forward Network:
Back Propagation Algorithm:
At the heart of back-propagation is an expression for the
partial derivative ∂C/∂w of the cost function C with respect
to any weight w (or bias b) in the network. The expression
tells us how quickly the cost changes when we change the
weights and biases.
And while the expression is somewhat complex, it also has a
beauty to it, with each element having a natural, intuitive
interpretation. So back-propagation isn't just a fast
algorithm for learning; it actually gives us detailed insight
into how changing the weights and biases changes the
overall behavior of the network.
Back-propagation has been used successfully for a wide
variety of applications, such as speech or voice recognition,
image pattern recognition, medical diagnosis, and
automatic control.
The algorithm repeats a two-phase cycle: propagation and
weight update. When an input vector is presented to the
network, it is propagated forward through the network,
layer by layer, until it reaches the output layer.
The output of the network is then compared to the desired
output, using a loss function, and an error value is calculated
for each of the neurons in the output layer.
The error values are then propagated backwards, starting
from the output, until each neuron has an associated error
value which roughly represents its contribution to the
original output.
Backpropagation uses these error values to calculate the
gradient of the loss function with respect to the weights in
the network.
In the second phase, this gradient is fed to the optimization
method, which in turn uses it to update the weights, in an
attempt to minimize the loss function.
Back-propagation requires a known, desired output for each
input value in order to calculate the loss function gradient – it
is therefore usually considered to be a supervised
learning method; nonetheless, it is also used in
some unsupervised networks such as auto-encoders.
It is a generalization of the delta rule to multi-layered
feed-forward networks, made possible by using the chain rule to
iteratively compute gradients for each layer. Back-propagation
requires that the activation function used by the artificial
neurons (or "nodes") be differentiable.
Algorithm:
1. Compute the error values for the output units using the
observed error.
2. Starting with the output layer, repeat the following for each
layer in the network, until the earliest hidden layer is reached
(see the sketch below):
I. Propagate the error values back to the previous layer.
II. Update the weights between the two layers.
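A minimal sketch of one such propagate/update cycle for a two-layer network with sigmoid units and squared error (the network size, learning rate and training pair are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, d, W1, W2, eta=0.5):
    """One propagation and weight-update cycle for a two-layer sigmoid network."""
    # Forward phase: propagate the input layer by layer to the output.
    h = sigmoid(W1 @ x)                           # hidden activations
    y = sigmoid(W2 @ h)                           # network output
    # Backward phase: output error, then the error attributed to hidden units.
    delta_out = (y - d) * y * (1 - y)             # chain rule at the output layer
    delta_hid = (W2.T @ delta_out) * h * (1 - h)  # error propagated back one layer
    # Update the weights between each pair of layers (gradient descent on the loss).
    W2 -= eta * np.outer(delta_out, h)
    W1 -= eta * np.outer(delta_hid, x)
    return y

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 2)), rng.normal(size=(1, 4))
x, d = np.array([0.5, -0.5]), np.array([1.0])
for _ in range(200):
    y = backprop_step(x, d, W1, W2)
print(y)  # output approaches the desired value d
```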
Types of Neural Networks:
Applications of neural networks:
• Computer Vision
• Speech Recognition
• Sentiment Analysis
• Natural Language Processing
• Medical Diagnosis
• Conversational Agents
• Autonomous Cars
• Machine Translation
• Stock Market Prediction
• Optical Character Recognition, etc.
Thank you.
And don’t forget to think in neural.
Priyanka Kasture
pkasture2010@gmail.com
GitHub: @priyanka-kasture
Instagram: @priyanka.kasture