2. SYLLABUS
Artificial neural networks
Basic concepts and models
Single layer perceptron networks
Multilayer feed forward
Feed backward network
Supervised and unsupervised learning
Back propagation
3. Artificial neural networks
Artificial neural networks (ANNs), usually simply called neural networks (NNs), are
computing systems inspired by the biological neural networks that constitute
animal brains.
An ANN is based on a collection of connected units or nodes called artificial neurons,
which loosely model the neurons in a biological brain.
Each connection, like the synapses in a biological brain, can transmit a signal to other
neurons. An artificial neuron receives a signal then processes it and can signal neurons
connected to it.
The "signal" at a connection is a real number, and the output of each neuron is
computed by some non-linear function of the sum of its inputs. The connections are
called edges.
Neurons and edges typically have a weight that adjusts as learning proceeds.
4. Cont….
The weight increases or decreases the strength of the signal at a connection.
Neurons may have a threshold such that a signal is sent only if the aggregate
signal crosses that threshold.
Typically, neurons are aggregated into layers.
Different layers may perform different transformations on their inputs.
Signals travel from the first layer (the input layer), to the last layer (the output
layer), possibly after traversing the layers multiple times.
6. Single layer perceptron networks
The single layer perceptron was the first proposed neural network model.
The local memory of the neuron consists of a vector of weights.
A single layer perceptron computes the sum of the input vector, each element
multiplied by the corresponding element of the vector of weights.
The resulting value is then passed as input to an activation function, which
produces the output.
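This computation can be sketched in a few lines; the weights, bias, and step activation below are illustrative assumptions, not fixed values:

```python
import numpy as np

def perceptron(x, w, b):
    """Single layer perceptron: weighted sum of inputs, then a step activation."""
    total = np.dot(w, x) + b      # each input times its corresponding weight, summed
    return 1 if total > 0 else 0  # step activation: fire only above the threshold

x = np.array([1.0, 0.5])         # input vector
w = np.array([0.6, -0.4])        # vector of weights (the neuron's local memory)
print(perceptron(x, w, -0.1))    # 0.6 - 0.2 - 0.1 = 0.3 > 0, so output 1
```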
8. Multilayer feed forward
A feedforward neural network is an artificial neural network wherein connections
between the nodes do not form a cycle.
As such, it is different from its descendant: recurrent neural networks.
The feedforward neural network was the first and simplest type of artificial neural
network devised.
In this network, the information moves in only one direction—forward—from the
input nodes, through the hidden nodes (if any) and to the output nodes.
There are no cycles or loops in the network.
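As a sketch, this one-directional flow is just a loop over the layers; the tanh activation and the example weights below are illustrative assumptions:

```python
import numpy as np

def forward(x, layers):
    """Feedforward pass: the signal moves only forward, layer by layer; no cycles."""
    a = x
    for W, b in layers:         # input -> hidden layer(s) -> output
        a = np.tanh(W @ a + b)  # each layer applies its own transformation
    return a

# Example: 2 inputs -> 3 hidden units -> 1 output, with illustrative weights
layers = [(np.full((3, 2), 0.5), np.zeros(3)),
          (np.full((1, 3), 0.5), np.zeros(1))]
print(forward(np.array([1.0, 1.0]), layers))
```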
10. Feed backward network
Forward Propagation is the way to move from the Input layer (left) to the Output layer
(right) in the neural network.
The process of moving from right to left, i.e. backward from the Output layer to
the Input layer, is called Backward Propagation.
Backward Propagation is the preferred method for adjusting the weights and biases
to minimize the loss function, since it converges faster when we move from the
output layer toward the input layer.
Here, we adjust the weights of the hidden layer closest to the output layer,
recalculate the loss, and, if the error still needs to be reduced, repeat the
process layer by layer toward the input layer.
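The adjust-recalculate-repeat loop can be sketched on a tiny one-hidden-layer network; the network size, the XOR data, the sigmoid activation, and the learning rate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [0.0]])      # XOR targets (toy labeled data)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output weights
lr = 0.5                                        # illustrative learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(5000):
    # forward propagation: input layer -> output layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))      # mean squared error loss

    # backward propagation: start at the layer closest to the output
    d_out = (out - y) * out * (1 - out)         # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)          # propagate the error backward

    # adjust weights and biases to reduce the loss, then repeat
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(losses[-1] < losses[0])  # the loss has fallen over training
```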
12. Supervised and unsupervised learning
Supervised learning, as the name indicates, involves the presence of a supervisor
acting as a teacher.
Basically, supervised learning is when we teach or train the machine using data
that is well labeled, which means the data is already tagged with the correct
answer.
After that, the machine is provided with a new set of examples (data), so that the
supervised learning algorithm analyses the training data (the set of training
examples) and produces a correct outcome from the labeled data.
For instance, suppose you are given a basket filled with different kinds of fruits.
13. Supervised and unsupervised learning
If the object is rounded in shape, has a depression at the top, and is red in
color, then it will be labeled as Apple.
If the object is a long curving cylinder with a green-yellow color, then it will
be labeled as Banana.
Now suppose that, after training on this data, you are given a new, separate
fruit, say a banana from the basket, and are asked to identify it.
Since the machine has already learned from the previous data, it now has to apply
that knowledge: it first classifies the fruit by its shape and color, confirms the
fruit's name as BANANA, and puts it in the Banana category.
Thus the machine learns from the training data (the basket of fruits) and then
applies that knowledge to the test data (the new fruit).
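The fruit example can be mirrored in a toy labeled-data setup; the shape/color encoding and the nearest-neighbour rule below are illustrative assumptions, not part of the original example:

```python
# Training data: each example is (features, correct answer).
# Features are (shape: 0 = rounded, 1 = long cylinder; color: 0 = red, 1 = green-yellow).
training_data = [
    ((0, 0), "Apple"),
    ((1, 1), "Banana"),
]

def classify(features):
    """Label a new example by its closest labeled training example."""
    def distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(training_data, key=lambda item: distance(item[0], features))[1]

print(classify((1, 1)))  # a long, green-yellow fruit -> "Banana"
```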
14. Unsupervised learning
Unsupervised learning is the training of a machine using information that is neither
classified nor labeled and allowing the algorithm to act on that information without
guidance.
Here the task of the machine is to group unsorted information according to
similarities, patterns, and differences without any prior training of data.
Unlike supervised learning, no teacher is provided, which means no training will
be given to the machine.
Therefore the machine is restricted to finding the hidden structure in unlabeled
data by itself.
For instance, suppose the machine is given an image containing both dogs and cats
that it has never seen before.
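Grouping by similarity without any labels can be sketched with a tiny 2-means clustering loop; the two well-separated point groups stand in for "dog" and "cat" image features, and both the data and the procedure are illustrative assumptions:

```python
import numpy as np

# Unlabeled points: two natural groups, but no labels are given anywhere.
points = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.0],
                   [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]])

centers = points[[0, 3]].astype(float)  # initial guesses for the group centers
for _ in range(10):
    # assign each point to its nearest center, then recompute the centers
    dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
    groups = dists.argmin(axis=1)
    centers = np.array([points[groups == k].mean(axis=0) for k in range(2)])

print(groups)  # the machine finds the two groups by similarity alone
```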
15. Back propagation
Back-propagation is the essence of neural net training. It is the practice of fine-tuning
the weights of a neural net based on the error rate (i.e. loss) obtained in the previous
epoch (i.e. iteration).
Proper tuning of the weights ensures lower error rates, making the model reliable by
increasing its generalization.
In machine learning, backpropagation (backprop,[1] BP) is a widely used algorithm for
training feedforward neural networks. Generalizations of backpropagation exist for
other artificial neural networks (ANNs), and for functions generally. These classes of
algorithms are all referred to generically as "backpropagation".[2] In fitting a neural
network, backpropagation computes the gradient of the loss function with respect to
the weights of the network for a single input–output example, and does so efficiently,
unlike a naive direct computation of the gradient with respect to each weight
individually.
16. CONT……
This efficiency makes it feasible to use gradient methods for training multilayer
networks, updating weights to minimize loss; gradient descent, or variants such
as stochastic gradient descent, are commonly used.
The backpropagation algorithm works by computing the gradient of the loss
function with respect to each weight by the chain rule, computing the gradient one
layer at a time, iterating backward from the last layer to avoid redundant
calculations of intermediate terms in the chain rule; this is an example of dynamic
programming.
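The contrast between the two approaches can be sketched on a tiny two-layer network: the backward pass applies the chain rule one layer at a time, reusing the intermediate error term, and the result is checked against the naive computation that perturbs each weight individually. The network and its numbers are illustrative assumptions:

```python
import numpy as np

x = np.array([0.5, -0.2])
W1 = np.array([[0.1, 0.4], [-0.3, 0.2]])   # input -> hidden weights
W2 = np.array([0.7, -0.5])                 # hidden -> output weights
y = 1.0                                    # target value

def loss(W1, W2):
    h = np.tanh(W1 @ x)                    # hidden layer
    out = W2 @ h                           # output layer
    return 0.5 * (out - y) ** 2

# Backward pass: chain rule, layer by layer, reusing the shared term d_out.
h = np.tanh(W1 @ x)
out = W2 @ h
d_out = out - y                            # dL/d(out), computed once
grad_W2 = d_out * h                        # chain rule at the output layer
d_h = d_out * W2 * (1 - h ** 2)            # error propagated through tanh
grad_W1 = np.outer(d_h, x)                 # chain rule at the hidden layer

# Naive direct computation: perturb each weight of W1 individually.
eps = 1e-6
num_W1 = np.zeros_like(W1)
for i in range(2):
    for j in range(2):
        Wp = W1.copy(); Wp[i, j] += eps
        num_W1[i, j] = (loss(Wp, W2) - loss(W1, W2)) / eps

print(np.allclose(grad_W1, num_W1, atol=1e-4))  # True: both agree
```

The backward pass touches each layer once, while the naive method re-runs the whole network per weight, which is exactly why backpropagation makes gradient training feasible.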