Neural Network
Objective:
Introduce the concept of biological neurons and their relation to artificial neurons.
1. Basic foundations of neural networks
2. The math behind neural networks
3. The perceptron
4. Single-layer perceptron
5. Multilayer perceptron
6. Examples
3.
• Neural Networks fall under CLO-2 and CLO-3:
Understand key concepts in the field of artificial intelligence
Implement artificial intelligence techniques and case studies
4.
Example of Perceptron with real values (Source: 6.S191)
• Compute (weight * input) for each input
• Add the results together
• Apply the activation function (compute the non-linearity)
• Forward propagation of information through the perceptron
• The non-linearity g is the sigmoid activation function
• The weights and bias are defined with concrete numbers (3, -2)
• The inputs are x1, x2
• The perceptron takes 3 steps to produce its output
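The three steps above can be sketched in plain Python. The weights (3, -2) are taken from the slide; the bias value of 1 and the sample inputs are assumptions for illustration only.

```python
import math

def sigmoid(z):
    # The non-linearity g: squashes any real z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def perceptron(x1, x2, w1=3.0, w2=-2.0, b=1.0):
    # Step 1: compute (weight * input) for each input
    # Step 2: add the results together, plus the bias
    z = b + w1 * x1 + w2 * x2
    # Step 3: apply the activation function (the non-linearity)
    return sigmoid(z)

y = perceptron(-1.0, 2.0)  # assumed sample inputs
```

Whatever real-valued inputs are plugged in, the output is always a number between 0 and 1 because of the sigmoid.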
5.
Example of Perceptron with real values (Source: 6.S191)
• A 2-D line arises because there are two inputs
• The plot is basically the space of all possible inputs the neural network could see
• The 2-D line acts as a decision boundary (a plane) separating the two components (x1, x2) of our space
• Neuron in Neural Network
• The non-linear sigmoid function g
• The argument of g is just a 2-D line
• How do we plot the 2-D line as a decision boundary?
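One way to answer this: with a sigmoid non-linearity, the decision boundary is the set of inputs where z = 0 (equivalently, where the output crosses 0.5), so the line can be written in closed form. A sketch reusing the weights (3, -2) from the earlier slide, with an assumed bias of 1:

```python
def boundary_x2(x1, w1=3.0, w2=-2.0, b=1.0):
    # The decision boundary is where z = b + w1*x1 + w2*x2 = 0.
    # Solving for x2 gives the 2-D line to plot against x1.
    return -(b + w1 * x1) / w2

# A few points on the line; joining them draws the decision boundary
points = [(x1, boundary_x2(x1)) for x1 in (-1.0, 0.0, 1.0)]
```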
6.
Example of Perceptron with real values (Source: 6.S191)
• The two inputs, -1 and 2, fall on one side of the plane with a certain specific output
• The boundary is not only a single plane but has a directional component: which side the input falls on matters
• After plugging in the components, the result is positive and passes through the non-linear component
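The directional component can be seen from the sign of z before the non-linearity: it tells which side of the plane an input lies on (the sigmoid output itself is always a positive number). A sketch using the slide's inputs (-1, 2) and weights (3, -2), with an assumed bias of 1:

```python
def pre_activation(x1, x2, w1=3.0, w2=-2.0, b=1.0):
    # z > 0 and z < 0 correspond to the two sides of the decision plane;
    # z = 0 lies exactly on the boundary line.
    return b + w1 * x1 + w2 * x2

z = pre_activation(-1.0, 2.0)
side = "side A (z > 0)" if z > 0 else "side B (z <= 0)"
```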
7.
Example of Perceptron with real values (Source: 6.S191)
• The activation function (a threshold/sigmoid function) lies over the decision boundary and controls the move from one side to the other
• A different result would put you on the other side of the plane
• The 2-D space is easy to visualize (because the problem's data points are in 2-D)
• What happens if the data points are not in 2-D, as in most real-world problems (an image represented as pixels has millions of dimensions)?
8.
• What happens if the data points are not in 2-D, as in most real-world problems (an image represented as pixels has millions of dimensions)?
• Wait for the details.
Simplified Perceptron Revision (How a perceptron propagates information)
[Diagram: inputs × weights → sum → non-linear activation function → output]
• Core piece of information: 3 steps
1. Dot product
2. Add the bias
3. Non-linearity (activation function)
• Repeat for each perceptron
• z, the result of the dot product plus the bias, is passed to the non-linear activation function
11.
Multi-Layer Output Perceptron
(Design & Build a multi-layer output perceptron)
• Because all inputs are densely connected to all outputs, these layers are called Dense layers
• Two output functions, y1 and y2
• Two perceptrons, z1 and z2; each perceptron controls the output for its associated piece of the input
• Both perceptrons receive the same input
Using this mathematical understanding, we can start to build a neural network from scratch.
12.
How to Implement a Dense Layer in Python
• Initialize the two important components of the layer: the weight vector and the bias vector (the output is the activation of the bias plus the dot product of weights & inputs)
• How do we forward-propagate the information? 3 important steps: define the parameters of the layer, compute the weighted sum, and apply the non-linear activation function
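A minimal sketch of such a dense layer in plain Python, mirroring the three steps. The class name, initialization scheme, and the sigmoid choice are assumptions, not the slide's exact code:

```python
import math
import random

class Dense:
    """A dense (fully connected) layer: every input feeds every output."""

    def __init__(self, n_inputs, n_outputs):
        # Parameters of the layer: a weight matrix and a bias vector
        self.W = [[random.gauss(0.0, 1.0) for _ in range(n_inputs)]
                  for _ in range(n_outputs)]
        self.b = [0.0] * n_outputs

    def forward(self, x):
        outputs = []
        for w_row, b in zip(self.W, self.b):
            # Steps 1 and 2: dot product of weights & inputs, plus the bias
            z = b + sum(wi * xi for wi, xi in zip(w_row, x))
            # Step 3: apply the non-linear (sigmoid) activation function
            outputs.append(1.0 / (1.0 + math.exp(-z)))
        return outputs

layer = Dense(n_inputs=2, n_outputs=2)  # two inputs, two outputs y1, y2
y = layer.forward([1.0, -1.0])
```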
13.
Multi-Layer Output Perceptron
(don’t worry about the code, just call the function)
• Because all inputs are densely connected to all outputs, these layers are called Dense layers
• Two output functions, y1 and y2
• Two perceptrons, z1 and z2; each perceptron controls the output for its associated piece of the input
• Both perceptrons receive the same input
This function call replicates the code from the previous slide.
14.
Single-Layer Neural Network
• How do we make it simpler?
• The input is transformed into some new dimensional space (closer to the value we want to predict)
• This transformation is basically what the neurons learn during training: how to transform the input into the output
15.
Single-Layer Neural Network
• Just focus on one neuron, z2
• Repeat the three steps (dot product, adding the bias, and applying the non-linearity)
• All the other neurons, z1, z3, z4, have the same story with different sets of weights
• The input is transformed into some new dimensional space (closer to the value we want to predict)
• This transformation is basically what the neurons learn during training: how to transform the input into the output
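The per-neuron story above can be sketched as one hidden layer: every neuron z1..z4 runs the same three steps with its own row of weights. The weight values below are made-up for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(x, W, b):
    # Each neuron repeats the same three steps with its own weights
    return [sigmoid(bi + sum(wi * xi for wi, xi in zip(row, x)))
            for row, bi in zip(W, b)]

# Hypothetical weights: 4 hidden neurons (z1..z4) over 2 inputs
W_hidden = [[0.5, -0.5], [1.0, 1.0], [-1.0, 0.5], [0.2, 0.3]]
b_hidden = [0.0, 0.1, -0.1, 0.0]
hidden = layer_forward([1.0, 2.0], W_hidden, b_hidden)  # the transformed input
```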
16.
Multi-Layer Output Perceptron (Neuron) (simplified version)
• Simplified diagrams omit the lines (weights, bias, and transformation)
• But in fact everything is still supported by the mathematical equations
• All the neurons, z1, z3, z4, have the same story with different sets of weights
• Stacking these layers on top of each other gives a Sequential model
• Forward propagation flows not just through one neuron but from one layer to the next
• The output of one layer becomes the input to the next layer
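Stacking can be sketched by composing dense layers: each layer's output becomes the next layer's input. The layer sizes and weight values below are assumptions for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense_forward(x, W, b):
    # One dense layer: dot product + bias, then the non-linearity
    return [sigmoid(bi + sum(wi * xi for wi, xi in zip(row, x)))
            for row, bi in zip(W, b)]

def sequential_forward(x, layers):
    # Forward propagation layer by layer: the output of one layer
    # becomes the input to the next layer
    for W, b in layers:
        x = dense_forward(x, W, b)
    return x

# Hypothetical stack: 2 inputs -> 3 hidden neurons -> 1 output
layers = [
    ([[1.0, -1.0], [0.5, 0.5], [-0.5, 1.0]], [0.0, 0.0, 0.1]),
    ([[1.0, -1.0, 0.5]], [0.0]),
]
y = sequential_forward([1.0, 2.0], layers)
```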