Back propagation
N Nasurudeen Ahamed,
Assistant Professor,
CSE
Introduction - Back Propagation
• Back propagation is a supervised learning technique for training neural
networks.
• It calculates the gradient of the cost function with respect to the network's weights.
• Gradient descent optimization algorithms use this gradient to adjust the weights of the neurons.
• It is also known as backward propagation of errors, because the error is calculated and
distributed back through the layers of the network.
• Goal of back propagation: optimize the weights.
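As a one-line summary of the update that gradient descent applies to each weight (a standard textbook form, with η the learning rate, not a formula stated on these slides):

    w_new = w_old − η · ∂E_total/∂w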
Back Propagation
• Activation Function: a decision-making function.
• Its main purpose is to convert the input signal of a node in an ANN into an
output signal.
Back Propagation
• Variants of Activation Function (a minimal sketch of each follows this list):
• Linear Function
• Sigmoid Function
• Tanh Function (Hyperbolic Tangent function)
• ReLU Function (Rectified Linear Unit)
• Softmax Function
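The Python sketch below shows one common form of each listed function; these are textbook definitions, not code taken from the slides.

    import numpy as np

    def linear(x):
        return x                          # identity: output equals input

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))   # squashes input into (0, 1)

    def tanh(x):
        return np.tanh(x)                 # squashes input into (-1, 1)

    def relu(x):
        return np.maximum(0.0, x)         # zero for negatives, identity otherwise

    def softmax(x):
        e = np.exp(x - np.max(x))         # shift by the max for numerical stability
        return e / e.sum()                # normalize a vector into probabilities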
Back Propagation
• Bias: the bias node is considered a "pseudo-input" to each neuron in the
hidden and output layers.
• It is used to overcome problems that arise when the values of an input pattern
are zero: if an input pattern has all-zero values, the neural network could not
be trained without a bias node (see the sketch below).
• Historically, the bias (threshold) activation function was the first to be proposed.
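A small sketch of why the bias matters for an all-zero input pattern; the weight and bias values below are illustrative assumptions, not values from the slides.

    import numpy as np

    def neuron(x, w, b=0.0):
        z = np.dot(w, x) + b              # weighted sum plus bias
        return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation

    x = np.array([0.0, 0.0])              # an all-zero input pattern
    w = np.array([0.4, -0.7])             # illustrative weights; w . x is 0 regardless

    print(neuron(x, w))                   # always 0.5: no weight change can alter this
    print(neuron(x, w, b=1.5))            # the bias shifts the output, so learning is possible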
Back Propagation
• Goal: optimize the weights so that the neural network learns how to correctly
map arbitrary inputs to outputs.
Back Propagation
• Forward Pass: inputs are 0.05 and 0.10.
Back Propagation
• How we calculate the total net input: each neuron sums its weighted inputs plus
a bias, e.g. net_h1 = w1*i1 + w2*i2 + b1*1.
• Apply the activation function (here, the logistic sigmoid) to get the output:
out_h1 = 1 / (1 + e^(-net_h1)). A sketch of the full forward pass follows.
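A minimal forward-pass sketch for a 2-2-2 sigmoid network. The inputs 0.05 and 0.10 come from the slide; the network shape and all weight and bias values are assumptions chosen for illustration (they match the widely circulated step-by-step example this walkthrough appears to follow).

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    i1, i2 = 0.05, 0.10                        # inputs from the slide
    w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30    # assumed input-to-hidden weights
    w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55    # assumed hidden-to-output weights
    b1, b2 = 0.35, 0.60                        # assumed biases

    # total net input, then activation, for each hidden neuron
    net_h1 = w1*i1 + w2*i2 + b1
    net_h2 = w3*i1 + w4*i2 + b1
    out_h1, out_h2 = sigmoid(net_h1), sigmoid(net_h2)

    # repeat the same process for the output layer
    net_o1 = w5*out_h1 + w6*out_h2 + b2
    net_o2 = w7*out_h1 + w8*out_h2 + b2
    out_o1, out_o2 = sigmoid(net_o1), sigmoid(net_o2)

    print(out_o1, out_o2)                      # roughly 0.7514 and 0.7729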
Back Propagation
• Calculating the Total Error: compute the error of each output neuron with the
squared error function, E = ½(target − output)², and sum them to get the total
error (see the sketch below).
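Continuing the forward-pass sketch above (reusing out_o1 and out_o2); the target values 0.01 and 0.99 are assumed for illustration.

    target_o1, target_o2 = 0.01, 0.99            # assumed training targets

    E_o1 = 0.5 * (target_o1 - out_o1) ** 2       # squared error of output neuron 1
    E_o2 = 0.5 * (target_o2 - out_o2) ** 2       # squared error of output neuron 2
    E_total = E_o1 + E_o2                        # roughly 0.2984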
Back Propagation
• Backwards Pass: our goal is to minimize the error for each output neuron and
for the network as a whole.
• How much does a change in w5 affect the total error?
• This is the gradient with respect to w5, ∂E_total/∂w5, obtained via the chain rule.
• To decrease the error, subtract this value (scaled by a learning rate) from the
current weight, as sketched below.
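A chain-rule sketch of the gradient with respect to w5, continuing the variables defined above; the learning rate eta is an assumed value.

    dE_dout_o1   = out_o1 - target_o1            # derivative of ½(t - o)² w.r.t. o
    dout_dnet_o1 = out_o1 * (1.0 - out_o1)       # derivative of the sigmoid
    dnet_o1_dw5  = out_h1                        # since net_o1 = w5*out_h1 + w6*out_h2 + b2

    grad_w5 = dE_dout_o1 * dout_dnet_o1 * dnet_o1_dw5   # roughly 0.0822

    eta = 0.5                                    # assumed learning rate
    w5_new = w5 - eta * grad_w5                  # roughly 0.3589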
Back Propagation
• Next, we continue the backwards pass by calculating new values for w1, w2, w3,
and w4, propagating each output neuron's error back through the hidden layer
(sketched below).
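For the hidden-layer weights, the error reaches each hidden neuron through every output neuron, so the output deltas are summed. A sketch for w1, continuing the variables above; w2, w3, and w4 follow the same pattern.

    delta_o1 = (out_o1 - target_o1) * out_o1 * (1.0 - out_o1)   # output neuron 1 delta
    delta_o2 = (out_o2 - target_o2) * out_o2 * (1.0 - out_o2)   # output neuron 2 delta

    dE_dout_h1 = delta_o1 * w5 + delta_o2 * w7   # error flowing back to hidden neuron h1

    grad_w1 = dE_dout_h1 * out_h1 * (1.0 - out_h1) * i1
    w1_new  = w1 - eta * grad_w1                 # roughly 0.1498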
Back Propagation
• Advantages:
• It is simple, fast, and easy to program.
• Apart from the number of inputs, it has no parameters to tune.
• It requires no prior knowledge about the network.
• It is flexible.
• It is a standard approach and works efficiently.
• It does not require the user to learn special functions.
Back Propagation
• Disadvantages:
• Backpropagation can be sensitive to noisy data and irregularities.
• Its performance is highly dependent on the input data.
• It can need excessive time for training.
• It may need a matrix-based approach rather than a mini-batch approach.
Back Propagation
• Applications:
• Training a neural network to pronounce each letter of a word or sentence.
• It is used in the field of speech recognition.
• It is used in the fields of character and face recognition.
