Neural Network and MLP
Partha Pratim Deb
M.Tech (CSE), 1st year
Netaji Subhash Engineering College
•   Biological inspiration vs. artificial neural networks
•   Why use neural networks?
•   Neural network applications
•   Learning strategies & learning techniques
•   Generalization types
•   Artificial neurons
•   MLP neural networks and tasks
•   Learning mechanism used by the multilayer perceptron
•   Activation functions
•   Multi-layer perceptron example for approximation
The McCulloch-Pitts model
[Figure: biological neurotransmission alongside the artificial neuron model]
Learning strategies:
1. Supervised learning
2. Unsupervised learning
[Figure: scatter of class-A and class-B samples, labeled in the supervised case and unlabeled in the unsupervised case]

Supervised learning:
•   It is based on a labeled training set.
•   The class of each piece of data in the training set is known.
•   Class labels are pre-determined and provided in the training phase ("the class of the data is defined here").
•   Tasks performed: classification, pattern recognition.
•   NN models: perceptron, feed-forward NN.

Unsupervised learning:
•   Task performed: clustering ("the class of the data is not defined here").
•   NN model: self-organizing maps.
Generalization types:
1. Linear
2. Nonlinear
Nonlinear generalizations of the McCulloch-Pitts neuron take the form y = f(x, w):

Sigmoidal neuron:
    y = 1 / (1 + e^(-w^T x - a))

Gaussian neuron:
    y = e^(-||x - w||^2 / (2a^2))
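The two neuron models above can be sketched in a few lines of Python. This is an illustrative sketch (the function names `sigmoidal_neuron` and `gaussian_neuron` are mine, not from the slides); it computes exactly the formulas given.

```python
import math

def sigmoidal_neuron(x, w, a):
    """Sigmoidal neuron: y = 1 / (1 + exp(-(w^T x + a)))."""
    net = sum(wi * xi for wi, xi in zip(w, x)) + a
    return 1.0 / (1.0 + math.exp(-net))

def gaussian_neuron(x, w, a):
    """Gaussian neuron: y = exp(-||x - w||^2 / (2 a^2))."""
    sq_dist = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-sq_dist / (2.0 * a * a))
```

Note the different roles of w: the sigmoidal neuron treats it as a projection direction, while the Gaussian neuron treats it as a center, responding most strongly when x is close to w.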
MLP = multi-layer perceptron

Perceptron:
    y_out = w^T x

MLP neural network (a first hidden layer of three sigmoidal units, a second layer of two sigmoidal units, and a linear output):

    y1_k = 1 / (1 + e^(-w1_k^T x - a1_k)),   k = 1, 2, 3
    y1 = (y1_1, y1_2, y1_3)^T

    y2_k = 1 / (1 + e^(-w2_k^T y1 - a2_k)),  k = 1, 2
    y2 = (y2_1, y2_2)^T

    y_out = Σ_{k=1}^{2} w3_k y2_k = w3^T y2
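The forward pass above can be sketched directly in Python. This is a minimal sketch under the slide's architecture (3 sigmoidal units, then 2 sigmoidal units, then a linear output); the names `mlp_forward` and `dot` are illustrative.

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mlp_forward(x, W1, a1, W2, a2, w3):
    # First hidden layer: y1_k = sigmoid(w1_k . x + a1_k), k = 1..3
    y1 = [sigmoid(dot(wk, x) + ak) for wk, ak in zip(W1, a1)]
    # Second hidden layer: y2_k = sigmoid(w2_k . y1 + a2_k), k = 1..2
    y2 = [sigmoid(dot(wk, y1) + ak) for wk, ak in zip(W2, a2)]
    # Linear output: y_out = w3^T y2
    return dot(w3, y2)

# With all weights and biases zero, every sigmoid outputs 0.5,
# so with w3 = [1, 1] the network output is 0.5 + 0.5 = 1.0.
out = mlp_forward([0.3, -0.7],
                  [[0.0, 0.0]] * 3, [0.0] * 3,
                  [[0.0] * 3] * 2, [0.0] * 2,
                  [1.0, 1.0])
```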
• control
• classification
• prediction
• approximation

These can all be reformulated in general as FUNCTION APPROXIMATION tasks.

Approximation: given a set of values of a function g(x), build a neural network that approximates the g(x) values for any input x.
The activation function "curves" (nonlinearly transforms) the weighted input, letting the network capture the variation in the data.
Sigmoidal (logistic) function - common in MLPs:

    g(a_i(t)) = 1 / (1 + exp(-k a_i(t))) = 1 / (1 + e^(-k a_i(t)))

where k is a positive constant. The sigmoidal function gives a value in the range 0 to 1. Alternatively one can use tanh(ka), which has the same shape but a range of -1 to 1. This is the input-output function of a neuron (rate-coding assumption).

Note: when net = 0, g = 0.5.
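A small Python sketch of the logistic activation, checking the two properties stated above: the output at net = 0 is 0.5 regardless of k, and tanh(ka) is the same curve rescaled to (-1, 1). The helper name `logistic` is illustrative.

```python
import math

def logistic(a, k=1.0):
    """Sigmoidal (logistic) activation: g(a) = 1 / (1 + exp(-k a)), range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-k * a))

# When the net input is 0, the output is 0.5 regardless of k.
midpoint = logistic(0.0, k=3.7)

# tanh(k a) has the same shape but range (-1, 1); the two are related by
# tanh(k a) = 2 * logistic(2 a, k) - 1.
a, k = 0.7, 1.5
gap = abs(2.0 * logistic(2.0 * a, k) - 1.0 - math.tanh(k * a))
```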
Multi-Layer Perceptron example for approximation

Algorithm (sequential):

1. Apply an input vector and calculate all activations, a and u.
2. Evaluate Δ_k for all output units via:
       Δ_i(t) = (d_i(t) - y_i(t)) g'(a_i(t))
   (Note the similarity to the perceptron learning algorithm.)
3. Backpropagate the Δ_k's to get error terms δ for the hidden layers using:
       δ_i(t) = g'(u_i(t)) Σ_k Δ_k(t) w_ki
4. Evaluate weight changes using:
       v_ij(t+1) = v_ij(t) + η δ_i(t) x_j(t)
       w_ij(t+1) = w_ij(t) + η Δ_i(t) z_j(t)
Here a simple identity activation function is used with an example to understand how the neural network works. Once weight changes are computed for all units, the weights are updated at the same time (biases are included as weights here). An example, a 2-2-2 network:

First-layer weights:  v11 = -1, v12 = 0, v21 = 0, v22 = 1 (biases v10 = 1, v20 = 1)
Second-layer weights: w11 = 1, w12 = 0, w21 = -1, w22 = 1

Input [0 1] with target [1 0].
Identity activation function (i.e. g(a) = a).
All biases set to 1 (not drawn, for clarity). Learning rate η = 0.1.


Forward pass. Calculate the 1st-layer activations:

    u1 = -1×0 + 0×1 + 1 = 1
    u2 =  0×0 + 1×1 + 1 = 2
Calculate the first-layer outputs by passing the activations through the activation function:

    z1 = g(u1) = 1
    z2 = g(u2) = 2
Calculate the 2nd-layer outputs (weighted sums through the activation function):

    y1 = a1 =  1×1 + 0×2 + 1 = 2
    y2 = a2 = -1×1 + 1×2 + 1 = 2
Backward pass. Target = [1, 0], so d1 = 1 and d2 = 0. With the identity activation g'(a) = 1, so:

    Δ1 = (d1 - y1) = 1 - 2 = -1
    Δ2 = (d2 - y2) = 0 - 2 = -2
Calculate the weight changes for the output-layer weights w (cf. perceptron learning), using the products Δ_i z_j:

    Δ1 z1 = -1,  Δ1 z2 = -2
    Δ2 z1 = -2,  Δ2 z2 = -4
The new output-layer weights (w_ij + η Δ_i z_j) will be:

    w11 = 1 + 0.1×(-1) = 0.9      w12 = 0 + 0.1×(-2) = -0.2
    w21 = -1 + 0.1×(-2) = -1.2    w22 = 1 + 0.1×(-4) = 0.6
But before changing the first-layer weights we must calculate the δ's (using the old weights w, before the update is applied):

    Δ1 w11 = -1,  Δ2 w21 = 2
    Δ1 w12 = 0,   Δ2 w22 = -2
The Δ's propagate back:

    δ1 = Δ1 w11 + Δ2 w21 = -1 + 2 = 1
    δ2 = Δ1 w12 + Δ2 w22 = 0 - 2 = -2
...and are multiplied by the inputs (x1 = 0, x2 = 1):

    δ1 x1 = 0,  δ1 x2 = 1
    δ2 x1 = 0,  δ2 x2 = -2
Finally, change the first-layer weights (v_ij + η δ_i x_j):

    v11 = -1 + 0.1×0 = -1     v12 = 0 + 0.1×1 = 0.1
    v21 = 0 + 0.1×0 = 0       v22 = 1 + 0.1×(-2) = 0.8

Note that the weights multiplied by the zero input are unchanged, as they do not contribute to the error. The biases have also been changed (not shown).
Now go forward again (normally a new input vector would be used). The new 1st-layer outputs are:

    z1 = 1.2
    z2 = 1.6
And the new network outputs are:

    y1 = 1.66
    y2 = 0.32

The outputs are now closer to the target value [1, 0].
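The whole worked example can be re-run in a few lines of Python. This is a sketch (not part of the original slides): a 2-2-2 network with identity activation g(a) = a, input x = [0, 1], target d = [1, 0], all biases starting at 1, and η = 0.1, exactly as above.

```python
eta = 0.1
x, d = [0.0, 1.0], [1.0, 0.0]
v  = [[-1.0, 0.0], [0.0, 1.0]]   # v[i][j]: weight to hidden unit i from input j
v0 = [1.0, 1.0]                  # hidden-layer biases
w  = [[1.0, 0.0], [-1.0, 1.0]]   # w[i][j]: weight to output unit i from hidden j
w0 = [1.0, 1.0]                  # output-layer biases

def forward(x):
    # Identity activation, so outputs equal activations.
    z = [sum(v[i][j] * x[j] for j in range(2)) + v0[i] for i in range(2)]
    y = [sum(w[i][j] * z[j] for j in range(2)) + w0[i] for i in range(2)]
    return z, y

z, y = forward(x)                                   # z = [1, 2], y = [2, 2]
Delta = [d[i] - y[i] for i in range(2)]             # output deltas: [-1, -2]
delta = [sum(Delta[k] * w[k][i] for k in range(2))  # backpropagate through the
         for i in range(2)]                         # OLD weights: [1, -2]

for i in range(2):
    for j in range(2):
        w[i][j] += eta * Delta[i] * z[j]
        v[i][j] += eta * delta[i] * x[j]
    w0[i] += eta * Delta[i]   # biases are updated too
    v0[i] += eta * delta[i]

z, y = forward(x)   # z ≈ [1.2, 1.6], y ≈ [1.66, 0.32]: closer to the target [1, 0]
```

Running the update once reproduces every number on the slides, including the final outputs of approximately 1.66 and 0.32.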
Neural network applications

Pattern classification:
• Remote sensing and image classification
• Handwritten character/digit recognition

Optimization:
• Traveling salesperson problem
• Multiprocessor scheduling and task assignment

Control, time series, estimation:
• Machine control / robot manipulation
• Financial/scientific/engineering time-series forecasting

Real-world application examples:
• Hospital patient stay-length prediction
• Natural gas price prediction
• Artificial neural networks are inspired by the learning
processes that take place in biological systems.
• Learning can be perceived as an optimisation process.
• Biological neural learning happens by the modification
of the synaptic strength. Artificial neural networks learn
in the same way.
• The synapse strength modification rules for artificial
neural networks can be derived by applying
mathematical optimisation methods.
• Learning tasks of artificial neural networks = function
approximation tasks.
• The optimisation is done with respect to the approximation
error measure.
• In general it is enough to have a single hidden layer neural
network (MLP, RBF or other) to learn the approximation of
a nonlinear function. In such cases general optimisation can
be applied to find the change rules for the synaptic weights.
References:
1. Artificial Neural Networks, Simon Haykin
2. Artificial Neural Networks, Yegnanarayana
3. Artificial Neural Networks, Zurada
4. Hornik, Stinchcombe and White's conclusion (1989): Hornik K., Stinchcombe M. and White H., "Multilayer feedforward networks are universal approximators", Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
5. Kumar, P. and Walia, E. (2006), "Cash Forecasting: An Application of Artificial Neural Networks in Finance", International Journal of Computer Science and Applications 3 (1): 61-77.
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
RTTS
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
 

Recently uploaded (20)

IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 

Neural Network and MLP

  • 1. Partha Pratim Deb, M.Tech (CSE), 1st year, Netaji Subhash Engineering College
  • 2. Biological inspiration vs. artificial neural networks • Why use neural networks? • Neural network applications • Learning strategy and learning techniques • Generalization types • Artificial neurons • MLP neural networks and tasks • Learning mechanism used by the multilayer perceptron • Activation functions • Multi-layer perceptron example for approximation
  • 3. The McCulloch-Pitts model of neurotransmission
  • 4.
  • 7. [Figure: scattered class A and class B data points]
  • 8. [Figure: class A and class B data points shown in groups]
  • 9. Supervised learning is based on a labeled training set: the class of each piece of data in the training set is known, and class labels are pre-determined and provided in the training phase. [Figure: data points assigned to the ε and λ classes]
  • 10. Supervised learning: tasks performed are classification and pattern recognition; NN models: Perceptron, feed-forward NN ("class of data is defined here"). Unsupervised learning: task performed is clustering; NN model: Self-Organizing Maps ("class of data is not defined here").
  • 12.
  • 13.
  • 14. Nonlinear generalization of the McCulloch-Pitts neuron: y = f(x, w). Sigmoidal neuron: y = 1 / (1 + exp(-(w^T x) - a)). Gaussian neuron: y = exp(-||x - w||^2 / (2a^2)).
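These two neuron types can be sketched in a few lines (a minimal illustration assuming NumPy; the function names and sample values are our own, not from the slides):

```python
import numpy as np

def sigmoidal_neuron(x, w, a):
    # y = 1 / (1 + exp(-(w^T x) - a)); output lies in (0, 1)
    return 1.0 / (1.0 + np.exp(-(w @ x) - a))

def gaussian_neuron(x, w, a):
    # y = exp(-||x - w||^2 / (2 a^2)); peaks at 1 when x == w
    return np.exp(-np.linalg.norm(x - w) ** 2 / (2.0 * a ** 2))

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
print(sigmoidal_neuron(x, w, a=0.0))  # some value in (0, 1)
print(gaussian_neuron(x, w, a=1.0))   # some value in (0, 1]
```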
  • 15.
  • 16. MLP = multi-layer perceptron. Perceptron: y_out = w^T x. MLP neural network: first layer y1_k = 1 / (1 + exp(-(w1_k)^T x - a1_k)), k = 1, 2, 3, with y1 = (y1_1, y1_2, y1_3)^T; second layer y2_k = 1 / (1 + exp(-(w2_k)^T y1 - a2_k)), k = 1, 2, with y2 = (y2_1, y2_2)^T; output y_out = Σ_{k=1..2} w3_k y2_k = (w3)^T y2.
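A minimal NumPy sketch of this forward pass, with a 3-unit and then a 2-unit sigmoidal layer feeding a linear output (the weight shapes and random sample values are illustrative assumptions, not from the slides):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mlp_forward(x, W1, a1, W2, a2, w3):
    """Two sigmoidal layers followed by a weighted-sum output, as on the slide."""
    y1 = sigmoid(W1 @ x + a1)   # first layer: 3 sigmoidal units
    y2 = sigmoid(W2 @ y1 + a2)  # second layer: 2 sigmoidal units
    return w3 @ y2              # output: linear combination of y2

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1, a1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, a2 = rng.normal(size=(2, 3)), rng.normal(size=2)
w3 = rng.normal(size=2)
print(mlp_forward(x, W1, a1, W2, a2, w3))  # a single scalar output
```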
  • 17. Control, classification, prediction, and approximation problems can all be reformulated in general as FUNCTION APPROXIMATION tasks. Approximation: given a set of values of a function g(x), build a neural network that approximates the g(x) values for any input x.
  • 18.
  • 19. The activation function curves (shapes) the input data, letting the neuron capture the variation in it.
  • 20. Sigmoidal (logistic) function, common in MLPs: g(a_i(t)) = 1 / (1 + exp(-k a_i(t))), where k is a positive constant. The sigmoidal function gives a value in the range 0 to 1. Alternatively one can use tanh(ka), which has the same shape but ranges from -1 to 1. This is the input-output function of a neuron (rate-coding assumption). Note: when net = 0, f = 0.5.
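The logistic function and its relationship to tanh can be checked directly (a small sketch; `k` is the slide's positive constant, and the sample values are arbitrary):

```python
import math

def logistic(a, k=1.0):
    # g(a) = 1 / (1 + exp(-k a)); output in the range (0, 1)
    return 1.0 / (1.0 + math.exp(-k * a))

# When the net input is 0, the output is 0.5, as the slide notes.
print(logistic(0.0))  # 0.5

# tanh(k a) has the same shape but ranges over (-1, 1); it is a
# rescaled logistic: tanh(k a) = 2 * logistic(a, 2k) - 1.
a, k = 0.7, 1.5
print(abs(math.tanh(k * a) - (2.0 * logistic(a, 2.0 * k) - 1.0)) < 1e-12)  # True
```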
  • 21. Multi-Layer Perceptron example for approximation
  • 22. Algorithm (sequential): 1. Apply an input vector and calculate all activations, a and u. 2. Evaluate Δ_i for all output units via Δ_i(t) = (d_i(t) - y_i(t)) g′(a_i(t)) (note the similarity to the perceptron learning algorithm). 3. Backpropagate the Δ_k's to get the error terms δ for the hidden layers using δ_i(t) = g′(u_i(t)) Σ_k Δ_k(t) w_ki. 4. Evaluate the weight changes using v_ij(t+1) = v_ij(t) + η δ_i(t) x_j(t) and w_ij(t+1) = w_ij(t) + η Δ_i(t) z_j(t).
  • 23. Here a simple identity activation function is used in a worked example to show how the neural network learns.
  • 24. Once weight changes are computed for all units, the weights are updated at the same time (biases are included as weights here). An example network: first-layer weights v11 = -1, v21 = 0, v12 = 0, v22 = 1 with biases v10 = 1, v20 = 1; second-layer weights w11 = 1, w21 = -1, w12 = 0, w22 = 1. The input is [0 1] with target [1 0], using the identity activation function (i.e. g(a) = a).
  • 25. All biases are set to 1 (not drawn, for clarity). Learning rate η = 0.1. The input is [0 1] with target [1 0].
  • 26. Forward pass. Calculate the first-layer activations: u1 = -1×0 + 0×1 + 1 = 1; u2 = 0×0 + 1×1 + 1 = 2.
  • 27. Calculate the first-layer outputs by passing the activations through the activation function: z1 = g(u1) = 1; z2 = g(u2) = 2.
  • 28. Calculate the second-layer outputs (weighted sums through the activation function): y1 = a1 = 1×1 + 0×2 + 1 = 2; y2 = a2 = -1×1 + 1×2 + 1 = 2.
  • 29. Backward pass. Target = [1, 0], so d1 = 1 and d2 = 0. Thus Δ1 = (d1 - y1) = 1 - 2 = -1 and Δ2 = (d2 - y2) = 0 - 2 = -2.
  • 30. Calculate the weight-change terms for the second-layer weights (cf. perceptron learning): Δ1 z1 = -1, Δ1 z2 = -2, Δ2 z1 = -2, Δ2 z2 = -4.
  • 31. The updated second-layer weights will be: w11 = 1 + 0.1×(-1) = 0.9, w21 = -1 + 0.1×(-2) = -1.2, w12 = 0 + 0.1×(-2) = -0.2, w22 = 1 + 0.1×(-4) = 0.6.
  • 32. But first the δ's must be calculated from the output errors and the (old) second-layer weights: Δ1 w11 = -1, Δ2 w21 = 2, Δ1 w12 = 0, Δ2 w22 = -2.
  • 33. The Δ's propagate back: δ1 = Δ1 w11 + Δ2 w21 = -1 + 2 = 1; δ2 = Δ1 w12 + Δ2 w22 = 0 - 2 = -2.
  • 34. And are multiplied by the inputs: δ1 x1 = 0, δ1 x2 = 1, δ2 x1 = 0, δ2 x2 = -2.
  • 35. Finally, change the first-layer weights: v11 = -1 (unchanged), v21 = 0 (unchanged), v12 = 0 + 0.1×1 = 0.1, v22 = 1 + 0.1×(-2) = 0.8. Note that the weights multiplied by the zero input are unchanged, as they do not contribute to the error. The biases have also been changed (not shown).
  • 36. Now go forward again (normally a new input vector would be used): with the updated weights and biases, z1 = 1.2 and z2 = 1.6.
  • 37. Continuing the forward pass through the second layer: y1 = 1.66 and y2 = 0.32. The outputs are now closer to the target value [1, 0].
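The whole worked example (slides 24-37) can be reproduced in a few lines of NumPy. This sketch vectorizes the per-weight arithmetic above, treating biases as weights on a constant input of 1:

```python
import numpy as np

# Weights and biases from the slides; identity activation g(a) = a, g'(a) = 1.
v = np.array([[-1.0, 0.0],   # v11, v12 (hidden unit 1)
              [ 0.0, 1.0]])  # v21, v22 (hidden unit 2)
v_bias = np.array([1.0, 1.0])
w = np.array([[ 1.0, 0.0],   # w11, w12 (output unit 1)
              [-1.0, 1.0]])  # w21, w22 (output unit 2)
w_bias = np.array([1.0, 1.0])
eta = 0.1

x = np.array([0.0, 1.0])     # input
d = np.array([1.0, 0.0])     # target

# Forward pass (identity activation).
z = v @ x + v_bias           # hidden outputs: [1, 2]
y = w @ z + w_bias           # network outputs: [2, 2]

# Backward pass, using the old weights for the hidden deltas.
Delta = d - y                # output errors: [-1, -2]
delta = w.T @ Delta          # hidden errors: [1, -2]

# Weight updates (biases are weights on a constant input of 1).
w += eta * np.outer(Delta, z)
w_bias += eta * Delta
v += eta * np.outer(delta, x)
v_bias += eta * delta

# Second forward pass: outputs move toward the target [1, 0].
z2 = v @ x + v_bias
y2 = w @ z2 + w_bias
print(np.round(z2, 2), np.round(y2, 2))  # z2 = [1.2, 1.6], y2 = [1.66, 0.32]
```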
  • 38. Neural network applications. Pattern classification examples: remote sensing and image classification; handwritten character/digit recognition. Control, time series, and estimation: machine control/robot manipulation; financial/scientific/engineering time-series forecasting. Optimization: travelling salesperson problem; multiprocessor scheduling and task assignment. Real-world application examples: hospital patient stay-length prediction; natural gas price prediction.
  • 39. • Artificial neural networks are inspired by the learning processes that take place in biological systems. • Learning can be perceived as an optimisation process. • Biological neural learning happens by the modification of the synaptic strength. Artificial neural networks learn in the same way. • The synapse strength modification rules for artificial neural networks can be derived by applying mathematical optimisation methods.
  • 40. • Learning tasks of artificial neural networks = function approximation tasks. • The optimisation is done with respect to the approximation error measure. • In general it is enough to have a single hidden layer neural network (MLP, RBF or other) to learn the approximation of a nonlinear function. In such cases general optimisation can be applied to find the change rules for the synaptic weights.
  • 41. References: 1. Simon Haykin, Artificial Neural Networks. 2. B. Yegnanarayana, Artificial Neural Networks. 3. J. Zurada, Artificial Neural Networks. 4. Hornik K., Stinchcombe M. and White H., "Multilayer feedforward networks are universal approximators", Neural Networks, vol. 2, no. 5, pp. 359-366, 1989. 5. Kumar, P. and Walia, E. (2006), "Cash Forecasting: An Application of Artificial Neural Networks in Finance", International Journal of Computer Science and Applications 3 (1): 61-77.