
# FUNCTION APPROXIMATION



1. PRESENTED BY: Ankita Pandey, ME ECE - 112616
2. CONTENT
   • Learning Paradigm (Supervised Learning, Unsupervised Learning, Learning Rules)
   • Function Approximation
   • System Identification
   • Inverse Modeling
   • Summary
   • References
3. LEARNING PARADIGM
   Training data: a sample from the data source with the correct classification/regression solution already assigned.
   Two types of learning:
   • SUPERVISED
   • UNSUPERVISED
4. LEARNING PARADIGM
   Supervised learning: learning based on training data.
   1. Training step: learn a classifier/regressor from the training data.
   2. Prediction step: assign class labels/functional values to test data.
   Examples: Perceptron, LDA, SVMs, and linear/ridge/kernel ridge regression are all supervised methods.
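The two-step supervised procedure above can be sketched with ridge regression, one of the supervised methods the slide names. This is a minimal sketch: the toy data (drawn from d = 2x + 1) and the regularization value are illustrative assumptions, not part of the original deck.

```python
import numpy as np

# Training data: inputs x_i with the correct regression targets d_i already assigned.
X = np.array([[0.0], [1.0], [2.0], [3.0]])       # input vectors
d = np.array([1.0, 3.0, 5.0, 7.0])               # targets (here d = 2x + 1, illustrative)

# 1. Training step: learn the regressor from the training data.
#    Ridge closed form: w = (X'X + lam*I)^(-1) X'd, bias handled via an appended 1s column.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
lam = 1e-6                                       # small regularizer (assumed value)
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ d)

# 2. Prediction step: assign functional values to test data.
x_test = np.array([[4.0]])
y_pred = np.hstack([x_test, np.ones((1, 1))]) @ w
print(round(float(y_pred[0]), 3))                # close to 9.0 for this noiseless toy data
```

With noiseless linear data the fitted weights are essentially [2, 1], so the prediction at x = 4 is close to 9.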
5. LEARNING PARADIGM
   Unsupervised learning: learning without training data. Examples:
   • Data clustering: divide the input data into groups of similar points.
   • Dimension-reduction techniques.
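The clustering task named above ("divide input data into groups of similar points") can be sketched with plain k-means; the synthetic two-cluster data and the initialization are illustrative assumptions.

```python
import numpy as np

# Unlabelled input data: two obvious groups of similar points (no targets provided).
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                  rng.normal(5.0, 0.1, (20, 2))])

# Plain k-means: alternate nearest-center assignment and centroid update.
centers = data[[0, -1]].copy()                   # crude init: one point from each group
for _ in range(20):
    # assign each point to its nearest center
    dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = np.argmin(dists, axis=1)
    # move each center to the mean of its assigned points
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(centers[:, 0]).round(1))           # centers settle near 0.0 and 5.0
```

No labels are ever used: the grouping emerges from the geometry of the inputs alone, which is exactly what distinguishes this from the supervised setting.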
6. LEARNING TASKS
   • Pattern Association
   • Pattern Recognition
   • Function Approximation
   • Beamforming
   • Controlling
   • Filtering
7. FUNCTION APPROXIMATION
   The task is to design a neural network that approximates an unknown function f(·), such that the function F(·) describing the input-output mapping actually realized by the network is close enough to f(·) in a Euclidean sense over all inputs.
8. FUNCTION APPROXIMATION
   Consider a nonlinear input-output mapping described by the functional relationship d = f(x), where the vector x is the input, the vector d is the output, and the vector-valued function f(·) is assumed to be unknown.
9. FUNCTION APPROXIMATION
   To gain knowledge about the function f(·), a set of labelled examples is taken: {(x_i, d_i)}, i = 1, …, N. A neural network is then designed to approximate the unknown function in a Euclidean sense over all inputs, i.e. ‖F(x) − f(x)‖ < ε for all x,
10. FUNCTION APPROXIMATION
   where ε is a small positive number. Provided the size N of the training sample is large enough and the network is equipped with an adequate number of free parameters, the approximation error ε can be made small. The approximation problem discussed here is an example of supervised learning.
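The setup above (adjust the free parameters of F until ‖F(x) − f(x)‖ is small over the training sample) can be sketched with a one-hidden-layer network trained by gradient descent. The target function sin(x), the network width, and the learning rate are illustrative assumptions standing in for the unknown f(·).

```python
import numpy as np

# Stand-in for the unknown mapping: f(x) = sin(x), sampled N times.
rng = np.random.default_rng(1)
N = 200
x = rng.uniform(-np.pi, np.pi, (N, 1))
d = np.sin(x)                                    # training sample {(x_i, d_i)}

# One-hidden-layer network F(x) = tanh(x W1 + b1) W2 + b2; W, b are free parameters.
H = 20
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

def F(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

mse0 = float(((F(x) - d) ** 2).mean())           # approximation error before training

lr = 0.05
for _ in range(3000):                            # plain full-batch gradient descent
    h = np.tanh(x @ W1 + b1)
    e = h @ W2 + b2 - d                          # per-sample error F(x_i) - d_i
    gW2 = h.T @ e / N; gb2 = e.mean(0)
    gh = (e @ W2.T) * (1.0 - h ** 2)             # backpropagate through tanh
    gW1 = x.T @ gh / N; gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((F(x) - d) ** 2).mean())
print(mse < mse0)                                # the squared error shrinks with training
```

This mirrors the slide's claim: with enough samples N and enough free parameters, the error between F and f over the training inputs can be driven down.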
11. FUNCTION APPROXIMATION
   • SYSTEM IDENTIFICATION
   • INVERSE MODELING
12. SYSTEM IDENTIFICATION: BLOCK DIAGRAM
   [Diagram: the input vector x_i is fed to both the unknown system, producing d_i, and the neural network model, producing y_i; the error e_i = d_i − y_i is formed at a summing junction Σ and fed back to adjust the model.]
13. SYSTEM IDENTIFICATION
   Let the input-output relation of an unknown memoryless (i.e. time-invariant) MIMO system be d = f(x). The set of examples {(x_i, d_i)}, i = 1, …, N, is used to train a neural network as a model of the system, where the vector y_i denotes the actual output of the neural network.
14. SYSTEM IDENTIFICATION
   • x_i denotes the input vector.
   • d_i denotes the desired response.
   • e_i denotes the error signal, i.e. the difference between d_i and y_i.
   This error is used to adjust the free parameters of the network so as to minimize, in a statistical sense, the squared difference between the outputs of the unknown system and of the neural network, computed over the entire training sample.
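The identification procedure above can be sketched in its simplest form: a hidden linear map stands in for the unknown system, and a linear model fitted by least squares stands in for the neural network (least squares minimizes exactly the squared difference between d_i and y_i over the training sample). The matrix A_true and the sample size are illustrative assumptions.

```python
import numpy as np

# Unknown memoryless MIMO system d = f(x); here a hidden linear map (illustrative).
rng = np.random.default_rng(2)
A_true = np.array([[2.0, -1.0],
                   [0.5,  3.0]])                 # not known to the modeller
X = rng.normal(size=(100, 2))                    # input vectors x_i
D = X @ A_true.T                                 # desired responses d_i = f(x_i)

# Model of the system: A_hat fitted by minimizing sum_i ||d_i - A_hat x_i||^2,
# i.e. the squared difference between system outputs and model outputs y_i.
W, *_ = np.linalg.lstsq(X, D, rcond=None)
A_hat = W.T

print(np.round(A_hat, 3))                        # recovers A_true on noiseless data
```

With noisy measurements the same fit would recover A_true only "in a statistical sense", which is precisely the qualification the slide makes.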
15. INVERSE MODELING: BLOCK DIAGRAM
   [Diagram: the input vector x_i passes through the unknown system f(·) to give the system output d_i; d_i is fed to the inverse model, whose output y_i is compared with x_i at a summing junction Σ to give the error e_i.]
16. INVERSE MODELING
   Here we construct an inverse model that produces the vector x in response to the vector d. This is given by the equation x = f⁻¹(d), where f⁻¹ denotes the inverse of f. Again, using the stated examples, a neural network approximation of f⁻¹ is constructed.
17. INVERSE MODELING
   Here d_i is used as the input and x_i as the desired response. The error signal e_i is the difference between x_i and the output y_i produced in response to d_i. This error is used to adjust the free parameters of the network so as to minimize, in a statistical sense, the squared difference between the output of the unknown system's inverse and that of the neural network, computed over the entire training sample.
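The role-swap described above (d_i becomes the input, x_i the desired response) can be sketched with an invertible linear system and a least-squares fit standing in for the network; the system f(x) = 2x + 1 is an illustrative assumption.

```python
import numpy as np

# Unknown invertible system d = f(x); here f(x) = 2x + 1 (illustrative stand-in).
rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 50)                   # system inputs x_i
d = 2.0 * x + 1.0                                # system outputs d_i

# Inverse modelling: d_i is the model INPUT, x_i is the desired response.
# A linear model y = a*d + b, fitted by least squares, stands in for the network.
Db = np.column_stack([d, np.ones_like(d)])
(a, b), *_ = np.linalg.lstsq(Db, x, rcond=None)

# The fitted inverse should realize x = f^(-1)(d) = (d - 1) / 2.
print(round(float(a), 3), round(float(b), 3))    # close to 0.5 and -0.5
```

Note that nothing about f itself is given to the fit; the inverse emerges purely from presenting the training pairs with input and desired response exchanged.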
18. REFERENCES
   [1] Simon Haykin, Neural Networks and Learning Machines, 3rd Edition.
   [2] Satish Kumar, Neural Networks: A Classroom Approach.
   [3] Jacek M. Zurada, Artificial Neural Networks.
   [4] Rajasekaran & Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms.
   [5] www.slideshare.net
   [6] www.wikipedia.org