Adaline and Madaline
Adaline : Adaptive Linear neuron
Madaline : Multiple Adaline

2.1 Adaline (Bernard Widrow, Stanford Univ.)

Linear combination:
$$y = \sum_{j=0}^{n} w_j x_j = \mathbf{w}^T \mathbf{x}, \qquad x_0 : \text{bias term}$$
Output: d = f(y). (The accompanying figure also labels the feedback term — the error / gain-adjust signal — used to update the weights.)
2.1.1 Least Mean Square (LMS) Learning

◎ Input vectors : $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_L\}$
   Ideal outputs : $\{d_1, d_2, \ldots, d_L\}$
   Actual outputs : $\{y_1, y_2, \ldots, y_L\}$

Assume the output function f(y) = y, so the output is d = y.

Mean square error: let $\xi = \langle \varepsilon_k^2 \rangle$ with $\varepsilon_k = d_k - y_k = d_k - \mathbf{w}^T\mathbf{x}_k$. Then

$$\xi = \frac{1}{L}\sum_{k=1}^{L} \varepsilon_k^2
     = \langle (d_k - \mathbf{w}^T\mathbf{x}_k)^2 \rangle
     = \langle d_k^2 \rangle - 2\,\mathbf{p}^T\mathbf{w} + \mathbf{w}^T R\,\mathbf{w} \qquad \text{-- (2.4)}$$

where $\mathbf{p} = \langle d_k \mathbf{x}_k \rangle$ and $R = \langle \mathbf{x}_k \mathbf{x}_k^T \rangle$ is the correlation matrix.
Setting the gradient of (2.4) to zero,
$$\frac{d\xi(\mathbf{w})}{d\mathbf{w}} = -2\mathbf{p} + 2R\,\mathbf{w} = 0,$$
we obtain $\mathbf{w}^* = R^{-1}\mathbf{p}$.

Practical difficulties of the analytical formula :
1. Large dimensions - $R^{-1}$ is difficult to calculate
2. < > expected values - require knowledge of the underlying probabilities

Idea: find $\mathbf{w}^* = \arg\min_{\mathbf{w}} \xi(\mathbf{w})$ iteratively, where by (2.4)
$$\xi(\mathbf{w}) = \langle d_k^2 \rangle - 2\,\mathbf{p}^T\mathbf{w} + \mathbf{w}^T R\,\mathbf{w}.$$
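A minimal numerical sketch (not part of the original slides) of the analytical solution: R and p are estimated by sample averages over a finite training set, and R w = p is solved directly. The data, function name, and variable names below are illustrative assumptions.

```python
import numpy as np

def lms_analytical(X, d):
    """Closed-form Adaline weights w* = R^{-1} p.

    X : (L, n+1) array of input vectors x_k (a first column of ones acts as x_0, the bias)
    d : (L,) array of desired outputs d_k
    The expectations < . > are approximated by sample averages over the L examples.
    """
    L = len(X)
    R = X.T @ X / L               # correlation matrix R = <x_k x_k^T>
    p = X.T @ d / L               # cross-correlation vector p = <d_k x_k>
    return np.linalg.solve(R, p)  # solve R w = p rather than forming R^{-1} explicitly

# Illustrative usage on a small, linearly separable problem (OR-like targets).
X = np.hstack([np.ones((4, 1)), [[0, 0], [0, 1], [1, 0], [1, 1]]])
d = np.array([-1.0, 1.0, 1.0, 1.0])
w_star = lms_analytical(X, d)
```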
The graph of $\xi(\mathbf{w}) = \langle d_k^2 \rangle - 2\,\mathbf{p}^T\mathbf{w} + \mathbf{w}^T R\,\mathbf{w}$ is a paraboloid.

2.1.2 Steepest Descent
Steps:
1. Initialize the weight values $\mathbf{w}(0)$.
2. Determine the steepest-descent direction. Let
$$\Delta\mathbf{w}(t) = -\mu\,\nabla_{\mathbf{w}}\,\xi(\mathbf{w}(t)), \qquad
\nabla_{\mathbf{w}}\,\xi(\mathbf{w}(t)) = \frac{d\xi(\mathbf{w}(t))}{d\mathbf{w}(t)} = 2\bigl(-\mathbf{p} + R\,\mathbf{w}(t)\bigr).$$
3. Modify the weight values: $\mathbf{w}(t+1) = \mathbf{w}(t) + \Delta\mathbf{w}(t)$, where $\mu$ is the step size.
4. Repeat 2~3 (a code sketch follows below).

No calculation of $R^{-1}$ is required.

Drawbacks: i) Knowing R and p is equivalent to knowing the error surface in advance. ii) Steepest-descent training is a batch training method.
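Below is a minimal sketch of the steepest-descent loop, assuming R and p are estimated once from a finite batch of training vectors; the step size and epoch count are illustrative choices, not values from the slides.

```python
import numpy as np

def steepest_descent(X, d, mu=0.05, epochs=200):
    """Batch steepest descent on xi(w) = <d_k^2> - 2 p^T w + w^T R w."""
    L = len(X)
    R = X.T @ X / L                   # <x_k x_k^T>, estimated once from the batch
    p = X.T @ d / L                   # <d_k x_k>
    w = np.zeros(X.shape[1])          # step 1: initialize the weights
    for _ in range(epochs):           # step 4: repeat steps 2-3
        grad = 2 * (R @ w - p)        # step 2: gradient of xi at w(t)
        w = w - mu * grad             # step 3: w(t+1) = w(t) - mu * grad (no R^{-1} needed)
    return w
```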
2.1.3 Stochastic Gradient Descent

Approximate $\nabla_{\mathbf{w}}\,\xi(\mathbf{w}(t)) = 2\bigl(-\mathbf{p} + R\,\mathbf{w}(t)\bigr)$ by randomly selecting one training example at a time (a code sketch follows this list):
1. Apply an input vector $\mathbf{x}_k$.
2. Compute the instantaneous squared error
$$\varepsilon_k^2(t) = (d_k - y_k)^2 = \bigl(d_k - \mathbf{w}^T(t)\,\mathbf{x}_k\bigr)^2.$$
3. Compute its gradient with respect to the weights:
$$\nabla_{\mathbf{w}}\,\varepsilon_k^2(t) = \frac{\partial \varepsilon_k^2(t)}{\partial \mathbf{w}(t)}
 = 2\,\varepsilon_k(t)\,\frac{\partial \varepsilon_k(t)}{\partial \mathbf{w}(t)}
 = -2\bigl(d_k - \mathbf{w}^T(t)\,\mathbf{x}_k\bigr)\mathbf{x}_k
 = -2\,\varepsilon_k(t)\,\mathbf{x}_k.$$
4. Update the weights: $\mathbf{w}(t+1) = \mathbf{w}(t) + 2\mu\,\varepsilon_k(t)\,\mathbf{x}_k$.
5. Repeat 1~4 with the next input vector.

No calculation of $\mathbf{p}$ and $R$ is required.
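A minimal sketch of the stochastic (LMS / delta-rule) update above; the learning rate, epoch count, and random example ordering are illustrative assumptions.

```python
import numpy as np

def adaline_sgd(X, d, mu=0.05, epochs=20, seed=0):
    """Stochastic-gradient (LMS) training: w(t+1) = w(t) + 2 mu eps_k(t) x_k."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for k in rng.permutation(len(X)):   # step 1: pick one example at a time
            eps = d[k] - w @ X[k]           # steps 2-3: eps_k(t) = d_k - w(t)^T x_k
            w = w + 2 * mu * eps * X[k]     # step 4: update; neither R nor p is ever formed
    return w
```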
○ Practical Considerations:
(a) Number of training vectors, (b) Stopping criteria,
(c) Initial weights, (d) Step size
Drawback: time consuming.
Improvement: mini-batch training.
2.1.4 Conjugate Gradient Descent
-- Drawback: can only minimize quadratic functions, e.g.,
$$f(\mathbf{w}) = \frac{1}{2}\,\mathbf{w}^T A\,\mathbf{w} - \mathbf{b}^T\mathbf{w} + c$$
Advantage: guaranteed to find the optimum solution in at most n iterations, where n is the size of matrix A.

A-Conjugate Vectors:
Let $A$ be an $n \times n$ square, symmetric, positive-definite matrix.
The vectors $S = \{\mathbf{s}(0), \mathbf{s}(1), \ldots, \mathbf{s}(n-1)\}$ are A-conjugate if
$$\mathbf{s}^T(i)\,A\,\mathbf{s}(j) = 0, \qquad i \neq j.$$
* If A = I (the identity matrix), conjugacy reduces to orthogonality.
The set S forms a basis for the space $\mathbb{R}^n$, so the solution $\mathbf{w} \in \mathbb{R}^n$ can be written as
$$\mathbf{w} = \sum_{i=0}^{n-1} a_i\,\mathbf{s}(i).$$

• The conjugate-direction method for minimizing f(w) is defined by
$$\mathbf{w}(i+1) = \mathbf{w}(i) + \eta(i)\,\mathbf{s}(i), \qquad i = 0, 1, \ldots, n-1,$$
where w(0) is an arbitrary starting vector and $\eta(i)$ is determined by $\min_{\eta} f\bigl(\mathbf{w}(i) + \eta\,\mathbf{s}(i)\bigr)$.

How to determine $\mathbf{s}(i)$? Let
$$\mathbf{s}(i) = \mathbf{r}(i) + \beta(i)\,\mathbf{s}(i-1), \qquad i = 1, 2, \ldots, n-1 \qquad \text{(A)}$$
where $\mathbf{r}(i) = \mathbf{b} - A\,\mathbf{w}(i)$ is the steepest-descent direction of $f(\mathbf{w})$ at $\mathbf{w}(i)$ (the negative gradient, since $\nabla_{\mathbf{w}} f(\mathbf{w}) = A\,\mathbf{w} - \mathbf{b}$).
Multiplying (A) by $\mathbf{s}^T(i-1)\,A$ gives
$$\mathbf{s}^T(i-1)\,A\,\mathbf{s}(i) = \mathbf{s}^T(i-1)\,A\,\bigl(\mathbf{r}(i) + \beta(i)\,\mathbf{s}(i-1)\bigr).$$
For the directions to be A-conjugate, $\mathbf{s}^T(i)\,A\,\mathbf{s}(j) = 0$ for $i \neq j$, so
$$0 = \mathbf{s}^T(i-1)\,A\,\mathbf{r}(i) + \beta(i)\,\mathbf{s}^T(i-1)\,A\,\mathbf{s}(i-1),$$
$$\beta(i) = -\,\frac{\mathbf{s}^T(i-1)\,A\,\mathbf{r}(i)}{\mathbf{s}^T(i-1)\,A\,\mathbf{s}(i-1)}. \qquad \text{(B)}$$
The directions $\mathbf{s}(1), \mathbf{s}(2), \ldots, \mathbf{s}(n-1)$ generated by Eqs. (A) and (B) are A-conjugate.

• It is desirable that evaluating $\beta(i)$ does not require knowledge of A.
Polak-Ribiere formula:
$$\beta(i) = \frac{\mathbf{r}^T(i)\,\bigl(\mathbf{r}(i) - \mathbf{r}(i-1)\bigr)}{\mathbf{r}^T(i-1)\,\mathbf{r}(i-1)}$$
Fletcher-Reeves formula:
$$\beta(i) = \frac{\mathbf{r}^T(i)\,\mathbf{r}(i)}{\mathbf{r}^T(i-1)\,\mathbf{r}(i-1)}$$

* The conjugate-direction method for minimizing $\xi(\mathbf{w}) = \langle d_k^2 \rangle - 2\,\mathbf{p}^T\mathbf{w} + \mathbf{w}^T R\,\mathbf{w}$:
let $\mathbf{w}(i+1) = \mathbf{w}(i) + \eta(i)\,\mathbf{s}(i)$, $i = 0, 1, \ldots, n-1$, where w(0) is an arbitrary starting vector, $\eta(i)$ is determined by $\min_{\eta} \xi\bigl(\mathbf{w}(i) + \eta\,\mathbf{s}(i)\bigr)$, and
$$\mathbf{s}(i) = \mathbf{r}(i) + \beta(i)\,\mathbf{s}(i-1), \qquad
\mathbf{r}(i) = \mathbf{p} - R\,\mathbf{w}(i), \qquad
\beta(i) = -\,\frac{\mathbf{s}^T(i-1)\,R\,\mathbf{r}(i)}{\mathbf{s}^T(i-1)\,R\,\mathbf{s}(i-1)}.$$
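A minimal sketch of linear conjugate gradient for the quadratic $f(\mathbf{w}) = \frac{1}{2}\mathbf{w}^T A\mathbf{w} - \mathbf{b}^T\mathbf{w} + c$, using the Fletcher-Reeves coefficient (which coincides with formula (B) in exact arithmetic). Applying it to the LMS cost amounts to taking A = R and b = p (or A = 2R, b = 2p), which is an assumed mapping; either way the minimizer is $R^{-1}\mathbf{p}$.

```python
import numpy as np

def conjugate_gradient(A, b, w0=None):
    """Conjugate-direction minimization of f(w) = 1/2 w^T A w - b^T w + c.

    A is assumed symmetric positive definite; the loop runs n iterations,
    matching the 'at most n steps' guarantee (n = size of A).
    """
    n = len(b)
    w = np.zeros(n) if w0 is None else np.asarray(w0, dtype=float).copy()
    r = b - A @ w                          # r(0): steepest-descent direction at w(0)
    s = r.copy()                           # s(0) = r(0)
    for _ in range(n):
        As = A @ s
        eta = (r @ r) / (s @ As)           # exact line search along s(i)
        w = w + eta * s                    # w(i+1) = w(i) + eta(i) s(i)
        r_new = r - eta * As               # r(i+1) = b - A w(i+1)
        if np.allclose(r_new, 0):          # already at the minimizer
            break
        beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves; equals (B) in exact arithmetic
        s = r_new + beta * s               # s(i+1) = r(i+1) + beta(i+1) s(i)
        r = r_new
    return w
```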
Nonlinear Conjugate Gradient Algorithm
Initialize w(0) by an appropriate process (the remaining steps were given as an algorithm box on the original slide).

Example: a comparison of the convergence of gradient descent (green) and conjugate gradient (red) for minimizing a quadratic function. Conjugate gradient converges in at most n steps, where n is the size of the system matrix (here n = 2).
2.3 Applications

2.3.1 Echo Cancellation in Telephone Circuits
n : incoming voice, s : outgoing voice
n' : noise (the leakage of the incoming voice into the outgoing line)
y : the output of the adaptive filter, which mimics the leakage n'

Hybrid circuit: deals with the leakage issue; it attempts to isolate the incoming signal from the outgoing signal.
Adaptive filter: deals with the choppy-speech issue by mimicking the leakage of the incoming voice, so that the leaked component can be subtracted from the outgoing signal.

The transmitted signal is the error $\varepsilon = s + n' - y$:
$$\langle \varepsilon^2 \rangle = \langle (s + n' - y)^2 \rangle
 = \langle s^2 \rangle + \langle (n' - y)^2 \rangle + 2\,\langle s\,(n' - y) \rangle,$$
$$\langle s\,(n' - y) \rangle = 0 \quad (s \text{ is not correlated with } y \text{ and } n'),$$
$$\Rightarrow\ \langle \varepsilon^2 \rangle = \langle s^2 \rangle + \langle (n' - y)^2 \rangle,
\qquad \min \langle \varepsilon^2 \rangle = \langle s^2 \rangle + \min \langle (n' - y)^2 \rangle.$$
Minimizing the output power therefore drives $y$ toward the leakage $n'$ while leaving the outgoing speech $s$ untouched.
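A sample-level sketch of such an adaptive echo canceller, using the stochastic LMS rule from Section 2.1.3; the tap count, step size, and signal names are illustrative assumptions, and both inputs are assumed to be 1-D arrays of equal length.

```python
import numpy as np

def echo_canceller(incoming, line_signal, taps=32, mu=0.01):
    """LMS adaptive echo canceller sketch.

    incoming    : samples of the incoming voice n
    line_signal : outgoing line samples, s + n' (speech plus leakage)
    The filter output y mimics the leakage n'; eps = s + n' - y is what is sent out.
    """
    incoming = np.asarray(incoming, dtype=float)
    line_signal = np.asarray(line_signal, dtype=float)
    w = np.zeros(taps)
    cleaned = line_signal.copy()
    for t in range(taps, len(incoming)):
        x = incoming[t - taps:t][::-1]   # most recent incoming samples, newest first
        y = w @ x                        # filter output: the estimate of the leakage n'
        eps = line_signal[t] - y         # eps = s + n' - y
        w += 2 * mu * eps * x            # LMS update drives <(n' - y)^2> down
        cleaned[t] = eps                 # residual: mostly the outgoing speech s
    return cleaned, w
```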
2.3.2 Signal Prediction
An adaptive filter is trained to predict a signal: the filter input is a delayed version of the actual signal, and the desired output is the current signal value.
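A minimal sketch of such a predictor, again trained with the LMS rule; the delay, number of taps, step size, and epoch count are illustrative assumptions.

```python
import numpy as np

def train_predictor(signal, delay=1, taps=8, mu=0.01, epochs=5):
    """One-step-ahead adaptive predictor: the filter input is the delayed signal,
    the desired output is the current sample, trained with the LMS rule."""
    signal = np.asarray(signal, dtype=float)
    w = np.zeros(taps)
    for _ in range(epochs):
        for t in range(taps + delay, len(signal)):
            x = signal[t - delay - taps + 1:t - delay + 1][::-1]  # delayed input window
            eps = signal[t] - w @ x                               # error vs current sample
            w += 2 * mu * eps * x
    return w
```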
2.3.3 Signal Reproduction
2.3.4 Adaptive Beam-Forming Antenna Arrays
Antenna : a spatial array of sensors that are directional in their reception characteristics.
The adaptive filter learns to steer the antennas so that they respond to incoming signals regardless of direction, while reducing the response to unwanted noise signals arriving from other directions.
2.4 Madaline : Many Adalines
○ Can a Madaline realize the XOR function? (A single Adaline cannot, since XOR is not linearly separable; a hand-set example follows.)
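A small sketch, with weights chosen by inspection rather than learned, showing how two Adalines plus a fixed OR unit realize XOR on bipolar inputs.

```python
import numpy as np

def sgn(v):
    return np.where(v >= 0, 1, -1)

def madaline_xor(x1, x2):
    """Two hand-set Adalines plus a fixed OR unit realize XOR on bipolar inputs.

    The weights are one illustrative solution (chosen by hand, not trained)."""
    a1 = sgn(x1 - x2 - 1)        # +1 only for (x1, x2) = (+1, -1)
    a2 = sgn(-x1 + x2 - 1)       # +1 only for (x1, x2) = (-1, +1)
    return sgn(a1 + a2 + 1)      # OR of the two hidden Adalines

for x1 in (-1, 1):
    for x2 in (-1, 1):
        print(x1, x2, int(madaline_xor(x1, x2)))   # +1 exactly when the inputs differ
```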
2.4.2 Madaline Rule II (MRII)
○ Training algorithm – a trial-and-error procedure guided by the minimum-disturbance principle (those nodes that can affect the output error while incurring the least change in their weights should take precedence in the learning process).
○ Procedure –
1. Input a training pattern.
2. Count the number of incorrect values in the output layer.
3. For all units on the output layer:
   3.1. Select the first previously unselected error node whose analog output is closest to zero (this node can reverse its bipolar output with the least change in its weights).
   3.2. Change the weights on the selected unit such that its bipolar output changes.
   3.3. Input the same training pattern.
   3.4. If the number of errors is reduced, accept the weight change; otherwise restore the original weights.
4. Repeat Step 3 for all layers except the input layer.
5. For all pairs of units on the output layer:
   5.1. Select the previously unselected pair of units whose analog outputs are closest to zero.
   5.2. Apply a weight correction to both units so as to change their bipolar outputs.
   5.3. Input the same training pattern.
   5.4. If the number of errors is reduced, accept the correction; otherwise restore the original weights.
6. Repeat Step 5 for all layers except the input layer.
※ Steps 5 and 6 can be repeated with triplets, quadruplets, or longer combinations of units until satisfactory results are obtained.
The MRII learning rule as described assumes a network with only one hidden layer. For networks with more hidden layers, the backpropagation learning strategy to be discussed later can be employed. A simplified code sketch of MRII is given below.
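The following is a simplified, illustrative MRII sketch for a single-hidden-layer Madaline with a fixed majority-vote output unit. The weight-perturbation rule and acceptance test follow the minimum-disturbance idea above, but the class structure and numerical details are assumptions rather than Widrow's exact procedure.

```python
import numpy as np

def sgn(v):
    return np.where(v >= 0, 1, -1)

class Madaline:
    """Single-hidden-layer Madaline with a fixed majority-vote output unit (a sketch)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_hidden, n_in + 1))  # +1 column for the bias

    def analog(self, x):
        return self.W @ np.append(x, 1.0)          # pre-threshold hidden outputs

    def forward(self, x):
        return sgn(np.sum(sgn(self.analog(x))))    # majority vote of the hidden Adalines

    def errors(self, X, D):
        return sum(self.forward(x) != d for x, d in zip(X, D))

def mrii_epoch(net, X, D, delta=0.1):
    """One MRII pass: for each misclassified pattern, try flipping single hidden
    units in order of minimum disturbance, keeping changes that reduce total errors."""
    for x, d in zip(X, D):
        if net.forward(x) == d:
            continue
        base = net.errors(X, D)                        # step 2: count errors
        xb = np.append(x, 1.0)
        for j in np.argsort(np.abs(net.analog(x))):    # 3.1: analog output nearest zero first
            old = net.W[j].copy()
            a = net.analog(x)[j]
            # 3.2: smallest-norm change that pushes this unit's analog output across zero
            net.W[j] -= (a + np.sign(a) * delta) * xb / (xb @ xb)
            if net.errors(X, D) < base:                # 3.3-3.4: accept if errors drop
                break
            net.W[j] = old                             # otherwise restore the old weights
    return net.errors(X, D)

# Illustrative usage: a few MRII passes on bipolar XOR data
# (this simplified sketch may or may not reach zero errors).
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
D = np.array([-1, 1, 1, -1])
net = Madaline(n_in=2, n_hidden=3)
for _ in range(20):
    if mrii_epoch(net, X, D) == 0:
        break
```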
2.4.3 A Madaline for Translation-Invariant Pattern Recognition
○ Relationships among the weight matrices of the Adalines
○ Extension -- Multiple slabs with different key weight matrices for discriminating more than two classes of patterns.