Lec 5
Lec 5: Presentation Transcript

  •  Negnevitsky, Pearson Education, 2002
     Lecture 7. Artificial neural networks: Supervised learning
     - Introduction, or how the brain works
     - The neuron as a simple computing element
     - The perceptron
     - Multilayer neural networks
     - Accelerated learning in multilayer neural networks
     - The Hopfield network
     - Bidirectional associative memories (BAM)
     - Summary
  •  Introduction, or how the brain works
     Machine learning involves adaptive mechanisms that enable computers to
     learn from experience, learn by example and learn by analogy. Learning
     capabilities can improve the performance of an intelligent system over
     time. The most popular approaches to machine learning are artificial
     neural networks and genetic algorithms. This lecture is dedicated to
     neural networks.
  •  A neural network can be defined as a model of reasoning based on the
     human brain. The brain consists of a densely interconnected set of
     nerve cells, or basic information-processing units, called neurons.
     The human brain incorporates nearly 10 billion neurons and 60 trillion
     connections, synapses, between them. By using multiple neurons
     simultaneously, the brain can perform its functions much faster than
     the fastest computers in existence today.
  •  Each neuron has a very simple structure, but an army of such elements
     constitutes a tremendous processing power. A neuron consists of a cell
     body, soma, a number of fibers called dendrites, and a single long
     fiber called the axon.
  •  [Figure: Biological neural network, showing the soma, dendrites, axon
     and synapses of two connected neurons.]
  •  Our brain can be considered as a highly complex, non-linear and
     parallel information-processing system. Information is stored and
     processed in a neural network simultaneously throughout the whole
     network, rather than at specific locations. In other words, in neural
     networks, both data and its processing are global rather than local.
     Learning is a fundamental and essential characteristic of biological
     neural networks. The ease with which they can learn led to attempts to
     emulate a biological neural network in a computer.
  •  An artificial neural network consists of a number of very simple
     processors, also called neurons, which are analogous to the biological
     neurons in the brain. The neurons are connected by weighted links
     passing signals from one neuron to another. The output signal is
     transmitted through the neuron's outgoing connection. The outgoing
     connection splits into a number of branches that transmit the same
     signal. The outgoing branches terminate at the incoming connections of
     other neurons in the network.
  •  [Figure: Architecture of a typical artificial neural network: input
     signals enter the input layer, pass through a middle layer, and leave
     as output signals from the output layer.]
  •  Analogy between biological and artificial neural networks:

       Biological neural network    Artificial neural network
       Soma                         Neuron
       Dendrite                     Input
       Axon                         Output
       Synapse                      Weight
  •  The neuron as a simple computing element
     [Diagram of a neuron: input signals x1, x2, ..., xn arrive through
     weighted links w1, w2, ..., wn; the neuron Y sends the same output
     signal Y along each of its outgoing branches.]
  •  The neuron computes the weighted sum of the input signals and compares
     the result with a threshold value, θ. If the net input is less than
     the threshold, the neuron output is -1. But if the net input is
     greater than or equal to the threshold, the neuron becomes activated
     and its output attains a value +1. The neuron uses the following
     transfer or activation function:

       X = sum(x_i * w_i, i = 1..n)
       Y = +1 if X >= θ,  Y = -1 if X < θ

     This type of activation function is called a sign function.
  •  Activation functions of a neuron:

       Step function:     Y = 1 if X >= 0,   Y = 0 if X < 0
       Sign function:     Y = +1 if X >= 0,  Y = -1 if X < 0
       Sigmoid function:  Y = 1 / (1 + e^(-X))
       Linear function:   Y = X
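For concreteness, the four activation functions above can be written directly in code. This is an illustrative sketch of my own (the lecture gives no implementation language, so Python is assumed here):

```python
import math

def step(x):
    """Step activation: 1 for a non-negative net input, 0 otherwise."""
    return 1 if x >= 0 else 0

def sign(x):
    """Sign activation: +1 for a non-negative net input, -1 otherwise."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid activation: squashes any real net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    """Linear activation: passes the net input through unchanged."""
    return x
```

Note that the step and sign functions produce hard decisions, while the sigmoid is smooth; this smoothness is what makes the gradient-based training used later in the lecture possible.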
  •  Can a single neuron learn a task?
     In 1958, Frank Rosenblatt introduced a training algorithm that
     provided the first procedure for training a simple ANN: a perceptron.
     The perceptron is the simplest form of a neural network. It consists
     of a single neuron with adjustable synaptic weights and a hard
     limiter.
  •  [Figure: Single-layer two-input perceptron: inputs x1 and x2 feed a
     linear combiner with weights w1 and w2 and threshold θ, followed by a
     hard limiter that produces the output Y.]
  •  The Perceptron
     The operation of Rosenblatt's perceptron is based on the McCulloch and
     Pitts neuron model. The model consists of a linear combiner followed
     by a hard limiter. The weighted sum of the inputs is applied to the
     hard limiter, which produces an output equal to +1 if its input is
     positive and -1 if it is negative.
  •  The aim of the perceptron is to classify inputs, x1, x2, ..., xn, into
     one of two classes, say A1 and A2. In the case of an elementary
     perceptron, the n-dimensional space is divided by a hyperplane into
     two decision regions. The hyperplane is defined by the linearly
     separable function:

       sum(x_i * w_i, i = 1..n) - θ = 0
  •  Linear separability in the perceptrons
     [Figure: (a) Two-input perceptron: the line x1*w1 + x2*w2 - θ = 0
     separates class A1 from class A2 in the (x1, x2) plane. (b) Three-input
     perceptron: the plane x1*w1 + x2*w2 + x3*w3 - θ = 0 divides the
     three-dimensional input space.]
  •  How does the perceptron learn its classification tasks?
     This is done by making small adjustments in the weights to reduce the
     difference between the actual and desired outputs of the perceptron.
     The initial weights are randomly assigned, usually in the range
     [-0.5, 0.5], and then updated to obtain the output consistent with the
     training examples.
  •  If at iteration p, the actual output is Y(p) and the desired output is
     Yd(p), then the error is given by:

       e(p) = Yd(p) - Y(p),   where p = 1, 2, 3, ...

     Iteration p here refers to the pth training example presented to the
     perceptron. If the error, e(p), is positive, we need to increase
     perceptron output Y(p), but if it is negative, we need to decrease
     Y(p).
  •  The perceptron learning rule:

       w_i(p+1) = w_i(p) + α * x_i(p) * e(p),   where p = 1, 2, 3, ...

     α is the learning rate, a positive constant less than unity. The
     perceptron learning rule was first proposed by Rosenblatt in 1960.
     Using this rule we can derive the perceptron training algorithm for
     classification tasks.
  •  Perceptron's training algorithm
     Step 1: Initialisation
     Set initial weights w1, w2, ..., wn and threshold θ to random numbers
     in the range [-0.5, 0.5].
  •  Perceptron's training algorithm (continued)
     Step 2: Activation
     Activate the perceptron by applying inputs x1(p), x2(p), ..., xn(p)
     and desired output Yd(p). Calculate the actual output at iteration
     p = 1:

       Y(p) = step[ sum(x_i(p) * w_i(p), i = 1..n) - θ ]

     where n is the number of the perceptron inputs, and step is a step
     activation function.
  •  Perceptron's training algorithm (continued)
     Step 3: Weight training
     Update the weights of the perceptron:

       w_i(p+1) = w_i(p) + Δw_i(p)

     where Δw_i(p) is the weight correction at iteration p. The weight
     correction is computed by the delta rule:

       Δw_i(p) = α * x_i(p) * e(p)

     Step 4: Iteration
     Increase iteration p by one, go back to Step 2 and repeat the process
     until convergence.
  •  Example of perceptron learning: the logical operation AND
     Threshold: θ = 0.2; learning rate: α = 0.1

       Epoch | Inputs | Desired   | Initial    | Actual | Error | Final
             | x1  x2 | output Yd | w1    w2   | Y      | e     | w1    w2
       ------+--------+-----------+------------+--------+-------+------------
         1   | 0   0  | 0         | 0.3  -0.1  | 0      |  0    | 0.3  -0.1
             | 0   1  | 0         | 0.3  -0.1  | 0      |  0    | 0.3  -0.1
             | 1   0  | 0         | 0.3  -0.1  | 1      | -1    | 0.2  -0.1
             | 1   1  | 1         | 0.2  -0.1  | 0      |  1    | 0.3   0.0
         2   | 0   0  | 0         | 0.3   0.0  | 0      |  0    | 0.3   0.0
             | 0   1  | 0         | 0.3   0.0  | 0      |  0    | 0.3   0.0
             | 1   0  | 0         | 0.3   0.0  | 1      | -1    | 0.2   0.0
             | 1   1  | 1         | 0.2   0.0  | 1      |  0    | 0.2   0.0
         3   | 0   0  | 0         | 0.2   0.0  | 0      |  0    | 0.2   0.0
             | 0   1  | 0         | 0.2   0.0  | 0      |  0    | 0.2   0.0
             | 1   0  | 0         | 0.2   0.0  | 1      | -1    | 0.1   0.0
             | 1   1  | 1         | 0.1   0.0  | 0      |  1    | 0.2   0.1
         4   | 0   0  | 0         | 0.2   0.1  | 0      |  0    | 0.2   0.1
             | 0   1  | 0         | 0.2   0.1  | 0      |  0    | 0.2   0.1
             | 1   0  | 0         | 0.2   0.1  | 1      | -1    | 0.1   0.1
             | 1   1  | 1         | 0.1   0.1  | 1      |  0    | 0.1   0.1
         5   | 0   0  | 0         | 0.1   0.1  | 0      |  0    | 0.1   0.1
             | 0   1  | 0         | 0.1   0.1  | 0      |  0    | 0.1   0.1
             | 1   0  | 0         | 0.1   0.1  | 0      |  0    | 0.1   0.1
             | 1   1  | 1         | 0.1   0.1  | 1      |  0    | 0.1   0.1
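The training run tabulated above can be reproduced with a short script. This is an illustrative sketch of my own, not the lecture's code; weights are rounded after each update so the decimals match the hand computation exactly. Starting from w1 = 0.3 and w2 = -0.1 with θ = 0.2 and α = 0.1, it reaches w1 = w2 = 0.1 and converges in the fifth epoch, as in the table.

```python
def step(x):
    """Step activation: 1 if the net input is non-negative, else 0."""
    return 1 if x >= 0 else 0

def train_perceptron(examples, w1, w2, theta, alpha):
    """Train a two-input perceptron with the delta rule.

    Returns the final weights and the number of the first epoch
    in which every training example is classified correctly.
    """
    epoch = 0
    while True:
        epoch += 1
        errors = 0
        for x1, x2, y_d in examples:
            y = step(x1 * w1 + x2 * w2 - theta)
            e = y_d - y
            if e != 0:
                errors += 1
                # Delta rule: w_i(p+1) = w_i(p) + alpha * x_i(p) * e(p).
                # Rounding keeps the hand-worked decimals exact.
                w1 = round(w1 + alpha * x1 * e, 10)
                w2 = round(w2 + alpha * x2 * e, 10)
        if errors == 0:
            return w1, w2, epoch

# The AND training set and the slide's parameters
AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, epochs = train_perceptron(AND, w1=0.3, w2=-0.1, theta=0.2, alpha=0.1)
print(w1, w2, epochs)  # 0.1 0.1 5
```

Swapping in the OR training set also converges, while the Exclusive-OR set never does, which is exactly the limitation the next slides discuss.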
  •  Two-dimensional plots of basic logical operations
     [Figure: (a) AND (x1 ∩ x2); (b) OR (x1 ∪ x2); (c) Exclusive-OR
     (x1 ⊕ x2).]
     A perceptron can learn the operations AND and OR, but not
     Exclusive-OR.
  •  Multilayer neural networks
     A multilayer perceptron is a feedforward neural network with one or
     more hidden layers. The network consists of an input layer of source
     neurons, at least one middle or hidden layer of computational neurons,
     and an output layer of computational neurons. The input signals are
     propagated in a forward direction on a layer-by-layer basis.
  •  [Figure: Multilayer perceptron with two hidden layers: input signals
     flow from the input layer through the first and second hidden layers
     to the output layer.]
  •  What does the middle layer hide?
     A hidden layer "hides" its desired output. Neurons in the hidden layer
     cannot be observed through the input/output behaviour of the network.
     There is no obvious way to know what the desired output of the hidden
     layer should be. Commercial ANNs incorporate three and sometimes four
     layers, including one or two hidden layers. Each layer can contain
     from 10 to 1000 neurons. Experimental neural networks may have five or
     even six layers, including three or four hidden layers, and utilise
     millions of neurons.
  •  Back-propagation neural network
     Learning in a multilayer network proceeds the same way as for a
     perceptron. A training set of input patterns is presented to the
     network. The network computes its output pattern, and if there is an
     error (in other words, a difference between actual and desired output
     patterns) the weights are adjusted to reduce this error.
  •  In a back-propagation neural network, the learning algorithm has two
     phases. First, a training input pattern is presented to the network
     input layer. The network propagates the input pattern from layer to
     layer until the output pattern is generated by the output layer. If
     this pattern is different from the desired output, an error is
     calculated and then propagated backwards through the network from the
     output layer to the input layer. The weights are modified as the error
     is propagated.
  •  [Figure: Three-layer back-propagation neural network: input signals
     x1, ..., xn pass through weights w_ij to hidden neurons 1..m, then
     through weights w_jk to output neurons 1..l, producing outputs
     y1, ..., yl; error signals travel in the opposite direction.]
  •  The back-propagation training algorithm
     Step 1: Initialisation
     Set all the weights and threshold levels of the network to random
     numbers uniformly distributed inside a small range:

       ( -2.4 / F_i , +2.4 / F_i )

     where F_i is the total number of inputs of neuron i in the network.
     The weight initialisation is done on a neuron-by-neuron basis.
  •  Step 2: Activation
     Activate the back-propagation neural network by applying inputs
     x1(p), x2(p), ..., xn(p) and desired outputs yd,1(p), yd,2(p), ...,
     yd,n(p).
     (a) Calculate the actual outputs of the neurons in the hidden layer:

       y_j(p) = sigmoid[ sum(x_i(p) * w_ij(p), i = 1..n) - θ_j ]

     where n is the number of inputs of neuron j in the hidden layer, and
     sigmoid is the sigmoid activation function.
  •  Step 2: Activation (continued)
     (b) Calculate the actual outputs of the neurons in the output layer:

       y_k(p) = sigmoid[ sum(x_jk(p) * w_jk(p), j = 1..m) - θ_k ]

     where m is the number of inputs of neuron k in the output layer.
  •  Step 3: Weight training
     Update the weights in the back-propagation network propagating
     backward the errors associated with output neurons.
     (a) Calculate the error gradient for the neurons in the output layer:

       δ_k(p) = y_k(p) * [1 - y_k(p)] * e_k(p)

     where

       e_k(p) = y_d,k(p) - y_k(p)

     Calculate the weight corrections:

       Δw_jk(p) = α * y_j(p) * δ_k(p)

     Update the weights at the output neurons:

       w_jk(p+1) = w_jk(p) + Δw_jk(p)
  •  Step 3: Weight training (continued)
     (b) Calculate the error gradient for the neurons in the hidden layer:

       δ_j(p) = y_j(p) * [1 - y_j(p)] * sum(δ_k(p) * w_jk(p), k = 1..l)

     Calculate the weight corrections:

       Δw_ij(p) = α * x_i(p) * δ_j(p)

     Update the weights at the hidden neurons:

       w_ij(p+1) = w_ij(p) + Δw_ij(p)
  •  Step 4: Iteration
     Increase iteration p by one, go back to Step 2 and repeat the process
     until the selected error criterion is satisfied.
     As an example, we may consider the three-layer back-propagation
     network. Suppose that the network is required to perform the logical
     operation Exclusive-OR. Recall that a single-layer perceptron could
     not do this operation. Now we will apply the three-layer net.
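The four steps above can be sketched end-to-end for the Exclusive-OR network described next. This is an illustrative Python sketch of my own, not the lecture's code: it uses the slide's initial weights, thresholds and α = 0.1, updates the weights after every pattern, and stops once the sum of squared errors over the four patterns falls below 0.001. The epoch count depends on such implementation details, so it need not match the 224 epochs the slides report for their run.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Initial weights and thresholds from the slides' Exclusive-OR example
w13, w14, w23, w24 = 0.5, 0.9, 0.4, 1.0   # input -> hidden
w35, w45 = -1.2, 1.1                      # hidden -> output
t3, t4, t5 = 0.8, -0.1, 0.3               # thresholds (weights on input -1)
alpha = 0.1                               # learning rate

XOR = [(1, 1, 0), (0, 1, 1), (1, 0, 1), (0, 0, 0)]

def forward(x1, x2):
    """Step 2: propagate one input pattern through the 2-2-1 network."""
    y3 = sigmoid(x1 * w13 + x2 * w23 - t3)
    y4 = sigmoid(x1 * w14 + x2 * w24 - t4)
    y5 = sigmoid(y3 * w35 + y4 * w45 - t5)
    return y3, y4, y5

epoch, sse = 0, 1.0
while sse >= 0.001 and epoch < 200000:
    epoch += 1
    sse = 0.0
    for x1, x2, yd in XOR:
        y3, y4, y5 = forward(x1, x2)
        e = yd - y5
        sse += e ** 2
        # Step 3: error gradients, output layer then hidden layer
        d5 = y5 * (1 - y5) * e
        d3 = y3 * (1 - y3) * d5 * w35
        d4 = y4 * (1 - y4) * d5 * w45
        # Delta-rule weight and threshold corrections
        w35 += alpha * y3 * d5; w45 += alpha * y4 * d5; t5 -= alpha * d5
        w13 += alpha * x1 * d3; w23 += alpha * x2 * d3; t3 -= alpha * d3
        w14 += alpha * x1 * d4; w24 += alpha * x2 * d4; t4 -= alpha * d4

print("converged after", epoch, "epochs; sse =", round(sse, 6))
for x1, x2, yd in XOR:
    print(x1, x2, "->", round(forward(x1, x2)[2], 2))
```

After convergence the network's outputs round to the XOR truth table, which the single-layer perceptron could not represent.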
  •  [Figure: Three-layer network for solving the Exclusive-OR operation:
     inputs x1 and x2 feed hidden neurons 3 and 4 through weights w13, w14,
     w23 and w24; the hidden outputs feed output neuron 5 through weights
     w35 and w45; thresholds θ3, θ4 and θ5 are weights on fixed inputs of
     -1.]
  •  The effect of the threshold applied to a neuron in the hidden or
     output layer is represented by its weight, θ, connected to a fixed
     input equal to -1. The initial weights and threshold levels are set
     randomly as follows: w13 = 0.5, w14 = 0.9, w23 = 0.4, w24 = 1.0,
     w35 = -1.2, w45 = 1.1, θ3 = 0.8, θ4 = -0.1 and θ5 = 0.3.
  •  We consider a training set where inputs x1 and x2 are equal to 1 and
     desired output yd,5 is 0. The actual outputs of neurons 3 and 4 in the
     hidden layer are calculated as:

       y3 = sigmoid(x1*w13 + x2*w23 - θ3)
          = 1 / (1 + e^-(1*0.5 + 1*0.4 - 1*0.8)) = 0.5250
       y4 = sigmoid(x1*w14 + x2*w24 - θ4)
          = 1 / (1 + e^-(1*0.9 + 1*1.0 + 1*0.1)) = 0.8808

     Now the actual output of neuron 5 in the output layer is determined
     as:

       y5 = sigmoid(y3*w35 + y4*w45 - θ5)
          = 1 / (1 + e^-(-0.5250*1.2 + 0.8808*1.1 - 1*0.3)) = 0.5097

     Thus, the following error is obtained:

       e = yd,5 - y5 = 0 - 0.5097 = -0.5097
  •  The next step is weight training. To update the weights and threshold
     levels in our network, we propagate the error, e, from the output
     layer backward to the input layer. First, we calculate the error
     gradient for neuron 5 in the output layer:

       δ5 = y5 * (1 - y5) * e = 0.5097 * (1 - 0.5097) * (-0.5097) = -0.1274

     Then we determine the weight corrections assuming that the learning
     rate parameter, α, is equal to 0.1:

       Δw35 = α * y3 * δ5 = 0.1 * 0.5250 * (-0.1274) = -0.0067
       Δw45 = α * y4 * δ5 = 0.1 * 0.8808 * (-0.1274) = -0.0112
       Δθ5  = α * (-1) * δ5 = 0.1 * (-1) * (-0.1274) = 0.0127
  •  Next we calculate the error gradients for neurons 3 and 4 in the
     hidden layer:

       δ3 = y3 * (1 - y3) * δ5 * w35
          = 0.5250 * (1 - 0.5250) * (-0.1274) * (-1.2) = 0.0381
       δ4 = y4 * (1 - y4) * δ5 * w45
          = 0.8808 * (1 - 0.8808) * (-0.1274) * 1.1 = -0.0147

     We then determine the weight corrections:

       Δw13 = α * x1 * δ3 = 0.1 * 1 * 0.0381 = 0.0038
       Δw23 = α * x2 * δ3 = 0.1 * 1 * 0.0381 = 0.0038
       Δθ3  = α * (-1) * δ3 = 0.1 * (-1) * 0.0381 = -0.0038
       Δw14 = α * x1 * δ4 = 0.1 * 1 * (-0.0147) = -0.0015
       Δw24 = α * x2 * δ4 = 0.1 * 1 * (-0.0147) = -0.0015
       Δθ4  = α * (-1) * δ4 = 0.1 * (-1) * (-0.0147) = 0.0015
  •  At last, we update all weights and thresholds:

       w13 = 0.5 + 0.0038 = 0.5038
       w14 = 0.9 - 0.0015 = 0.8985
       w23 = 0.4 + 0.0038 = 0.4038
       w24 = 1.0 - 0.0015 = 0.9985
       w35 = -1.2 - 0.0067 = -1.2067
       w45 = 1.1 - 0.0112 = 1.0888
       θ3 = 0.8 - 0.0038 = 0.7962
       θ4 = -0.1 + 0.0015 = -0.0985
       θ5 = 0.3 + 0.0127 = 0.3127

     The training process is repeated until the sum of squared errors is
     less than 0.001.
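The worked iteration above can be checked in a few lines. This is an illustrative sketch (the variable names are mine); it reproduces the forward pass, the error gradients, and the updated weights and thresholds, which agree with the slide's values once rounded to four decimal places.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The slide's initial parameters and the training pattern x1 = x2 = 1, yd5 = 0
w13, w14, w23, w24, w35, w45 = 0.5, 0.9, 0.4, 1.0, -1.2, 1.1
t3, t4, t5 = 0.8, -0.1, 0.3
x1 = x2 = 1
yd5, alpha = 0, 0.1

# Forward pass
y3 = sigmoid(x1 * w13 + x2 * w23 - t3)   # 0.5250
y4 = sigmoid(x1 * w14 + x2 * w24 - t4)   # 0.8808
y5 = sigmoid(y3 * w35 + y4 * w45 - t5)   # 0.5097
e = yd5 - y5                             # -0.5097

# Backward pass: error gradients
d5 = y5 * (1 - y5) * e                   # -0.1274
d3 = y3 * (1 - y3) * d5 * w35            #  0.0381
d4 = y4 * (1 - y4) * d5 * w45            # -0.0147

# Weight and threshold updates (threshold input is fixed at -1)
w13 += alpha * x1 * d3                   #  0.5038
w14 += alpha * x1 * d4                   #  0.8985
w23 += alpha * x2 * d3                   #  0.4038
w24 += alpha * x2 * d4                   #  0.9985
w35 += alpha * y3 * d5                   # -1.2067
w45 += alpha * y4 * d5                   #  1.0888
t3 += alpha * (-1) * d3                  #  0.7962
t4 += alpha * (-1) * d4                  # -0.0985
t5 += alpha * (-1) * d5                  #  0.3127
```

Running the same update in a loop over all four XOR patterns is exactly the training process that the slides continue until the sum of squared errors drops below 0.001.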
  •  [Figure: Learning curve for operation Exclusive-OR: the sum-squared
     network error over 224 epochs falls from about 1 to below 10^-3.]
  •  Final results of three-layer network learning:

       Inputs      Desired     Actual      Error     Sum of
       x1   x2     output yd   output y5   e         squared errors
       1    1      0           0.0155      -0.0155
       0    1      1           0.9849       0.0151   0.0010
       1    0      1           0.9849       0.0151
       0    0      0           0.0175      -0.0175
  •  [Figure: Network represented by McCulloch-Pitts model for solving the
     Exclusive-OR operation: hidden neurons 3 and 4 have thresholds +1.5
     and +0.5, the output neuron 5 has threshold +0.5, and each threshold
     is implemented as a weight on a fixed input of -1.]
  •  Decision boundaries
     [Figure: (a) Decision boundary constructed by hidden neuron 3:
     x1 + x2 - 1.5 = 0; (b) decision boundary constructed by hidden
     neuron 4: x1 + x2 - 0.5 = 0; (c) decision boundaries constructed by
     the complete three-layer network.]
  •  Accelerated learning in multilayer neural networks
     A multilayer network learns much faster when the sigmoidal activation
     function is represented by a hyperbolic tangent:

       Y_tanh = 2a / (1 + e^(-bX)) - a

     where a and b are constants. Suitable values for a and b are:
     a = 1.716 and b = 0.667.
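As a sketch (the function name is my own), the hyperbolic-tangent activation with the suggested constants:

```python
import math

# Constants suggested on the slide
A, B = 1.716, 0.667

def tanh_activation(x, a=A, b=B):
    """Y = 2a / (1 + e^(-b*x)) - a: a sigmoid rescaled to the range (-a, +a)."""
    return 2.0 * a / (1.0 + math.exp(-b * x)) - a
```

Unlike the plain sigmoid, this function is zero at x = 0 and symmetric about the origin, and its outputs span (-1.716, +1.716) rather than (0, 1); these properties are what speed up gradient-based training.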
  •  We also can accelerate training by including a momentum term in the
     delta rule:

       Δw_jk(p) = β * Δw_jk(p-1) + α * y_j(p) * δ_k(p)

     where β is a positive number (0 <= β < 1) called the momentum
     constant. Typically, the momentum constant is set to 0.95. This
     equation is called the generalised delta rule.
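The generalised delta rule is a one-line update. A hedged sketch with names of my choosing; setting β = 0 recovers the plain delta rule from the training algorithm above:

```python
def delta_with_momentum(prev_delta, y_j, delta_k, alpha=0.1, beta=0.95):
    """Generalised delta rule:
    delta_w(p) = beta * delta_w(p-1) + alpha * y_j(p) * delta_k(p)

    prev_delta is the weight correction applied at the previous iteration.
    """
    return beta * prev_delta + alpha * y_j * delta_k
```

Because each correction carries a fraction β of the previous one, successive corrections in the same direction accumulate (speeding descent along a consistent slope), while corrections that keep flipping sign partially cancel (damping oscillation).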
  •  [Figure: Learning with momentum for operation Exclusive-OR: training
     for 126 epochs; sum-squared error and learning rate plotted against
     the epoch.]
  •  Learning with adaptive learning rate
     To accelerate the convergence and yet avoid the danger of instability,
     we can apply two heuristics:
     Heuristic 1. If the change of the sum of squared errors has the same
     algebraic sign for several consecutive epochs, then the learning rate
     parameter, α, should be increased.
     Heuristic 2. If the algebraic sign of the change of the sum of squared
     errors alternates for several consecutive epochs, then the learning
     rate parameter, α, should be decreased.
Adapting the learning rate requires some changes in the back-propagation algorithm.

If the sum of squared errors at the current epoch exceeds the previous value by more than a predefined ratio (typically 1.04), the learning rate parameter is decreased (typically by multiplying by 0.7) and new weights and thresholds are calculated.

If the error is less than the previous one, the learning rate is increased (typically by multiplying by 1.05).
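The adaptation rule above can be sketched directly; the thresholds 1.04, 0.7 and 1.05 are the typical values quoted on the slide, and the function name is mine:

```python
def adapt_learning_rate(alpha, sse_current, sse_previous,
                        ratio=1.04, down=0.7, up=1.05):
    """Adjust the learning rate from the change in sum-squared error."""
    if sse_current > ratio * sse_previous:
        return alpha * down   # error grew by more than 4%: back off
    if sse_current < sse_previous:
        return alpha * up     # error decreased: speed up slightly
    return alpha              # otherwise leave the rate unchanged
```

In the full algorithm, the "back off" branch would also discard the weight update that caused the error jump and recompute it with the smaller rate.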
Learning with adaptive learning rate

[Figure: sum-squared error and learning rate plotted against epoch; training converges after 103 epochs.]
Learning with momentum and adaptive learning rate

[Figure: sum-squared error and learning rate plotted against epoch; training converges after 85 epochs.]
The Hopfield Network

Neural networks were designed on analogy with the brain. The brain's memory, however, works by association. For example, we can recognise a familiar face even in an unfamiliar environment within 100–200 ms. We can also recall a complete sensory experience, including sounds and scenes, when we hear only a few bars of music. The brain routinely associates one thing with another.
Multilayer neural networks trained with the back-propagation algorithm are used for pattern recognition problems. However, to emulate the human memory's associative characteristics we need a different type of network: a recurrent neural network.

A recurrent neural network has feedback loops from its outputs to its inputs. The presence of such loops has a profound impact on the learning capability of the network.
The stability of recurrent networks intrigued several researchers in the 1960s and 1970s. However, none was able to predict which network would be stable, and some researchers were pessimistic about finding a solution at all. The problem was solved only in 1982, when John Hopfield formulated the physical principle of storing information in a dynamically stable network.
Single-layer n-neuron Hopfield network

[Figure: n neurons receiving input signals x_1, x_2, ..., x_n and producing output signals y_1, y_2, ..., y_n, with every output fed back as an input to the other neurons.]
The Hopfield network uses McCulloch and Pitts neurons with the sign activation function as its computing element:

Y^{sign} = \begin{cases} +1, & \text{if } X > 0 \\ -1, & \text{if } X < 0 \\ Y, & \text{if } X = 0 \end{cases}
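The zero case is easy to miss: when the net input is exactly zero, the neuron keeps its previous output rather than picking a sign. A minimal sketch (the function name is mine):

```python
def sign_activation(x, y_prev):
    """Sign activation of a Hopfield neuron.
    Returns +1 for positive net input, -1 for negative net input,
    and the previous output y_prev when the net input is exactly zero."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return y_prev
```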
The current state of the Hopfield network is determined by the current outputs of all neurons, y_1, y_2, ..., y_n.

Thus, for a single-layer n-neuron network, the state can be defined by the state vector as:

\mathbf{Y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
In the Hopfield network, synaptic weights between neurons are usually represented in matrix form as follows:

\mathbf{W} = \sum_{m=1}^{M} \mathbf{Y}_m \mathbf{Y}_m^T - M\,\mathbf{I}

where M is the number of states to be memorised by the network, \mathbf{Y}_m is the n-dimensional binary vector, \mathbf{I} is the n \times n identity matrix, and superscript T denotes matrix transposition.
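The weight-matrix formula can be sketched in plain Python without any matrix library, as a sum of outer products minus M on the diagonal (the function name is mine):

```python
def hopfield_weights(memories):
    """Build W = sum_m Y_m Y_m^T - M*I for a list of bipolar (+1/-1)
    memory vectors, returned as a list of row lists."""
    n = len(memories[0])
    M = len(memories)
    W = [[0] * n for _ in range(n)]
    for Y in memories:                 # accumulate the outer products
        for i in range(n):
            for j in range(n):
                W[i][j] += Y[i] * Y[j]
    for i in range(n):                 # subtract M*I: zero self-connections
        W[i][i] -= M
    return W
```

For the two opposite memories used in the example that follows, this reproduces the weight matrix derived on the slides.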
Possible states for the three-neuron Hopfield network

[Figure: a cube in (y_1, y_2, y_3) space whose eight vertices are the possible states (±1, ±1, ±1).]
The stable state-vertex is determined by the weight matrix W, the current input vector X, and the threshold matrix \theta. If the input vector is partially incorrect or incomplete, the initial state will converge into the stable state-vertex after a few iterations.

Suppose, for instance, that our network is required to memorise two opposite states, (1, 1, 1) and (−1, −1, −1). Thus,

\mathbf{Y}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad \mathbf{Y}_2 = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}

or

\mathbf{Y}_1^T = [\,1 \;\; 1 \;\; 1\,], \quad \mathbf{Y}_2^T = [\,-1 \;\; -1 \;\; -1\,]

where Y_1 and Y_2 are the three-dimensional vectors.
The 3 × 3 identity matrix I is

\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

Thus, we can now determine the weight matrix as follows:

\mathbf{W} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} [\,1 \;\; 1 \;\; 1\,] + \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} [\,-1 \;\; -1 \;\; -1\,] - 2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix}

Next, the network is tested by the sequence of input vectors, X_1 and X_2, which are equal to the output (or target) vectors Y_1 and Y_2, respectively.
First, we activate the Hopfield network by applying the input vector X. Then, we calculate the actual output vector Y, and finally, we compare the result with the initial input vector X.

\mathbf{Y}_1 = \mathrm{sign}\left( \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}

\mathbf{Y}_2 = \mathrm{sign}\left( \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix} \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}
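The recall step Y = sign(W·X − θ) with θ = 0 can be checked with a short sketch. The weight matrix is the one derived in the example; a neuron whose net input is zero keeps its current output, per the sign-activation definition (the function name is mine):

```python
W = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]  # weight matrix from the example

def recall(W, x):
    """One synchronous update Y = sign(W*x) with zero thresholds;
    a neuron with zero net input keeps its current output."""
    y = []
    for row, xi in zip(W, x):
        net = sum(w * xj for w, xj in zip(row, x))
        y.append(1 if net > 0 else -1 if net < 0 else xi)
    return y
```

Both fundamental memories are fixed points of this update, and a state with a single flipped bit is pulled back to the nearer memory in one step.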
The remaining six states are all unstable. However, stable states (also called fundamental memories) are capable of attracting states that are close to them.

The fundamental memory (1, 1, 1) attracts unstable states (−1, 1, 1), (1, −1, 1) and (1, 1, −1). Each of these unstable states represents a single error, compared to the fundamental memory (1, 1, 1).

The fundamental memory (−1, −1, −1) attracts unstable states (−1, −1, 1), (−1, 1, −1) and (1, −1, −1).

Thus, the Hopfield network can act as an error correction network.
Storage capacity of the Hopfield network

Storage capacity is the largest number of fundamental memories that can be stored and retrieved correctly.

The maximum number of fundamental memories M_max that can be stored in the n-neuron recurrent network is limited by

M_{max} = 0.15\,n
Bidirectional associative memory (BAM)

The Hopfield network represents an autoassociative type of memory: it can retrieve a corrupted or incomplete memory but cannot associate this memory with another different memory.

Human memory is essentially associative. One thing may remind us of another, and that of another, and so on. We use a chain of mental associations to recover a lost memory. If we forget where we left an umbrella, we try to recall where we last had it, what we were doing, and who we were talking to. We attempt to establish a chain of associations, and thereby to restore a lost memory.
To associate one memory with another, we need a recurrent neural network capable of accepting an input pattern on one set of neurons and producing a related, but different, output pattern on another set of neurons.

Bidirectional associative memory (BAM), first proposed by Bart Kosko, is a heteroassociative network. It associates patterns from one set, set A, to patterns from another set, set B, and vice versa. Like a Hopfield network, the BAM can generalise and also produce correct outputs despite corrupted or incomplete inputs.
BAM operation

[Figure: (a) forward direction — an input layer of n neurons x_1(p), ..., x_n(p) drives an output layer of m neurons y_1(p), ..., y_m(p); (b) backward direction — the outputs y_1(p), ..., y_m(p) drive the input layer to produce x_1(p+1), ..., x_n(p+1).]
The basic idea behind the BAM is to store pattern pairs so that when n-dimensional vector X from set A is presented as input, the BAM recalls m-dimensional vector Y from set B, but when Y is presented as input, the BAM recalls X.
To develop the BAM, we need to create a correlation matrix for each pattern pair we want to store. The correlation matrix is the matrix product of the input vector X and the transpose of the output vector, Y^T. The BAM weight matrix is the sum of all correlation matrices, that is,

\mathbf{W} = \sum_{m=1}^{M} \mathbf{X}_m \mathbf{Y}_m^T

where M is the number of pattern pairs to be stored in the BAM.
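A minimal sketch of the BAM weight matrix as a sum of correlation (outer-product) matrices, together with one forward recall Y = sign(WᵀX). The function names and the tie-breaking choice for zero net input are mine, and plain lists are used so no external library is assumed:

```python
def bam_weights(pairs):
    """W = sum_m X_m Y_m^T for bipolar pattern pairs (X, Y);
    W has one row per input neuron and one column per output neuron."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for X, Y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += X[i] * Y[j]
    return W

def bam_forward(W, x):
    """Forward recall: y_j = sign(sum_i W[i][j] * x_i).
    Zero net input is mapped to +1 here (an arbitrary tie-break)."""
    return [1 if sum(W[i][j] * x[i] for i in range(len(x))) >= 0 else -1
            for j in range(len(W[0]))]
```

Backward recall works symmetrically with the transposed matrix, X = sign(WY), which is what makes the memory bidirectional.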
Stability and storage capacity of the BAM

The BAM is unconditionally stable. This means that any set of associations can be learned without risk of instability.

The maximum number of associations to be stored in the BAM should not exceed the number of neurons in the smaller layer.

The more serious problem with the BAM is incorrect convergence. The BAM may not always produce the closest association. In fact, a stable association may be only slightly related to the initial input vector.