Neural Networks

Neural Networks - Artificial Intelligence Course

Published in: Technology


  1. 1. By: Eng. Ismail El-Gayar. Under supervision of Prof. Dr. Sheren Youssef.
  2. 2. Outline:
     • Introduction
     • Understanding the Brain
     • Neural Networks as a Paradigm for Parallel Processing
     • The Perceptron Network
     • Training a Perceptron
     • Multilayer Perceptrons
     • Backpropagation Algorithm: two-class discrimination, multiclass discrimination, multiple hidden layers
     • Training Procedures: improving convergence, momentum, adaptive learning rate
     • Learning Time: time-delay neural networks, recurrent networks
  3. 3. How does our brain manipulate patterns? The human brain contains a massively interconnected net of 10^10 to 10^11 (tens of billions of) neurons. A process of pattern recognition and pattern manipulation is based on:
     • Massive parallelism: the brain, as an information (signal) processing system, is composed of a large number of simple processing elements called neurons. These neurons are interconnected by numerous direct links, called connections, and cooperate with each other to perform parallel distributed processing (PDP) in order to solve a desired computational task.
     • Connectionism: the brain is a system of highly interconnected neurons, such that the state of one neuron affects the potential of the large number of other neurons to which it is connected, according to connection weights (strengths). The key idea of this principle is that the functional capacity of biological neural nets is determined mostly not by a single neuron but by its connections.
     • Associative distributed memory: storage of information in the brain is supposed to be concentrated in the synaptic connections of the brain's neural network, or more precisely, in the pattern of these connections and in the strengths (weights) of the synaptic connections.
  4. 4. The Biological Neuron: the schematic model of a biological neuron consists of the soma, axon, dendrites and synapses.
     1. Soma (cell body): a large, round central body in which almost all the logical functions of the neuron are realized.
     2. Axon (output): a nerve fibre attached to the soma which serves as the final output channel of the neuron. An axon is usually highly branched.
     3. Dendrites (inputs): a highly branching tree of fibres. These long, irregularly shaped nerve fibres (processes) are attached to the soma.
     4. Synapses: specialized contacts on a neuron which are the termination points for the axons from other neurons.
  5. 5. Brain-Like Computer: a brain-like computer is a mathematical model of human-brain principles of computation. This computer consists of elements that can be called biological-neuron prototypes, which are interconnected by direct links called connections and which cooperate to perform parallel distributed processing (PDP) in order to solve a desired computational task. The new paradigm of computing mathematics consists of combining such artificial neurons into an artificial neural net: an artificial neural network is a mathematical paradigm of a brain-like computer.
  6. 6. ANN as a Model of a Brain-Like Computer: an artificial neural network (ANN) is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. This means that:
     • Knowledge is acquired by the network through a learning (training) process.
     • The strengths of the interconnections between neurons (the synaptic weights) are used to store the knowledge.
     The learning process is a procedure for adapting the weights with a learning algorithm in order to capture the knowledge. More mathematically, the aim of the learning process is to map a given relation between the inputs and the output(s) of the network. The human brain itself is still not well understood, and indeed its behaviour is very complex: there are about 10 billion neurons in the human cortex and 60 trillion synaptic connections, and the brain is a highly complex, nonlinear and parallel computer (information-processing system).
  7. 7. Applications of Artificial Neural Networks: intelligent control, technical diagnostics, intelligent data analysis and signal processing, advanced robotics, machine vision, image and pattern recognition, intelligent security systems, intelligent medical devices, and intelligent expert systems.
  8. 8. Artificial Neural Networks
  9. 9. Perceptrons:
     • Multiple input nodes and a single output node.
     • The perceptron takes a weighted sum of the inputs; call this S.
     • A unit function calculates the output of the network from S.
     Perceptrons are useful to study because we can use them to build larger networks, and because they have limited representational abilities; we will look at concepts they cannot learn later.
  10. 10. Why neural networks? We want to learn an unknown multi-factor decision rule f(x1, ..., xn). The learning process, using a representative learning set, produces a vector of weights (w0, w1, ..., wn). The result is a partially defined function f̂(x1, ..., xn) = P(w0 + w1·x1 + ... + wn·xn), which is an approximation of the decision-rule function.
  11. 11. Artificial Neuron: f is the function to be learned, x1, ..., xn are the inputs, and φ is the activation function. The neuron computes the weighted sum z = w0 + w1·x1 + ... + wn·xn and outputs f(x1, ..., xn) = φ(z) = φ(w0 + w1·x1 + ... + wn·xn).
  12. 12. Perceptron Output: computed using the hardlims (symmetric hard-limit) function, which outputs +1 when the weighted sum is non-negative and -1 otherwise.
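As a minimal sketch (the weights, inputs and function names below are illustrative, not from the slides), the hardlims-based perceptron output can be written as:

```python
# Hardlims perceptron output: weighted sum S, then a symmetric hard limit.

def hardlims(s):
    """Symmetric hard limit: +1 for s >= 0, -1 otherwise."""
    return 1 if s >= 0 else -1

def perceptron_output(weights, inputs, bias):
    # S is the bias plus the weighted sum of the inputs.
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return hardlims(s)

print(perceptron_output([0.5, -0.5], [1, -1], 0.0))  # S = 1.0, so +1
```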
  13. 13. Simple Example: Categorising Vehicles. Input to the function: pixel data from vehicle images. Output: a number, 1 for a car, 2 for a bus, 3 for a tank. (The slide shows four example input images with outputs 3, 2, 1 and 1.)
  14. 14. General Idea: input numbers enter at the input layer and the values propagate through the hidden layers, where each unit's value is calculated using all the input unit values. The output layer produces one value per category (e.g. Cat A, Cat B, Cat C), and the network chooses the category with the largest output value.
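The propagation idea above can be sketched in a few lines; the layer sizes and weight values below are made up for illustration:

```python
# Tiny fully-connected feed-forward pass: values propagate input -> hidden
# -> output, and the largest output value picks the category.

def layer(values, weights):
    """Each unit's value is a weighted sum over ALL incoming values."""
    return [sum(w * v for w, v in zip(row, values)) for row in weights]

inputs = [1.0, 0.5]
hidden = layer(inputs, [[0.2, -0.4], [0.7, 0.1], [-0.3, 0.9]])  # 3 hidden units
outputs = layer(hidden, [[0.5, 0.5, 0.5], [-0.5, 0.2, 0.1]])    # 2 categories

# Choose the category with the largest output value.
best = max(range(len(outputs)), key=lambda i: outputs[i])
print("chosen category:", best)
```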
  15. 15. Calculation Example: categorisation of 2x2-pixel black-and-white images into "bright" and "dark". The rule to represent: if the image contains 2, 3 or 4 white pixels it is "bright"; if it contains 0 or 1 white pixels it is "dark". Perceptron architecture: four input units, one for each pixel, and one output unit: +1 for "bright", -1 for "dark".
  16. 16. Calculation Example (continued): with inputs x1 = -1, x2 = 1, x3 = 1, x4 = -1 and each weight equal to 0.25, S = 0.25·(-1) + 0.25·(1) + 0.25·(1) + 0.25·(-1) = 0. Since 0 > -0.1 (the threshold), the output from the ANN is +1, so the image is categorised as "bright".
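This calculation can be checked directly; the helper below is a sketch using the slide's weights and threshold:

```python
# The slide's bright/dark check: equal weights of 0.25 on the four pixel
# inputs and a threshold of -0.1.

def categorise(pixels, weights, threshold):
    s = sum(w * x for w, x in zip(weights, pixels))
    return 1 if s > threshold else -1  # +1 = "bright", -1 = "dark"

x = [-1, 1, 1, -1]            # two white pixels (+1), two black (-1)
w = [0.25, 0.25, 0.25, 0.25]
print(categorise(x, w, -0.1))  # S = 0 and 0 > -0.1, so +1 ("bright")
```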
  17. 17. Unit Functions:
     • Linear functions: simply output the weighted sum.
     • Threshold functions: output low values until the weighted sum gets over a threshold, then output high values; the equivalent of the "firing" of neurons.
     • Step function: output +1 if S > threshold T, output -1 otherwise.
     • Sigma function: similar to the step function, but differentiable.
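A sketch of the step and sigma unit functions (the threshold T defaults to 0 here purely for illustration):

```python
import math

def step(s, t=0.0):
    """Step unit: output +1 if S > T, -1 otherwise."""
    return 1 if s > t else -1

def sigma(s):
    """Sigma (sigmoid) unit: smooth squashing to (0, 1), differentiable."""
    return 1.0 / (1.0 + math.exp(-s))

print(step(0.5), step(-0.5))  # 1 -1
print(round(sigma(0.0), 2))   # 0.5
```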
  18. 18. Learning in Perceptrons
  19. 19. Learning Process of an ANN: the network learns from experience, via learning algorithms, to recognise patterns of activity. Learning involves three tasks repeated in a loop: compute the outputs, compare the outputs with the desired targets, and adjust the weights; the process repeats until the desired output is achieved, then stops.
  20. 20. Training a Perceptron: each weight is adjusted by Δwi = η·(t - o)·xi, where η is the learning rate, t is the target output, o is the actual output, and xi is the input.
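The update rule can be sketched as one line per weight (the function name and example values are illustrative):

```python
# One application of the rule: every weight moves by eta * (t - o) * x_i.

def update_weights(weights, x, target, output, eta):
    """Return the weights after one perceptron-rule update."""
    return [w + eta * (target - output) * xi for w, xi in zip(weights, x)]

# Target +1, actual output -1: each weight shifts toward its input, scaled by 2*eta.
new_w = update_weights([0.5, -0.5], [1, -1], target=1, output=-1, eta=0.1)
print([round(w, 1) for w in new_w])  # [0.7, -0.7]
```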
  21. 21. Worked Example: return to the "bright" and "dark" example and use a learning rate of η = 0.1. Suppose we have set random initial weights: w0 = -0.5, w1 = 0.7, w2 = -0.2, w3 = 0.1, w4 = 0.9.
  22. 22. Worked Example: use this training example, E, to update the weights. Here x1 = -1, x2 = 1, x3 = 1, x4 = -1 as before. Propagate this information through the network: S = (-0.5 · 1) + (0.7 · -1) + (-0.2 · 1) + (0.1 · 1) + (0.9 · -1) = -2.2. Hence the network outputs o(E) = -1, but this should have been "bright" = +1, so t(E) = +1.
  23. 23. Calculating the Error Values:
     Δ0 = η·(t(E) - o(E))·x0 = 0.1 · (1 - (-1)) · (1) = 0.1 · (2) = 0.2
     Δ1 = η·(t(E) - o(E))·x1 = 0.1 · (1 - (-1)) · (-1) = 0.1 · (-2) = -0.2
     Δ2 = η·(t(E) - o(E))·x2 = 0.1 · (1 - (-1)) · (1) = 0.1 · (2) = 0.2
     Δ3 = η·(t(E) - o(E))·x3 = 0.1 · (1 - (-1)) · (1) = 0.1 · (2) = 0.2
     Δ4 = η·(t(E) - o(E))·x4 = 0.1 · (1 - (-1)) · (-1) = 0.1 · (-2) = -0.2
  24. 24. Calculating the New Weights:
     w'0 = -0.5 + Δ0 = -0.5 + 0.2 = -0.3
     w'1 = 0.7 + Δ1 = 0.7 + (-0.2) = 0.5
     w'2 = -0.2 + Δ2 = -0.2 + 0.2 = 0
     w'3 = 0.1 + Δ3 = 0.1 + 0.2 = 0.3
     w'4 = 0.9 + Δ4 = 0.9 - 0.2 = 0.7
  25. 25. New-Look Perceptron: calculate for the example, E, again: S = (-0.3 · 1) + (0.5 · -1) + (0 · 1) + (0.3 · 1) + (0.7 · -1) = -1.2. The network still gets the categorisation wrong, but the value is closer to zero (from -2.2 to -1.2); in a few epochs' time, this example will be correctly categorised.
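The whole worked example can be reproduced in a few lines, treating w0 as the bias weight with a fixed input x0 = 1 (as the slides' sums do):

```python
# One pass of the worked example: the weighted sum for E moves from -2.2
# to -1.2 after a single perceptron-rule update.

ETA = 0.1
w = [-0.5, 0.7, -0.2, 0.1, 0.9]  # w0 is the bias weight (fixed input x0 = 1)
x = [1, -1, 1, 1, -1]            # x0 = 1, then the four pixel inputs
t = 1                            # target: "bright"

def weighted_sum(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

s = weighted_sum(w, x)           # -2.2, so the output o(E) is -1 (wrong)
o = 1 if s > 0 else -1
w = [wi + ETA * (t - o) * xi for wi, xi in zip(w, x)]
print([round(wi, 1) for wi in w])    # [-0.3, 0.5, 0.0, 0.3, 0.7]
print(round(weighted_sum(w, x), 1))  # -1.2: still wrong, but closer to zero
```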
  26. 26. Time-Delay Neural Network (TDNN): an alternative neural-network architecture whose primary purpose is to work on continuous data. Its advantage is that the network can be adapted online, which is helpful in many real-time applications such as time-series prediction, online spell checking and continuous speech recognition. The architecture has a continuous input that is delayed and sent as an input to the neural network. As an example, consider a feed-forward neural network being trained for time-series prediction: the desired output of the network is the present state of the time series, and the inputs to the network are the delayed time series (its past values). The output of the neural network is then the predicted next value in the time series, computed as a function of the past values.
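The windowing of past values into training pairs can be sketched as follows (the window length and toy series are illustrative):

```python
# Turning a time series into TDNN-style training pairs: a window of past
# values as the input, the present value as the desired output.

def make_delay_pairs(series, delay=3):
    """Return (past-values window, present value) pairs."""
    return [(series[i - delay:i], series[i]) for i in range(delay, len(series))]

for past, present in make_delay_pairs([1, 2, 3, 4, 5, 6]):
    print(past, "->", present)  # e.g. [1, 2, 3] -> 4
```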
  27. 27. Recurrent Neural Networks
  28. 28. Types of ANN: feed-forward and feedback.
     • Feed-forward networks allow signals to travel one way only, from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs; they are extensively used in pattern recognition.
     • Feedback networks can have signals travelling in both directions, by introducing loops in the network. Feedback networks are very powerful and can get extremely complicated; they are dynamic.
  29. 29. Some Topologies of ANN: fully-connected feed-forward, partially recurrent, and fully recurrent networks.
  30. 30. Recurrent Neural Networks: a recurrent neural network is a class of neural network where connections between units form a directed cycle. This creates an internal state in the network, which allows it to exhibit dynamic temporal behaviour. Examples include the partially recurrent and the fully recurrent network.
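A sketch of the internal-state idea: a single recurrent unit whose previous output feeds back into its next update (the weights and the tanh activation are illustrative choices, not from the slides):

```python
import math

def rnn_step(x, state, w_in=0.5, w_rec=0.8):
    """One recurrent update: the new state depends on the input AND the previous state."""
    return math.tanh(w_in * x + w_rec * state)

state = 0.0
for x in [1.0, 0.0, 0.0]:
    state = rnn_step(x, state)
    print(round(state, 3))  # the state decays gradually, "remembering" the input
```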
  31. 31. References:
     [1] Simon Colton, www.doc.ic.ac.uk/~sgc/teaching/v231/
     [2] http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html
     [3] http://www.willamette.edu/~gorr/classes/cs449/intro.html
     [4] http://www.scribd.com/doc/12774663/Neural-Network-Presentation
     [5] http://www.speech.sri.com/people/anand/771/html/node32.html
     [6] http://en.wikipedia.org/wiki/Recurrent_neural_network
