Neural network & its applications


This presentation introduces neural networks, a technology for recognising patterns, and some of their applications.


  1. 1. Neural Networks and Its Applications. Presented by: Ahmed Hashmi, Chinmoy Das
  2. 2. What is a neural network? An Artificial Neural Network (ANN) is an information-processing paradigm inspired by biological nervous systems. It is composed of a large number of highly interconnected processing elements called neurons. An ANN is configured for a specific application, such as pattern recognition or data classification.
  3. 3. Why use neural networks? The ability to derive meaning from complicated or imprecise data; to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques; adaptive learning; real-time operation.
  4. 4. Neural Networks vs. Conventional Computers: Conventional computers use an algorithmic approach, whereas a neural network works similarly to the human brain and learns by example.
  5. 5. Inspiration from Neurobiology. A neuron is a many-inputs / one-output unit. Its output can be excited or not excited; incoming signals from other neurons determine whether the neuron shall excite ("fire"). The output is subject to attenuation in the synapses, which are junction parts of the neuron.
  6. 6. A simple neuron takes the inputs, calculates the summation of the inputs, and compares it with the threshold set during the learning stage.
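The steps above can be sketched in a few lines of Python. This is a minimal illustration; the weights and the threshold value are invented for the example, not taken from the slides:

```python
# A minimal sketch of the simple neuron described above: sum the
# (weighted) inputs and compare the sum with a threshold.
# The weights and threshold are illustrative assumptions.

def simple_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: two inputs with equal weights and a threshold of 1.5
# behaves like a logical AND.
print(simple_neuron([1, 1], [1.0, 1.0], 1.5))  # → 1 (fires)
print(simple_neuron([1, 0], [1.0, 1.0], 1.5))  # → 0 (does not fire)
```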
  7. 7. Firing Rules. A firing rule determines how one calculates whether a neuron should fire for any input pattern. Some input sets cause it to fire (the 1-taught set of patterns) and others prevent it from doing so (the 0-taught set).
  8. 8. Example… A 3-input neuron is taught to output 1 when the input (X1, X2, X3) is 111 or 101, and to output 0 when the input is 000 or 001:

     X1:  0   0   0   0   1   1   1   1
     X2:  0   0   1   1   0   0   1   1
     X3:  0   1   0   1   0   1   0   1
     OUT: 0   0  0/1 0/1 0/1  1  0/1  1
  9. 9. Example… Take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements and from 111 in 2 elements. Therefore the nearest pattern is 000, which belongs to the 0-taught set, so the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, so its output stays undefined (0/1). Applying the rule to every pattern gives:

     X1:  0   0   0   0   1   1   1   1
     X2:  0   0   1   1   0   0   1   1
     X3:  0   1   0   1   0   1   0   1
     OUT: 0   0   0  0/1 0/1  1   1   1
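The nearest-pattern reasoning in this example can be sketched as code. This is a minimal illustration of the Hamming-distance firing rule described above:

```python
# Nearest-pattern firing rule: compare an input pattern's Hamming
# distance to the 1-taught and 0-taught sets; if one set is strictly
# closer, the neuron takes that output; otherwise it is undefined.

def hamming(a, b):
    """Number of positions in which two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, taught_one, taught_zero):
    d1 = min(hamming(pattern, t) for t in taught_one)
    d0 = min(hamming(pattern, t) for t in taught_zero)
    if d1 < d0:
        return "1"
    if d0 < d1:
        return "0"
    return "0/1"  # equally distant from both sets: undefined

one_set = [(1, 1, 1), (1, 0, 1)]   # patterns taught to output 1
zero_set = [(0, 0, 0), (0, 0, 1)]  # patterns taught to output 0

print(firing_rule((0, 1, 0), one_set, zero_set))  # → "0" (nearest is 000)
print(firing_rule((0, 1, 1), one_set, zero_set))  # → "0/1" (undefined)
```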
  10. 10. Types of neural network: fixed networks, in which the weights cannot be changed, i.e. dW/dt = 0 (the weights are fixed a priori according to the problem to solve), and adaptive networks, which are able to change their weights, i.e. dW/dt ≠ 0.
  11. 11. The Learning Process. Associative mapping, in which the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied on the set of input units. Associative mapping can generally be broken down into two mechanisms:
  12. 12. Hetero-association is related to two recall mechanisms: nearest-neighbour recall, where the output pattern produced corresponds to the stored input pattern closest to the pattern presented, and interpolative recall, where the output pattern is a similarity-dependent interpolation of the stored patterns corresponding to the pattern presented. Yet another paradigm, a variant of associative mapping, is classification, i.e. when there is a fixed set of categories into which the input patterns are to be classified.
  13. 13. Supervised Learning incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. During the learning process, global information may be required. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning. An important issue concerning supervised learning is the problem of error convergence, i.e. the minimisation of error between the desired and computed unit values. The aim is to determine a set of weights which minimises the error. One well-known method, common to many learning paradigms, is least mean square (LMS) convergence.
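As a rough illustration of LMS-style error-correction learning, a single linear unit can be trained with the Widrow-Hoff update. The training data and learning rate here are invented for the example:

```python
# Least-mean-square (LMS) learning for one linear unit: repeatedly
# nudge the weights in proportion to (desired - computed) output.

def lms_train(samples, targets, lr=0.1, epochs=100):
    """Adjust weights to minimise the squared error between the
    desired and computed unit values (Widrow-Hoff rule)."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, d in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))  # computed output
            err = d - y                               # desired - computed
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Learn the mapping y = 2*x + 1 (the bias is folded in as a constant input).
samples = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]  # (bias, x)
targets = [1.0, 3.0, 5.0]
w = lms_train(samples, targets)
print(w)  # approaches [1.0, 2.0]
```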
  14. 14. Unsupervised Learning uses no external teacher and is based upon only local information. It is also referred to as self-organisation, in the sense that it self-organises data presented to the network and detects their emergent collective properties. Another aspect of learning concerns whether there is a separate phase during which the network is trained, followed by a subsequent operation phase. A neural network learns off-line if the learning phase and the operation phase are distinct; it learns on-line if it learns and operates at the same time. Usually, supervised learning is performed off-line, whereas unsupervised learning is performed on-line.
  15. 15. Back-propagation Algorithm. It calculates how the error changes as each weight is increased or decreased slightly. The algorithm computes each EW (the rate at which the error changes as a weight is changed) by first computing the EA, the rate at which the error changes as the activity level of a unit is changed. For output units, the EA is simply the difference between the actual and the desired output.
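For a single sigmoid output unit, the quantities named above can be sketched as follows. The inputs, weights and desired output are illustrative assumptions:

```python
# Back-propagation quantities for one sigmoid output unit:
# EA = dE/d(activity) = actual - desired (for squared error),
# EW = dE/d(weight) for each incoming weight.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def output_gradients(inputs, weights, desired):
    net = sum(w * x for w, x in zip(weights, inputs))
    actual = sigmoid(net)
    ea = actual - desired                # EA, with E = 0.5*(actual-desired)^2
    ei = ea * actual * (1.0 - actual)    # dE/d(net input), via sigmoid derivative
    ew = [ei * x for x in inputs]        # EW for each incoming weight
    return actual, ea, ew

actual, ea, ew = output_gradients([1.0, 0.5], [0.2, -0.3], desired=1.0)
print(ea, ew)
```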
  16. 16. Transfer Function. The behaviour of an ANN (Artificial Neural Network) depends on both the weights and the input-output function (transfer function) specified for the units. This function typically falls into one of three categories: linear (or ramp), threshold, or sigmoid. For linear units, the output activity is proportional to the total weighted input. For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value. For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurones than do linear or threshold units, but all three must be considered rough approximations.
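The three categories can be sketched as simple Python functions (a minimal illustration):

```python
# The three transfer-function categories described above.
import math

def linear(x, slope=1.0):
    """Output proportional to the total weighted input."""
    return slope * x

def threshold(x, theta=0.0):
    """Output at one of two levels, depending on the threshold."""
    return 1.0 if x > theta else 0.0

def sigmoid(x):
    """Output varies continuously but not linearly with the input."""
    return 1.0 / (1.0 + math.exp(-x))

for f in (linear, threshold, sigmoid):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])
```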
  17. 17. Application: Introduction. Features of fingerprints; fingerprint recognition system; why neural networks?; goal of the system; preprocessing system; feature extraction using neural networks; classification result.
  18. 18. Features of fingerprints. Fingerprints are the unique pattern of ridges and valleys on every person's fingers. The pattern is permanent and unchangeable over a person's whole life. Fingerprints are unique: the probability that two fingerprints are alike is only 1 in 1.9x10^15. This uniqueness is used for identification of a person.
  19. 19. Fingerprint recognition system: image acquisition → edge detection → ridge extraction → thinning → feature extraction → classification. Image acquisition: the acquired image is digitized into a 512x512 image, with each pixel assigned a particular gray-scale value (raster image). Edge detection and thinning: these preprocessing steps remove noise and enhance the image.
  20. 20. Fingerprint recognition system. Feature extraction: this is the step where we pick out features, such as ridge bifurcations and ridge endings of the fingerprint, with the help of a neural network. Classification: here a class label is assigned to the image depending on the extracted features.
  21. 21. Why use neural networks? Neural networks enable us to find solutions where algorithmic methods are computationally intensive or do not exist. There is no need to program neural networks; they learn from examples. Neural networks offer a significant speed advantage over conventional techniques.
  22. 22. Preprocessing system. The first phase of fingerprint recognition is to capture an image. The image is captured using total internal reflection (TIR) of light. The image is stored as a two-dimensional array of size 512x512, each element of the array representing a pixel assigned a gray-scale value from 256 gray-scale levels.
  23. 23. Preprocessing system. After the image is captured, noise is removed using edge detection, ridge extraction and thinning. Edge detection: an edge of the image is defined where the gray-scale level changes greatly; also, the orientation of ridges is determined for each 32x32 block of pixels using the gray-scale gradient. Ridge extraction: ridges are extracted using the fact that the gray-scale value of pixels is maximum along the direction normal to the ridge orientation.
  24. 24. Preprocessing system. Thinning: the extracted ridges are converted into a skeletal structure in which ridges are only one pixel wide. Thinning should not remove isolated or surrounded pixels, break connectedness, or make the image shorter.
  25. 25. Feature extraction using neural networks. A multilayer perceptron network of three layers is trained to detect minutiae in the thinned image: the first layer has nine perceptrons, the hidden layer has five perceptrons, and the output layer has one perceptron. The network is trained to output '1' when the input window is centered on a minutia and '0' when no minutia is present.
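A minimal sketch of this 9-5-1 architecture's forward pass is shown below. The weights are random placeholders standing in for a trained detector, so the output is only illustrative:

```python
# A 9-5-1 multilayer perceptron that looks at a 3x3 window of the
# thinned image (nine pixel inputs) and outputs a value near 1 at a
# minutia and near 0 elsewhere. Weights here are random placeholders;
# a real detector would be trained first.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    """One fully connected layer of sigmoid perceptrons."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

# 9 inputs -> 5 hidden perceptrons -> 1 output perceptron
hidden_w = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(5)]
output_w = [[random.uniform(-1, 1) for _ in range(5)]]

window = [0, 1, 0, 0, 1, 0, 0, 1, 1]  # a 3x3 patch of the thinned image
score = layer(layer(window, hidden_w), output_w)[0]
print(score)  # between 0 and 1; thresholded to decide "minutia or not"
```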
  26. 26. Feature extraction using neural networks. The trained neural network is used to analyze the image by scanning it with a 3x3 window. To avoid falsely reported features due to noise, the size of the scanning window is increased to 5x5, and if minutiae are too close to each other then we ignore all of them.
  27. 27. Classification. Fingerprints can be classified into four main classes depending upon their general pattern: arch, tented arch, right loop, and left loop.
  28. 28. Applications of Fingerprint Recognition. As a fingerprint recognition system can be easily embedded in any system, it is used for: recognition of criminals by law-enforcement bodies; providing security for cars, lockers, banks and shops; differentiating between people who have and have not voted in government elections; and counting individuals.
  29. 29. Neural Network Toolbox in MATLAB. Neural Network Toolbox™ provides tools for designing, implementing, visualizing, and simulating neural networks. Neural networks are used for applications where formal analysis would be difficult or impossible, such as pattern recognition and nonlinear system identification and control. Neural Network Toolbox supports feedforward networks, radial basis networks, dynamic networks, self-organizing maps, and other proven network paradigms.
  30. 30. Key Features: neural network design, training, and simulation; pattern recognition, clustering, and data-fitting tools; supervised networks including feedforward, radial basis, LVQ, time delay, nonlinear autoregressive (NARX), and layer-recurrent; unsupervised networks including self-organizing maps and competitive layers; preprocessing and postprocessing for improving the efficiency of network training and assessing network performance; modular network representation for managing and visualizing networks of arbitrary size; routines for improving generalization to prevent overfitting; Simulink blocks for building and evaluating neural networks, and advanced blocks for control systems applications.
  31. 31. Working with Neural Network Toolbox. Like its counterpart in the biological nervous system, a neural network can learn and therefore can be trained to find solutions, recognize patterns, classify data, and forecast future events. The behavior of a neural network is defined by the way its individual computing elements are connected and by the strength of those connections, or weights. The weights are automatically adjusted by training the network according to a specified learning rule until it performs the desired task correctly. Neural Network Toolbox includes command-line functions and graphical tools for creating, training, and simulating neural networks. Graphical tools make it easy to develop neural networks for tasks such as data fitting (including time-series data), pattern recognition, and clustering. After creating your networks in these tools, you can automatically generate MATLAB code to capture your work and automate tasks.
  32. 32. Network Architectures. Neural Network Toolbox supports a variety of supervised and unsupervised network architectures. With the toolbox's modular approach to building networks, you can develop custom architectures for your specific problem. You can view the network architecture, including all inputs, layers, outputs, and interconnections.
  33. 33. Supervised Networks. Supervised neural networks are trained to produce desired outputs in response to sample inputs, making them particularly well-suited to modeling and controlling dynamic systems, classifying noisy data, and predicting future events. Neural Network Toolbox supports four types of supervised networks. Feedforward networks have one-way connections from input to output layers; they are most commonly used for prediction, pattern recognition, and nonlinear function fitting. Supported feedforward networks include feedforward backpropagation, cascade-forward backpropagation, feedforward input-delay backpropagation, linear, and perceptron networks. Radial basis networks provide an alternative, fast method for designing nonlinear feedforward networks; supported variations include generalized regression and probabilistic neural networks. Dynamic networks use memory and recurrent feedback connections to recognize spatial and temporal patterns in data; they are commonly used for time-series prediction, nonlinear dynamic system modeling, and control systems applications. Prebuilt dynamic networks in the toolbox include focused and distributed time-delay, nonlinear autoregressive (NARX), layer-recurrent, Elman, and Hopfield networks; the toolbox also supports dynamic training of custom networks with arbitrary connections. Learning vector quantization (LVQ) is a powerful method for classifying patterns that are not linearly separable; LVQ lets you specify class boundaries and the granularity of classification.
  34. 34. Unsupervised Networks. Unsupervised neural networks are trained by letting the network continually adjust itself to new inputs. They find relationships within data and can automatically define classification schemes. Neural Network Toolbox supports two types of self-organizing, unsupervised networks. Competitive layers recognize and group similar input vectors, enabling them to automatically sort inputs into categories; they are commonly used for classification and pattern recognition. Self-organizing maps learn to classify input vectors according to similarity. Like competitive layers, they are used for classification and pattern recognition tasks; however, they differ from competitive layers in that they preserve the topology of the input vectors, assigning nearby inputs to nearby categories.
  35. 35. Training and Learning Functions. Training and learning functions are mathematical procedures used to automatically adjust the network's weights and biases. The training function dictates a global algorithm that affects all the weights and biases of a given network, while the learning function can be applied to individual weights and biases within a network. Neural Network Toolbox supports a variety of training algorithms, including several gradient descent methods, conjugate gradient methods, the Levenberg-Marquardt algorithm (LM), and the resilient backpropagation algorithm (Rprop). The toolbox's modular framework lets you quickly develop custom training algorithms that can be integrated with built-in algorithms. While training your neural network, you can use error weights to define the relative importance of desired outputs, which can be prioritized in terms of sample, timestep (for time-series problems), output element, or any combination of these. You can access training algorithms from the command line or via a graphical tool that shows a diagram of the network being trained and provides network performance plots and status information to help you monitor the training process.
  36. 36. Improving Generalization. Improving the network's ability to generalize helps prevent overfitting, a common problem in neural network design. Overfitting occurs when a network has memorized the training set but has not learned to generalize to new inputs; it produces a relatively small error on the training set but a much larger error when new data is presented to the network. Neural Network Toolbox provides two solutions to improve generalization. Regularization modifies the network's performance function (the measure of error that the training process minimizes); by including the sizes of the weights and biases, regularization produces a network that performs well on the training data and exhibits smoother behavior when presented with new data. Early stopping uses two different data sets: the training set, to update the weights and biases, and the validation set, to stop training when the network begins to overfit the data.
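The early-stopping idea can be sketched as follows; the validation-error curve here is invented for the example:

```python
# Early stopping: keep training while the validation error improves,
# and stop once it has risen for `patience` consecutive epochs,
# which signals that the network is beginning to overfit.

def early_stopping(val_errors, patience=2):
    """Return the epoch with the lowest validation error seen before
    training is cut off."""
    best, best_epoch, worse = float("inf"), 0, 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, best_epoch, worse = err, epoch, 0
        else:
            worse += 1
            if worse >= patience:  # validation error rising: stop
                break
    return best_epoch

# Validation error falls, then rises as the network begins to overfit.
errors = [0.9, 0.6, 0.4, 0.35, 0.38, 0.44, 0.52]
print(early_stopping(errors))  # → 3 (the epoch at the minimum)
```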
  37. 37. Some different applications. Character recognition: the idea of character recognition has become very important as handheld devices like the Palm Pilot have become increasingly popular; neural networks can be used to recognize handwritten characters. Image compression: neural networks can receive and process vast amounts of information at once, making them useful in image compression. With the Internet explosion and more sites using more images, using neural networks for image compression is worth a look.
  38. 38. Stock market prediction: the day-to-day business of the stock market is extremely complicated, and many factors weigh in whether a given stock will go up or down on any given day. Since neural networks can examine a lot of information quickly and sort it all out, they can be used to predict stock prices. Traveling salesman problem: interestingly enough, neural networks can solve the traveling salesman problem, but only to a certain degree of approximation. Medicine, electronic nose, security, and loan applications: these are applications in their proof-of-concept stage, including neural networks that decide whether or not to grant a loan, something that has already been done more successfully than by many humans. Miscellaneous applications: some very interesting (albeit at times a little absurd) applications of neural networks.
  39. 39. Application principles. The solution of a problem must be simple; complicated solutions waste time and resources. If a problem can be solved with a small, easily calculated look-up table, that is preferable to a complex neural network with many layers that learns by back-propagation.
  40. 40. Application principles. Speed is crucial for computer game applications. If possible, on-line neural network solutions should be avoided, because they consume a great deal of time. Preferably, neural networks should be applied in an off-line fashion, so that the learning phase doesn't happen during game-playing time.
  41. 41. Application principles. On-line neural network solutions should be very simple. Many-layer neural networks should be avoided if possible, as should complex learning algorithms. If possible, a priori knowledge should be used to set the initial parameters so that very short training is needed for optimal performance.
  42. 42. Application principles. All the available data should be collected about the problem; having redundant data is usually a smaller problem than not having the necessary data. The data should be partitioned into training, validation and testing data.
  43. 43. Application principles. The neural network solution of a problem should be selected from a large enough pool of potential solutions. Because of the nature of neural networks, a single solution that is built is unlikely to be the optimal one; if a pool of potential solutions is generated and trained, it is more likely that one close to the optimum will be found.
  44. 44. Problem. Problem analysis: variables; modularisation into sub-problems; objectives; data collection.
  45. 45. Neural network solution. Data collection and organization into training, validation and testing data sets. Example: training set ~75% of the data; validation set ~10% of the data; testing set ~5% of the data.
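A minimal sketch of such a partition, using the slide's example proportions (note the slide's figures sum to 90%, so the remainder is simply left unassigned here):

```python
# Partition a data set into training, validation and testing sets with
# the example proportions from the slide (~75% / ~10% / ~5%).
# Shuffling first avoids ordering bias.
import random

def partition(data, train=0.75, val=0.10, test=0.05, seed=0):
    items = list(data)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val, n_test = int(train * n), int(val * n), int(test * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:n_train + n_val + n_test])

train_set, val_set, test_set = partition(range(1000))
print(len(train_set), len(val_set), len(test_set))  # → 750 100 50
```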
  46. 46. Neural network solution. Neural network solution selection: each candidate solution is tested with the validation data, and the best-performing network is selected. [Figure: performance plots of candidate networks, e.g. Network 4, Network 7 and Network 11.]
  47. 47. Neural network solution. Choosing a solution representation: the solution can be represented directly as a neural network by specifying the parameters of the neurons; alternatively, the solution can be represented as a multi-dimensional look-up table. The representation should allow fast use of the solution within the application.
  48. 48. Summary
• Neural network solutions should be kept as simple as possible.
• For the sake of gaming speed, neural networks should preferably be applied off-line.
• A large data set should be collected and divided into training, validation, and testing data.
• Neural networks fit as solutions of complex problems.
• A pool of candidate solutions should be generated, and the best candidate solution should be selected using the validation data.
• The solution should be represented so as to allow fast application.