Neural Networks


Basic concepts of neural networks

Neural Networks

  2. 2. KNOWLEDGE-BASED INFORMATION SYSTEMS <ul><li>A knowledge-based system is a program that acquires, represents and uses knowledge for a specific purpose. </li></ul><ul><li>It consists of a knowledge base and an inference engine. </li></ul><ul><li>Knowledge is stored in the knowledge base, while control strategies reside in the separate inference engine. </li></ul>
  3. 3. KNOWLEDGE-BASED INFORMATION SYSTEM (figure: knowledge base and inference engine)
  4. 4. WHAT ARE NEURAL NETWORKS ? <ul><li>Artificial Neural Network (ANN) :- an information processing paradigm inspired by the human nervous system. </li></ul><ul><li>Composed of a large number of highly interconnected processing elements (neurons). </li></ul><ul><li>ANNs, like people, learn by example. </li></ul><ul><li>An ANN is configured for a specific application, such as pattern recognition or data classification, through learning. </li></ul><ul><li>Learning in biological systems involves adjusting the synaptic connections between neurons. This is true of ANNs as well. </li></ul>
  5. 5. Why use neural networks ? <ul><li>Knowledge acquisition under noise and uncertainty. </li></ul><ul><li>Flexible knowledge representation. </li></ul><ul><li>Efficient knowledge processing. </li></ul><ul><li>Fault tolerance. </li></ul><ul><li>Learning capability. </li></ul>
  6. 6. Neural networks versus conventional computers <ul><li> ANN </li></ul><ul><li>Learning approach </li></ul><ul><li>Not explicitly programmed for specific tasks </li></ul><ul><li>Used in decision making </li></ul><ul><li>Behavior can be unpredictable </li></ul><ul><li>COMPUTERS </li></ul><ul><li>Algorithmic approach </li></ul><ul><li>Must be explicitly programmed </li></ul><ul><li>Work on a predefined set of instructions </li></ul><ul><li>Operations are predictable </li></ul>
  7. 7. How does the human brain learn ? <ul><li>The brain is made up of a large number of neurons. </li></ul><ul><li>Each neuron connects to thousands of other neurons and communicates via electrochemical signals. </li></ul><ul><li>Incoming signals are received via SYNAPSES, located at the ends of DENDRITES. </li></ul><ul><li>A neuron sums its inputs, and if a threshold value is reached it generates a voltage and outputs a signal along the AXON. </li></ul>
  9. 9. SYNAPSE
  10. 10. -:THE ARTIFICIAL NEURON:- <ul><li>An electronic model of the biological neuron. </li></ul><ul><li>Has many inputs and one output. </li></ul><ul><li>Has 2 modes - training mode & using mode. </li></ul><ul><li>Training mode - the neuron is trained to fire (or not) for particular input patterns. </li></ul>
  11. 11. -:THE ARTIFICIAL NEURON :- <ul><li>Using mode - when a taught input pattern is detected at the input, its associated output becomes the current output. </li></ul><ul><li>If the input pattern does not belong to the taught list, the firing rule is used. </li></ul>
  13. 13. <((FIRING RULE))> <ul><li>The firing rule determines whether a neuron should fire for a given input pattern. </li></ul><ul><li>It applies to all input patterns, seen or unseen. </li></ul><ul><li>The rule states :- </li></ul><ul><li>Take the collection of training patterns for a node: some cause it to fire (the 1-taught set of patterns) and others prevent it from firing (the 0-taught set). </li></ul>
  14. 14. <((FIRING RULE))> <ul><li>Then, patterns not in the collection cause the node to fire if they are more similar to the patterns in the 1-taught set than to the patterns in the 0-taught set. If there is a tie, the pattern remains in an undefined state. </li></ul>
  15. 15. <((FIRING RULE))> <ul><li>Example : </li></ul><ul><li>A 3-input neuron is taught to output 1 when the input (X1, X2 and X3) is 111 or 101, and to output 0 when the input is 000 or 001. </li></ul><ul><li>Now, if we present 010, the neuron will not fire; for 011 the output is undefined. </li></ul>
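The firing rule in the example above can be sketched in a few lines of code. This is a minimal illustration, assuming Hamming distance as the measure of similarity; the function names `hamming` and `fire` are our own, not part of any standard library:

```python
def hamming(a, b):
    """Number of bit positions where patterns a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def fire(pattern, one_taught, zero_taught):
    """Firing rule: fire if the pattern is closer (by Hamming distance)
    to the 1-taught set than to the 0-taught set.
    Returns 1 (fire), 0 (do not fire), or None (tie: undefined)."""
    d1 = min(hamming(pattern, p) for p in one_taught)
    d0 = min(hamming(pattern, p) for p in zero_taught)
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return None  # tie between the two sets: output undefined

one_taught = [(1, 1, 1), (1, 0, 1)]   # taught to output 1
zero_taught = [(0, 0, 0), (0, 0, 1)]  # taught to output 0
print(fire((0, 1, 0), one_taught, zero_taught))  # 0: closer to the 0-taught set
print(fire((0, 1, 1), one_taught, zero_taught))  # None: tie, undefined
```

Running this reproduces the slide's result: 010 does not fire, and 011 is left undefined.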
  16. 16. ~:“PATTERN RECOGNITION”:~ <ul><li>Pattern recognition can be implemented using neural networks. </li></ul><ul><li>During training, the network is trained to associate outputs with input patterns. </li></ul><ul><li>The network then identifies an input pattern and tries to output the associated output pattern. </li></ul>
  17. 17. ~:“PATTERN RECOGNITION”:~ <ul><li>The power of neural networks comes to life when a pattern that has no associated output is given as input. </li></ul><ul><li>In this case, the network gives the output corresponding to the taught input pattern that is least different from the given pattern. </li></ul><ul><li>Examples :- recognition of alphabets, symbols, etc. </li></ul>
  18. 18. <ul><li>Here the inputs are weighted inputs. </li></ul><ul><li>The effect of an input on decision making is directly proportional to the weight of that input. </li></ul><ul><li>A weight is a floating-point number and can be positive or negative. </li></ul><ul><li>As each input enters the nucleus it is multiplied by its weight. </li></ul>McCulloch And Pitts Model Of Neuron
  19. 19. <ul><li>The neuron then sums these weighted input values, which gives us the activation. </li></ul><ul><li>If the activation is greater than a threshold value, the neuron outputs a signal; otherwise the output is zero. </li></ul><ul><li>This is typically called a step function. </li></ul>McCulloch And Pitts Model Of Neuron
  21. 21. <ul><li>In mathematical terms, the neuron fires if and only if </li></ul><ul><li>X1W1 + X2W2 + X3W3 + ... > T </li></ul><ul><li>The MCP neuron can adapt to a particular situation by changing its weights and/or threshold. </li></ul><ul><li>Various algorithms exist that cause the neuron to 'adapt'; the most widely used are the Delta rule and error back-propagation. </li></ul>McCulloch And Pitts Model Of Neuron
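The inequality X1W1 + X2W2 + ... > T translates directly into code. A minimal sketch of the MCP step-function neuron; the function name `mcp_neuron` and the example weights and threshold (chosen here so the unit behaves like a logical AND) are illustrative assumptions:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fires (outputs 1) if and only if the
    weighted sum of inputs exceeds the threshold (a step function)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# With weights [1, 1] and threshold 1.5, the unit fires only when
# both inputs are 1 -- i.e. it computes logical AND.
print(mcp_neuron([1, 1], [1.0, 1.0], 1.5))  # 1
print(mcp_neuron([1, 0], [1.0, 1.0], 1.5))  # 0
```

Changing the weights or threshold changes the function computed, which is exactly the "adaptation" the slide describes.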
  22. 22. Architecture Of Neural Networks <ul><li>FEED-FORWARD NETWORKS :- </li></ul><ul><li>allow signals to travel one way only: from input to output. </li></ul><ul><li>no feedback (loops), i.e. the output of any layer does not affect that same layer. </li></ul><ul><li>Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. </li></ul><ul><li>extensively used in pattern recognition. </li></ul>
  24. 24. Architecture Of Neural Networks <ul><li>FEEDBACK NETWORKS :- </li></ul><ul><li>can have signals traveling in both directions by introducing loops into the network. </li></ul><ul><li>Feedback networks are dynamic; their 'state' changes continuously until they reach an equilibrium point. </li></ul><ul><li>They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. </li></ul><ul><li>also referred to as interactive or recurrent networks. </li></ul>
  26. 26. Architecture Of Neural Networks <ul><li>Network layers :- </li></ul><ul><li>An artificial neural network typically consists of three groups, or layers, of units: </li></ul><ul><li>Input Layer - the activity of the input units represents the raw information that is fed into the network. </li></ul><ul><li>Hidden Layer - the activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and hidden units. </li></ul><ul><li>Output Layer - the behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units. </li></ul>
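The three layers described above can be sketched as a single forward pass: inputs feed the hidden units, whose activities feed the output units. This is a minimal illustration with sigmoid activations; the function name `forward` and all the example weights are assumptions, not values from the slides:

```python
import math

def forward(x, w_hidden, w_output):
    """One forward pass through a three-layer network
    (input -> hidden -> output) with sigmoid activations.
    w_hidden and w_output are lists of per-unit weight vectors."""
    sigmoid = lambda a: 1.0 / (1.0 + math.exp(-a))
    # Hidden-unit activity depends on input activities and input->hidden weights.
    hidden = [sigmoid(sum(xi * wi for xi, wi in zip(x, w))) for w in w_hidden]
    # Output-unit activity depends on hidden activities and hidden->output weights.
    return [sigmoid(sum(hi * wi for hi, wi in zip(hidden, w))) for w in w_output]

y = forward([1.0, 0.0],                    # input layer: 2 units
            [[0.5, -0.5], [0.3, 0.8]],     # hidden layer: 2 units
            [[1.0, -1.0]])                 # output layer: 1 unit
print(y)
```

Signals travel one way only, so this is also a concrete example of the feed-forward architecture from the earlier slide.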
  27. 27. Layers Of Neural Network
  28. 28. Architecture Of Neural Networks <ul><li>Perceptrons :- </li></ul><ul><li>An MCP model with some additional, fixed, pre-processing. </li></ul><ul><li>Units A1, A2, Aj , Ap are called association units; they extract specific, localized features from the input images. </li></ul><ul><li>They mimic the basic idea behind the human visual system. </li></ul>
  29. 29. <ul><li>Most common methods used are :- </li></ul><ul><li>Supervised Learning </li></ul><ul><li>Unsupervised Learning </li></ul>Learning Methods
  30. 30. <ul><li>Incorporates an external teacher. </li></ul><ul><li>Each output unit is told what its desired response to input signals should be. </li></ul><ul><li>During the learning process, global information is required. </li></ul><ul><li>Supervised learning includes error-correction and reinforcement learning. </li></ul>Supervised Learning
  31. 31. <ul><li>There is the problem of error convergence, i.e. minimizing the error between desired and computed output values. </li></ul><ul><li>The aim is to determine a set of weights which minimizes the error. </li></ul><ul><li>A well-known method is least mean squares (LMS) convergence. </li></ul>Supervised Learning
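The idea of driving the error between desired and computed values toward zero can be sketched with a single-unit delta-rule (LMS) update. The function name `lms_update`, the learning rate and the training data below are illustrative assumptions:

```python
def lms_update(weights, x, target, lr=0.1):
    """One LMS (delta rule) step: move each weight in proportion to
    the error between the target and the computed linear output."""
    y = sum(w * xi for w, xi in zip(weights, x))  # computed output
    error = target - y                            # teacher-provided error
    return [w + lr * error * xi for w, xi in zip(weights, x)]

# Train a single linear unit to output 1.0 for the input [1, 1].
w = [0.0, 0.0]
for _ in range(50):
    w = lms_update(w, [1.0, 1.0], 1.0)
print(round(sum(w), 3))  # the weighted sum converges toward the target 1.0
```

Each step shrinks the remaining error by a constant factor, which is the "error convergence" the slide refers to.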
  32. 32. <ul><li>Uses no external teacher; based only upon local information. </li></ul><ul><li>Also referred to as self-organizing, in the sense that it self-organizes the data presented to the network and detects their emergent collective properties. </li></ul><ul><li>Methods of unsupervised learning include Hebbian and competitive learning. </li></ul>Unsupervised Learning
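Hebbian learning illustrates the "local information only" point: a weight grows when its input and the unit's output are active at the same time, with no teacher involved. A minimal sketch; the function name `hebbian_update` and the learning rate are assumptions:

```python
def hebbian_update(weights, x, y, lr=0.1):
    """Hebbian rule: strengthen each weight in proportion to the
    product of its own input and the unit's output -- purely local
    information, no external teacher."""
    return [w + lr * xi * y for w, xi in zip(weights, x)]

# One update step: the unit fired (y = 1) while only the first input was active.
w = hebbian_update([0.0, 0.0], [1.0, 0.0], 1.0)
print(w)  # the weight on the active input grows; the inactive one is unchanged
```

Contrast this with the LMS rule: there is no target value anywhere in the update.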
  33. 33. <ul><li>Learning consists of two phases - a training phase and an operation phase. </li></ul><ul><li>We say that a neural network learns off-line if the learning phase and the operation phase are distinct. </li></ul><ul><li>A neural network learns on-line if it learns and operates at the same time. </li></ul><ul><li>Usually, supervised learning is performed off-line, whereas unsupervised learning is performed on-line. </li></ul>Learning Contd....
  34. 34. <ul><li>Character recognition </li></ul><ul><li>Since neural networks are best at identifying patterns or trends in data, they are well suited to prediction and forecasting needs, including: </li></ul><ul><li>sales forecasting </li></ul><ul><li>industrial process control </li></ul><ul><li>data validation </li></ul><ul><li>risk management </li></ul>Applications
  35. 35. <ul><li>Neural networks are also used for </li></ul><ul><li>Genetic pattern recognition </li></ul><ul><li>Drug discovery </li></ul><ul><li>Flow cytometric analysis of leukemia </li></ul><ul><li>Also used in the fields of robotics, facial animation, lip reading, event prediction and many more. </li></ul>Applications
  36. 36. Applications (figure: learning mode versus prediction mode)
  37. 37. Applications (figure: lip reading from the shape of the lips)
  38. 38. <ul><li>The computing world has a lot to gain from neural networks. </li></ul><ul><li>Their ability to learn by example makes them very flexible and powerful. </li></ul><ul><li>Further, there is no need to devise an algorithm to perform a specific task, i.e. no need to understand the internal mechanisms of that task. </li></ul><ul><li>They are also very well suited to real-time systems because of their fast response and computation times, which are due to their parallel architecture. </li></ul>Conclusion
  39. 39. <ul><li>Neural networks also contribute to other areas of research such as neurology and psychology. </li></ul><ul><li>Finally, I would like to state that even though neural networks have a huge potential we will only get the best of them when they are integrated with computing, AI and related subjects. </li></ul>Conclusion
  40. 40. <ul><li>NEURAL NETWORKS IN COMPUTER INTELLIGENCE, by LiMin Fu </li></ul>REFERENCES
  41. 41. Thank You !