ANN by Rutul Mehta


Published in: Technology
    1. ARTIFICIAL NEURAL NETWORKS. Guided by: Vishwesh Sir. By: Mehta Rutul R.
    3. What is an Artificial Neural Network?
       An Artificial Neural Network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. It is composed of a large number of highly interconnected processing elements (neurons) working to solve specific problems. It is an attempt to simulate, within specialized hardware or sophisticated software, the multiple layers of simple processing elements called neurons. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.
    4. Research History
       • McCulloch and Pitts (1943) are generally recognized as the designers of the first neural network.
       • They combined many simple processing units together, which could lead to an overall increase in computational power.
       • They suggested ideas such as: a neuron has a threshold level, and once that level is reached the neuron fires.
       • The McCulloch and Pitts network had a fixed set of weights.
       • Hebb (1949) developed the first learning rule: if two neurons are active at the same time, then the strength of the connection between them should be increased.
       • Minsky and Papert (1969) showed that the perceptron could not learn functions which are not linearly separable. The researchers Parker and LeCun discovered a learning algorithm for multi-layer networks, called back propagation, that could solve problems that were not linearly separable.
    5. Biological Neurons
       1. The soma, or cell body, is a large, round central body in which almost all the logical functions of the neuron are realized.
       2. The axon (output) is a nerve fibre attached to the soma which can serve as a final output channel of the neuron. An axon is usually highly branched.
       3. The dendrites (inputs) represent a highly branching tree of fibres. These long, irregularly shaped nerve fibres (processes) are attached to the soma.
       4. Synapses are specialized contacts on a neuron which are the termination points for the axons from other neurons.
       (Figure: the schematic model of a biological neuron)
    6. Why neural network?
       f(x1, ..., xn) is an unknown multi-factor decision rule.
       The learning process uses a representative learning set; a set of weighting vectors (w0, w1, ..., wn) is the result of the learning process.
       f̂(x1, ..., xn) = P(w0 + w1*x1 + ... + wn*xn) is a partially defined function, which is an approximation of the decision rule function.
    7. A Neuron
       f(x1, ..., xn) = φ(w0 + w1*x1 + ... + wn*xn), where:
       f is the function to be learned;
       x1, ..., xn are the inputs;
       φ is the activation function;
       z = w0 + w1*x1 + ... + wn*xn is the weighted sum.
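A minimal sketch of the neuron formula above, assuming a sigmoid activation for φ (the slide leaves the activation generic); the weights and inputs are illustrative:

```python
import math

# Sketch of f(x1,...,xn) = phi(w0 + w1*x1 + ... + wn*xn).
# The sigmoid is an assumed choice of activation function phi.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, w0, phi=sigmoid):
    """Apply phi to the weighted sum z = w0 + sum(wi * xi)."""
    z = w0 + sum(w * x for w, x in zip(weights, inputs))
    return phi(z)

# Example: the weighted sum here is 0.5*1.0 - 0.25*2.0 + 0.0 = 0,
# and sigmoid(0) = 0.5.
print(neuron([1.0, 2.0], [0.5, -0.25], 0.0))  # 0.5
```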
    8. A Neuron
       • A neuron's functionality is determined by the nature of its activation function: its main properties, its plasticity and flexibility, and its ability to approximate a function to be learned.
    9. When we need a network
       • The functionality of a single neuron is limited. For example, the threshold neuron cannot learn non-linearly separable functions.
       • To learn those functions that cannot be learned by a single neuron, a neural network should be used.
    10. The simplest network
        (Figure: the simplest network, consisting of Neuron 1, Neuron 2 and Neuron 3)
    11. Similarities: Artificial Neuron & Brain Neuron
        In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. While in an artificial neuron...
    12. Similarities: Artificial Neuron & Brain Neuron
        We construct artificial neural networks by first trying to deduce the essential features of neurons and their interconnections. We then typically program a computer to simulate these features. However, because our knowledge of neurons is incomplete and our computing power is limited, our models are necessarily gross idealizations of real networks of neurons.
    13. Firing Rule
        The firing rule is an important concept in neural networks and accounts for their high flexibility. A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained.
        A simple firing rule can be implemented using the Hamming distance technique. The rule goes as follows:
        • Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set).
        • Then the patterns not in the collection cause the node to fire if, on comparison, they have more input elements in common with the nearest pattern in the 1-taught set than with the nearest pattern in the 0-taught set. If there is a tie, the pattern remains in the undefined state.
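The Hamming-distance firing rule above can be sketched as follows; the 1-taught and 0-taught pattern sets here are hypothetical examples:

```python
# Sketch of the Hamming-distance firing rule. "More input elements in
# common" with a pattern means a smaller Hamming distance to it.

def hamming(a, b):
    """Number of positions in which two 0/1 patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def fires(pattern, one_taught, zero_taught):
    """Return 1, 0, or None (undefined state) per the firing rule:
    compare the distance to the nearest pattern in each taught set."""
    d1 = min(hamming(pattern, p) for p in one_taught)
    d0 = min(hamming(pattern, p) for p in zero_taught)
    if d1 < d0:
        return 1      # closer to a pattern that causes firing
    if d0 < d1:
        return 0      # closer to a pattern that prevents firing
    return None       # tie: the pattern remains undefined

# Hypothetical taught sets for a 3-input node.
one_taught = [(1, 1, 1), (1, 0, 1)]
zero_taught = [(0, 0, 0)]
print(fires((1, 1, 0), one_taught, zero_taught))  # 1
print(fires((0, 0, 1), one_taught, zero_taught))  # None (tie)
```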
    14. Simple Neuron
        An artificial neuron is a device with many inputs and one output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output.
    15. More Complicated Neuron
        A more sophisticated neuron (figure) is the McCulloch and Pitts model (MCP). The inputs are weighted; the effect that each input has on decision making depends on the weight of that particular input. These weighted inputs are then added together, and if they exceed a pre-set threshold value, the neuron fires. In any other case the neuron does not fire. In mathematical terms, the neuron fires if and only if:
        X1*W1 + X2*W2 + X3*W3 + ... > T (threshold value)
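The MCP firing condition above can be sketched directly; the weights and threshold used in the example are illustrative values:

```python
# Sketch of the McCulloch-Pitts rule: the neuron fires if and only if
# X1*W1 + X2*W2 + ... > T (the pre-set threshold value).

def mcp_fires(inputs, weights, threshold):
    """Fire (True) iff the weighted sum strictly exceeds the threshold."""
    return sum(x * w for x, w in zip(inputs, weights)) > threshold

# Illustrative 3-input neuron with equal weights and threshold 0.8:
print(mcp_fires([1, 1, 0], [0.5, 0.5, 0.5], 0.8))  # True: 1.0 > 0.8
print(mcp_fires([1, 0, 0], [0.5, 0.5, 0.5], 0.8))  # False: 0.5 <= 0.8
```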
    16. Weights
        The weight of an input is a number which, when multiplied with the input, gives the weighted input.
    17. Architecture
        There are two types of Neural Network architecture:
        • Feed-forward Networks
        • Feed-back Networks
    18. Feed-forward Networks
        Feed-forward ANNs (figure 1) allow signals to travel one way only: from input to output. There is no feedback (loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs.
    19. Feed-back Networks
        Feedback networks (figure) can have signals travelling in both directions by introducing loops in the network. Feedback networks are very powerful and can get extremely complicated. Their state changes until they reach an equilibrium point, and they remain at the equilibrium point until the input changes and a new equilibrium needs to be found.
    20. Network Layers
        The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units.
        The activity of the input units represents the raw information that is fed into the network. The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units. The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
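The three-layer arrangement above can be sketched as a forward pass. The sigmoid activation and all weight values here are illustrative assumptions, not taken from the slides:

```python
import math

# Sketch of a three-layer feed-forward pass: input units feed hidden
# units, and hidden units feed output units, exactly as described.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: each unit applies the sigmoid to its
    weighted sum of the previous layer's activities."""
    return [sigmoid(b + sum(w * x for w, x in zip(row, inputs)))
            for row, b in zip(weights, biases)]

def forward(x, w_hidden, b_hidden, w_out, b_out):
    hidden = layer(x, w_hidden, b_hidden)  # hidden activity from inputs
    return layer(hidden, w_out, b_out)     # output activity from hidden

# Illustrative weights: 2 inputs, 2 hidden units, 1 output unit.
x = [1.0, 0.0]
w_hidden = [[0.5, -0.5], [1.0, 1.0]]
b_hidden = [0.0, 0.0]
w_out = [[1.0, -1.0]]
b_out = [0.0]
print(forward(x, w_hidden, b_hidden, w_out, b_out))
```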
    21. Threshold Neuron (Perceptrons)
        The most influential work on neural nets in the 60s went under the heading of "perceptrons", a term coined by Frank Rosenblatt. The perceptron (figure 4.4) turns out to be an MCP model (a neuron with weighted inputs) with some additional, fixed, pre-processing. Units labelled A1, A2, Aj, Ap are called association units, and their task is to extract specific, localized features from the input images.
    22. Perceptrons
        Perceptrons mimic the basic idea behind the mammalian visual system. They were mainly used in pattern recognition, even though their capabilities extended a lot further. In 1969, Minsky and Papert wrote a book in which they described the limitations of single-layer perceptrons. The book was very well written and showed mathematically that single-layer perceptrons could not do some basic pattern recognition operations, like determining the parity of a shape or determining whether a shape is connected or not. What they did not realize, until the 80s, is that given the appropriate training, multilevel perceptrons can do these operations.
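Rosenblatt's perceptron learning rule is not spelled out on these slides, but a minimal sketch of it, learning a linearly separable function (logical AND), helps illustrate the limitation discussed: the same single-layer rule cannot converge on a non-separable function such as XOR. The learning rate and epoch count below are illustrative choices:

```python
# Sketch of the classic perceptron learning rule: after each example,
# nudge the weights by lr * (target - output) * input.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with 0/1 targets.
    Returns the learned weights and bias."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the rule converges.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
for x, t in and_samples:
    y = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
    print(x, y)  # matches AND for every input after training
```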
    23. ADVANTAGES OF ANN
        • A neural network can perform tasks that a linear program cannot.
        • When an element of the neural network fails, it can continue without any problem, thanks to its parallel nature.
        • A neural network learns and does not need to be reprogrammed.
        • It can be implemented in any application without any problem.
    24. DISADVANTAGES OF ANN
        • The neural network needs training to operate.
        • The architecture of a neural network is different from the architecture of microprocessors, and therefore needs to be emulated.
        • Large neural networks require high processing time.
    25. Applications of Artificial Neural Networks
        (Figure: application areas of artificial intellect with neural networks)
        • Intelligent Control
        • Advanced Robotics
        • Technical Diagnostics
        • Machine Vision
        • Intelligent Data Analysis and Signal Processing
        • Image & Pattern Recognition
        • Intelligent Expert Systems
        • Intelligent Security Systems
        • Intelligent Medicine Devices
    26. THANK YOU