
Character Recognition using Artificial Neural Networks



Mini Project, Computer Science Department, College of Engineering Chengannur 2003-2007, Affiliated to Cochin University of Science and Technology (CUSAT), Kerala, India

Published in: Education

  1. Character Recognition using Artificial Neural Networks

  2. AIM
  • To create an ADALINE neural network
  • Specific application: recognize trained characters in a given matrix grid
  • Develop object-oriented programming skills
  3. BIOLOGICAL NEURAL NETWORKS (figure: a biological neuron, showing dendrites, synapse, and axon)
  4. ARTIFICIAL NEURAL NETWORKS
  • An information-processing system that has certain performance characteristics in common with biological neural networks.
  • Information processing occurs at many simple elements called neurons.
  • Each connection link has an associated weight which, in an ANN, multiplies the signal transmitted.
  • Each neuron applies an activation function (usually nonlinear) to its net input (the sum of its weighted input signals) to determine its output signal.
  5. A Multi-Layer ANN (figure)
  6. ARTIFICIAL NEURON (figure: inputs x1, …, xN with weights w1, …, wN feeding a summation unit and activation function f). The neuron computes the net input a = Σ(i=1..N) wi·xi and applies the activation function to give the output y = f(a).
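The computation on this slide can be sketched as follows (a minimal C++ sketch; the function names are illustrative, not from the Neurotron source). For an ADALINE, the activation used when classifying is the sign function.

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Net input a = sum over i of w_i * x_i (the weighted sum on the slide).
double net_input(const std::vector<double>& x, const std::vector<double>& w) {
    return std::inner_product(x.begin(), x.end(), w.begin(), 0.0);
}

// Bipolar sign activation: maps the net input to +1 or -1.
double sign_activation(double a) { return a >= 0.0 ? 1.0 : -1.0; }
```

For example, with inputs (+1, -1) and weights (0.5, 0.25), the net input is 0.5 - 0.25 = 0.25 and the bipolar output is +1.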
  7. SOME COMMON ANN MODELS
  • McCulloch-Pitts model
  • Perceptron
  • ADALINE (Adaptive Linear Neuron)
  • MADALINE (Many ADALINEs)
  8. THE ADALINE The ADALINE (Adaptive Linear Neuron) [Widrow & Hoff, 1960] typically uses bipolar (+1 or -1) activations for its input signals and its target output. The weights on the connections from the input units to the ADALINE are adjusted during training. The ADALINE also has a bias, which acts like an adjustable weight on a connection from a unit whose activation is always 1. In general, an ADALINE can be trained using the delta rule, also known as the Least Mean Squares (LMS) or Widrow-Hoff rule.
  9. THE ADALINE – Architecture (figure: architecture of an ADALINE)
  10. ADALINE – Algorithm
  Step 0: Initialize weights; set the learning rate α.
  Step 1: While the stopping condition is false, do Steps 2-6.
  Step 2: For each bipolar training pair s:t, do Steps 3-5.
  Step 3: Set activations of input units: xi = si, i = 1, …, n.
  Step 4: Compute the net input to the output unit: y_in = b + Σi xi·wi.
  Step 5: Update bias and weights, i = 1, …, n: b(new) = b(old) + α(t - y_in); wi(new) = wi(old) + α(t - y_in)xi.
  Step 6: Test for the stopping condition: if the largest weight change that occurred in Step 2 is smaller than a specified tolerance, stop; otherwise continue.
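The steps above can be sketched in C++ as follows. This is a hedged reconstruction, not the project's actual Turbo C++ code: the `Adaline` struct and function names are mine, and modern C++ is used for clarity. The update in the inner loop is the delta rule from Step 5, and the stopping test matches Step 6.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Adaline {
    std::vector<double> w;  // one weight per input unit
    double b = 0.0;         // bias: weight from a unit whose activation is always 1
};

// Step 4: y_in = b + sum over i of x_i * w_i.
double net_input(const Adaline& n, const std::vector<double>& x) {
    double a = n.b;
    for (std::size_t i = 0; i < x.size(); ++i) a += n.w[i] * x[i];
    return a;
}

// Trains on bipolar pairs (x, t) with the delta rule; returns epochs run.
int train(Adaline& n, const std::vector<std::vector<double>>& xs,
          const std::vector<double>& ts, double alpha, double tol,
          int max_epochs) {
    for (int epoch = 1; epoch <= max_epochs; ++epoch) {
        double max_change = 0.0;
        for (std::size_t p = 0; p < xs.size(); ++p) {
            double err = ts[p] - net_input(n, xs[p]);   // t - y_in
            for (std::size_t i = 0; i < n.w.size(); ++i) {
                double dw = alpha * err * xs[p][i];     // Step 5: delta rule
                n.w[i] += dw;
                max_change = std::max(max_change, std::fabs(dw));
            }
            n.b += alpha * err;                         // Step 5: bias update
        }
        if (max_change < tol) return epoch;             // Step 6: stopping test
    }
    return max_epochs;
}
```

As a toy check, training on the four bipolar input pairs with the target equal to the first input drives the weights toward w = (1, 0) and b = 0, at which point the weight changes fall below the tolerance and training stops.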
  12. REQUIREMENTS
  • A basic operating system (MS-DOS or above)
  • Turbo C++
  13. DESIGN AND IMPLEMENTATION The design of the neural network, which we call "Neurotron v1.0", involves five stages:
  • Implementing the structure
  • Training the artificial neural network
  • Getting the input to the network
  • Processing the data using the ADALINE network
  • Displaying the output
  14. Implementing the structure
  – A single-layer, feed-forward, fully connected network is designed and implemented using neuron and network objects.
  – It contains 72 (9x8) input neurons and a bias term.
  – It contains 8 output neurons to represent the ASCII code of the recognized character in binary.
  – It contains a total of 73x8 = 584 connections and weights.
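The output encoding described on this slide can be illustrated with a small helper (hypothetical; the name and bit order are my assumptions, not taken from the Neurotron source): the 8 output neurons hold the ASCII code of the character in binary, here as bipolar values, with +1 for a 1 bit and -1 for a 0 bit, most significant bit first.

```cpp
#include <cassert>
#include <vector>

// Encodes a character's ASCII code as 8 bipolar target values, MSB first.
// 'A' (65 = 01000001 in binary) becomes {-1, 1, -1, -1, -1, -1, -1, 1}.
std::vector<int> ascii_targets(char c) {
    std::vector<int> t(8);
    for (int bit = 0; bit < 8; ++bit)
        t[bit] = ((static_cast<unsigned char>(c) >> (7 - bit)) & 1) ? 1 : -1;
    return t;
}
```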
  15. Training the ANN
  • The ANN is trained using the delta rule mentioned earlier.
  • The initial weights are random numbers between -0.5 and +0.5.
  • It is currently trained on 70 characters: 58 'A's and one set each of 'B' to 'L'.
  • The input is given as a 9x8 matrix of 1s and 0s.
  16. Sample input matrix
  {
    {0,0,0,1,1,0,0,0},
    {0,0,1,0,0,1,0,0},
    {0,1,0,0,0,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
  },
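Since the ADALINE works on bipolar activations, a 9x8 grid of 1s and 0s like the one above has to be flattened into the 72 input activations, with 0 mapped to -1. This sketch (naming and layout are my assumptions) shows one way to do that; the bias is handled separately as the 73rd weight.

```cpp
#include <cassert>
#include <vector>

// Flattens a 9x8 binary grid row by row into 72 bipolar inputs:
// a 1 stays +1, a 0 becomes -1.
std::vector<double> flatten_grid(const int grid[9][8]) {
    std::vector<double> x;
    x.reserve(72);
    for (int r = 0; r < 9; ++r)
        for (int c = 0; c < 8; ++c)
            x.push_back(grid[r][c] ? 1.0 : -1.0);
    return x;
}
```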
  17. GETTING THE INPUT TO THE NETWORK
  • Input is received on a black-and-white grid via mouse clicks.
  18. PROCESSING THE DATA
  • Neurotron v1.0 loads the input, propagates it through the network, and calculates and displays the output when the Generate button in the GUI is clicked.
  19. DISPLAYING THE OUTPUT
  • The output is displayed in the Recognized Character box shown on screen.
  22. RESULTS AND FUTURE SCOPE
  • Neurotron v1.0 is currently trained to identify 70 characters, consisting of 58 'A's and one set of the characters 'B' to 'L'.
  • It may be trained to recognize any other character set by supplying suitable training data.
  • Learning capability is limited by the number of neurons and connections in the system. Training with very large character sets may result in the weights not converging, i.e. the net may be unable to learn the entire set.
  23. FURTHER IMPROVEMENTS
  • The network can be trained for a wide range of other characters, using an optimal training set.
  • The number of input and output neurons may be increased, since in the current system the weights may not converge for large training sets. This can be done by changing the way the output is obtained: instead of producing the ASCII code of the character, there may be one output neuron per character that outputs '1' for that character.
  • Another way of increasing the power of the neural network is to add one or more hidden layers and train the network using the backpropagation algorithm.
  • The application and the trainer can be integrated into a single, flexible piece of software.
  24. APPLICATIONS
  • Language processing
  • Image and audio processing
  • Finance and marketing
  • Control systems
  • Databases
  • Weather forecasting
  • Others
  25. THANK YOU