
International Journal of Graphics and Multimedia (IJGM), ISSN 0976-6448 (Print), ISSN 0976-6456 (Online), Volume 4, Issue 1, January-April 2013, pp. 09-19, © IAEME, www.iaeme.com/ijgm.asp

CHARACTER RECOGNITION OF KANNADA TEXT IN SCENE IMAGES USING NEURAL NETWORK

M. M. Kodabagi(1), S. A. Angadi(2), Chetana R. Shivanagi(3)
(1, 2) Department of Computer Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102, Karnataka, India
(3) Department of Information Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102, Karnataka, India

ABSTRACT

Character recognition in scene images is one of the most fascinating and challenging areas of pattern recognition, with many practical applications. It can contribute immensely to the advancement of automation and can improve the interface between man and machine in applications such as reading aids for the blind, traffic guidance systems, tour guide systems and location-aware systems. In this work, a novel method for recognizing basic Kannada characters in natural scene images is proposed. The method uses zone-wise horizontal and vertical profile features of character images and works in two phases. During training, zone-wise vertical and horizontal profile features are extracted from training samples and a neural network is trained. During testing, the test image is processed to obtain features and recognized using the neural network classifier. The method has been evaluated on 490 Kannada character images captured with 2-megapixel mobile-phone cameras at sizes 240x320, 600x800 and 900x1200, containing samples of different sizes and styles and with various degradations, and achieves an average recognition accuracy of 92%. The system is efficient and insensitive to variations in size and font, noise, blur and other degradations.

Keywords: Character Recognition, Display Boards, Low Resolution Images, Neural Network Classifier, Zone Wise Profile Features.
1. INTRODUCTION

In recent years, hand-held devices with increased computing and communication capabilities have become widespread and are used for purposes such as information access, mobile commerce, mobile learning and multimedia streaming. One new application that can be integrated into such devices is a text understanding and translation system for low-resolution natural scene images of display boards.

Every day, many people visit places across the world for business and other activities, and they often face problems with the local language. This is especially true in multilingual countries such as India. For these reasons, there is a demand for an automated system that understands text in low-resolution natural scene images and provides translated information in the local language.

Natural scene display board images contain text that often needs to be automatically recognized and processed. Scene text may be any textual part of a scene image, such as street names, institute names, shop names, building names, company names, road signs, traffic information and warning signs. Researchers have focused their attention on techniques for understanding text on such display boards, and there is a spurt of activity in the development of web-based intelligent hand-held systems for such applications.

Among the reported works [1-10] on intelligent systems for hand-held devices, few pertain to understanding written text on display boards, so scope exists for exploring such possibilities. Text understanding involves several processing steps: text detection and extraction; preprocessing for line, word and character separation; script identification; text recognition; and language translation. Text recognition at the word/character level is therefore an essential step and a premise for the later stages of a text understanding system. Recognition of text in low-resolution images of display boards is a difficult and challenging problem due to variability in font size, style and spacing between characters, skew, perspective distortion, viewing angle, uneven illumination, script-specific characters and other degradations [11].

The current work investigates the use of zone-wise statistical features for recognition of Kannada characters in scene images. The proposed method uses zone-wise horizontal and vertical profile features of character images and works in two phases. During training, zone-wise features are extracted from training samples and a neural network is trained; during testing, the test image is processed to obtain features and recognized using the neural network classifier. The method has been evaluated on 490 Kannada character images captured with 2-megapixel mobile-phone cameras at sizes 240x320, 600x800 and 900x1200, containing samples of different sizes and styles and with various degradations, and achieves an average recognition accuracy of 92%. The system is efficient and insensitive to variations in size and font, noise, blur and other degradations.

The rest of the paper is organized as follows: Section 2 surveys work related to character recognition of text in scene images; Section 3 presents the proposed method; Section 4 gives experimental results and discussion; Section 5 concludes the work and lists future directions.
2. RELATED WORKS

Character recognition of text in low-resolution natural scene images is a necessary step in the development of text understanding systems, and a substantial amount of research has addressed it. Some related works are summarized below.

A robust approach for recognition of text embedded in natural scenes is given in [11]. The method extracts features directly from image intensity and uses local intensity normalization to handle lighting variations effectively. A Gabor transform is then employed to obtain local features, and linear discriminant analysis (LDA) is used for feature selection and classification. The method was applied to a Chinese sign recognition task. This work was further extended by integrating a sign detection component with recognition [12]. The extended method embeds multi-resolution and multi-scale edge detection, adaptive searching, color analysis and affine rectification in a hierarchical framework for sign detection. The affine rectification recovers deformation of text regions caused by an inappropriate camera viewing angle and significantly improves the text detection rate and optical character recognition.

A framework that exploits both bottom-up and top-down cues for scene text recognition at the word level is presented in [13]. The method derives bottom-up cues from individual character detections in the image, then builds a Conditional Random Field model on these detections to jointly model their strength and the interactions between them. It also imposes top-down cues obtained from a lexicon-based prior, i.e. language statistics. The optimal word represented by the text image is obtained by minimizing the energy function of the random field model. The method reports significant accuracy improvements on two challenging public datasets, Street View Text and ICDAR 2003, but the reported accuracy is only 73% and requires further improvement.

The hierarchical multilayered neural network method described in [14] extracts oriented edges, corners and end points of color text characters in scene images. A method called selective metric clustering, which mainly deals with color, is employed in [15]. Fast lexicon-based and discriminative semi-Markov models for recognizing scene text are presented in [16, 17]. An object categorization framework based on a bag-of-visual-words representation for recognizing characters in natural scene images is described in [18]. The effectiveness of raw grayscale pixel intensities, shape context descriptors and wavelet features for character recognition is evaluated in [19]. A method for unconstrained handwritten Kannada vowel recognition based on invariant moments is described in [20].

The technique presented in [21] extracts stroke density, stroke length and number of strokes for handwritten Kannada and English character recognition. The method in [22] uses modified invariant moments for recognition of multi-font/size Kannada vowels and numerals. The model employed in [23] computes features from connected components and obtains 3k-dimensional feature vectors for memory-based recognition of camera-captured characters. The character recognition method described in [24] uses local features for recognition of multiple characters in a scene image.

A thorough study of the literature shows that some of the reported methods [12, 14, 18, 23] work with limited datasets, others [16, 17, 18] report low recognition rates in the presence of noise and other degradations, and very few works [18-22] pertain to recognition of Kannada characters in scene images. Hence, more research is desirable to obtain new discriminating features suitable for Kannada text in scene images. In the current work, zone-wise statistical features are employed for recognition of Kannada characters in low-resolution images. The proposed methodology is described in detail in the next section.
3. PROPOSED METHODOLOGY FOR CHARACTER RECOGNITION

The proposed method uses zone-wise horizontal and vertical profile features for recognition of Kannada characters in mobile-camera images. It comprises several phases: preprocessing, feature extraction, construction of a knowledge base for training the neural network, and training and character recognition with the neural network classifier. The block diagram of the proposed model is given in Fig. 1. Each phase is described in the following subsections.

3.1 Preprocessing

The input character image is binarized, cleaned of noise, cropped to its bounding box, and resized to a constant resolution of 30x30 pixels. Finally, the image is thinned.

Fig. 1. Block Diagram of Proposed Model
[The diagram shows two paths: training samples pass through preprocessing and zone-wise horizontal and vertical profile feature extraction to construct the knowledge base and train the neural network; a test sample passes through the same preprocessing and feature extraction and is classified by the neural network, yielding the recognized character.]

3.2 Feature extraction

In this phase, each image is divided into 15 vertical zones and 15 horizontal zones, where each horizontal zone is 2x30 pixels and each vertical zone is 30x2 pixels. The sum of all "on" pixels in a zone is taken as the feature value for that zone. The 30 features computed from all zones are stored in a feature vector X as described in equations (1) to (5):

X = [VFeatures  HFeatures]                                (1)
VFeatures = [Vf_i],  1 <= i <= 15                         (2)
HFeatures = [Hf_i],  1 <= i <= 15                         (3)

where Hf_i, the feature value of the i-th horizontal zone, and Vf_i, the feature value of the i-th vertical zone, are computed as

Hf_i = \sum_{x=1}^{2} \sum_{y=1}^{30} g_i(x, y)           (4)
Vf_i = \sum_{x=1}^{30} \sum_{y=1}^{2} g_i(x, y)           (5)

and g_i is the i-th zone encompassing the chosen region of the character image. The dataset of such feature vectors obtained from the training samples is used to construct the knowledge base.
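The preprocessing and feature extraction steps can be summarized in code. The following is a minimal sketch, not the authors' implementation: it assumes OpenCV and scikit-image are available, median filtering stands in for the unspecified noise removal, and the function names are illustrative.

```python
import numpy as np
import cv2
from skimage.morphology import skeletonize

def preprocess(gray):
    """Binarize, denoise, crop to the bounding box, resize to 30x30, thin (Sec. 3.1)."""
    # Otsu thresholding, inverted so character ("on") pixels become nonzero
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    bw = cv2.medianBlur(bw, 3)                    # simple noise removal (assumed)
    ys, xs = np.nonzero(bw)                       # bounding box of the character
    bw = bw[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    bw = cv2.resize(bw, (30, 30), interpolation=cv2.INTER_NEAREST)
    return skeletonize(bw > 0).astype(np.uint8)   # thinned 30x30 binary image

def zone_features(img):
    """30 zone-wise profile features of a 30x30 thinned image (eqs. 1-5):
    15 vertical zones of 30x2 pixels, then 15 horizontal zones of 2x30."""
    vf = [int(img[:, 2 * i:2 * i + 2].sum()) for i in range(15)]  # Vf_i, eq. (5)
    hf = [int(img[2 * i:2 * i + 2, :].sum()) for i in range(15)]  # Hf_i, eq. (4)
    return np.array(vf + hf, dtype=float)         # X = [VFeatures HFeatures], eq. (1)
```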
3.3 Construction of Knowledge Base for Training the Neural Network

For knowledge base construction, images were captured from display boards of Karnataka Government offices, street names, institute names, shop names, building names, company names, road signs, and traffic direction and warning signs, using 2-megapixel cameras on mobile phones. The images were captured at sizes 240x320, 600x800 and 900x1200 at distances of 1 to 6 meters, and all of them are used to evaluate the proposed model. Images captured at 240x320 from 1 to 3 meters are clear when the viewing angle is parallel to the text plane; perspective distortion and other degradations occur beyond 3 meters and at other viewing angles. Images captured from 1 to 6 meters at the other stated resolutions are clear, though perspective distortion still occurs when the viewing angle is not parallel. The images in the database are characterized by variable font size and style, uneven thickness, minimal information context, small skew, noise, perspective distortion and other degradations.

The image database consists of 490 basic Kannada character images of varying resolutions, of which 50% are used for training. During training, features are extracted from all training samples and the knowledge base is organized as a dataset of feature vectors as depicted in (6). The stored information sufficiently characterizes all variations in the input. Testing is carried out on all samples, i.e. 50% trained and 50% untrained. Some sample images captured from display boards with 2-megapixel mobile-phone cameras are shown in Fig. 2.

KB = [X_j],  1 <= j <= N                                  (6)

where KB is the knowledge base comprising the feature vectors of the training samples, X_j is the feature vector of the j-th image in the KB, and N is the number of training sample images.

Fig. 2. Sample Images Captured from 2 Mega Pixels Cameras on Mobile Phones
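As a sketch of the knowledge base assembly in (6), together with the input/target normalization mentioned in Section 3.4, the training feature vectors can be stacked and scaled to a fixed range. The one-hot target encoding and the 49-class count (49 basic characters x 10 samples = 490 images, inferred from Table 3) are assumptions of this sketch, not statements from the paper.

```python
import numpy as np

def build_knowledge_base(train_images, train_labels, n_classes=49):
    """KB = [X_j], 1 <= j <= N (eq. 6): one zone-wise feature vector per training
    image, normalized per feature to [-1, 1], with one-hot targets for the net."""
    KB = np.stack([zone_features(preprocess(img)) for img in train_images])
    lo, hi = KB.min(axis=0), KB.max(axis=0)
    KBn = 2.0 * (KB - lo) / np.maximum(hi - lo, 1e-9) - 1.0  # input normalization
    targets = np.eye(n_classes)[np.asarray(train_labels)]    # one-hot target vectors
    return KBn, targets, (lo, hi)  # keep (lo, hi) to normalize test vectors identically
```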
3.4 Training and Recognition with a Feed-Forward Neural Network

After the dataset is obtained and organized into the knowledge base of basic Kannada character images, training and recognition are carried out using a feed-forward neural network, as described below.

Before network design, the data in the knowledge base is prepared to cover the range of inputs for which the network will be used. A feed-forward neural network cannot accurately extrapolate beyond the range of its inputs, so the training data is chosen to span the full input space. A normalization step is then applied to both the input vectors and the target vectors in the dataset, so that the network output always falls into a normalized range. Once the data is ready, the feed-forward network object is created with 30 neurons in the input layer and 15 neurons in the hidden layer, and configured with default weights and biases for the prepared dataset. The network is configured with tan-sigmoid transfer functions in the input and hidden neurons, linear transfer functions for the output neurons, and the Levenberg-Marquardt and Gradient Descent with Momentum learning algorithms. The performance function is the default for feed-forward networks, mean squared error. The learning rate and minimum performance parameters are initialized to 0.01. The magnitude of the gradient and the number of validation checks are used to terminate training; the number of validation checks is set to 10 and represents the number of successive iterations for which the validation performance fails to decrease.

After the network weights and biases are initialized and the other training parameters configured, the network is ready for training. The multilayer feed-forward network is trained for function approximation (nonlinear regression) or pattern recognition with network inputs and target outputs. The training process tunes the weights and biases of the network to optimize network performance, as defined by the network performance function. After training, performance is verified using several trained and test character images. The neural network classifier gives an average recognition accuracy of 92%.
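A minimal NumPy sketch of the training loop is given below, using the gradient-descent-with-momentum variant of the two algorithms the paper cites; the Levenberg-Marquardt option and the validation-check stopping rule are not reproduced. The 30-15 layer sizes, tanh (tansig) hidden units, linear outputs, mean-squared-error cost and 0.01 learning rate follow the paper, while the momentum value, epoch count and weight initialization are assumptions.

```python
import numpy as np

def train_ffnn(X, T, hidden=15, lr=0.01, momentum=0.9, epochs=5000, seed=0):
    """Feed-forward net: tanh hidden layer, linear output layer, MSE cost,
    trained by batch gradient descent with momentum."""
    rng = np.random.default_rng(seed)
    n_in, n_out, N = X.shape[1], T.shape[1], X.shape[0]
    W1 = rng.normal(0.0, 0.1, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, n_out)); b2 = np.zeros(n_out)
    vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
    vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden (tansig) activations
        E = (H @ W2 + b2) - T               # output error, linear output layer
        gW2, gb2 = H.T @ E / N, E.mean(axis=0)
        D = (E @ W2.T) * (1.0 - H**2)       # backpropagate through tanh
        gW1, gb1 = X.T @ D / N, D.mean(axis=0)
        vW2 = momentum * vW2 - lr * gW2; W2 += vW2   # momentum updates
        vb2 = momentum * vb2 - lr * gb2; b2 += vb2
        vW1 = momentum * vW1 - lr * gW1; W1 += vW1
        vb1 = momentum * vb1 - lr * gb1; b1 += vb1
    return W1, b1, W2, b2
```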
4. EXPERIMENTAL RESULTS AND ANALYSIS

The proposed methodology has been evaluated on 490 low-resolution basic Kannada character images of varying font size and style, uneven thickness and other degradations. The experimental results of processing a sample character image are described in Section 4.1. Results for several other character images dealing with various issues, the overall performance of the system, and comparisons with other methods are reported in Section 4.2.

4.1. Experimental Analysis for a Sample Kannada Character Image

The character image with uneven thickness, uneven lighting conditions and other degradations given in Fig. 3a is initially preprocessed: it is binarized, resized to a constant size of 30x30 pixels and thinned, as shown in Fig. 3b.

Fig. 3. a) A Sample Character Test Image b) Preprocessed Image

The image is then divided into 15 vertical zones and 15 horizontal zones, and the zone-wise statistical features computed from all zones are organized into a feature vector T as in equations (1) to (5). The experimental values for all zones are shown in Table 1.

TABLE 1. Zone-Wise Vertical and Horizontal Features of the Sample Input Image in Fig. 3b

VFeatures: 4 3 13 5 6 6 6 8 6 7 6 9 13 13 4
HFeatures: 2 2 3 6 3 4 9 5 5 6 4 4 5 9 15
T = [4 3 13 5 6 6 6 8 6 7 6 9 13 13 4 2 2 3 6 3 4 9 5 5 6 4 4 5 9 15]

The values in Table 1 clearly depict the distribution of pixels in the various segments/primitives of the character image. These distributions differ from character to character because of the varying positions and shapes of the segments/primitives of basic Kannada characters, as demonstrated for two sample images in Table 2.

TABLE 2. Vertical and Horizontal Features of Two Sample Images Demonstrating Pixel Distribution Patterns (the character glyphs are images in the original and are not reproduced here)

Character 1: 9 5 6 2 3 2 4 3 11 7 8 11 21 10 2 13 1 5 114 4 4 13 9 4 8 5 2 3 5 4
Character 2: 12 8 6 6 6 6 14 18 8 6 6 6 9 14 10 32 2 6 8 22 2 2 17 17 9 7 12 10 16

The values in Table 2 show that the feature values in most corresponding zones of the two characters are distinct. For example, the feature values 9, 5, 6, 2 of vertical zones 1 to 4 of the character in the first row are distinct from the values 12, 8, 6, 6 in the corresponding zones of the character in the second row, and similar differences exist in the other zones. The arrangement of these features into a feature vector creates a pixel distribution pattern that makes samples distinguishable. It is also observed that the proposed zone-wise features tolerate uncertainty in the appearance of the primitives of a character image. After features are extracted from the test input image in Fig. 3a, the neural network classifier is used to recognize the character.
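Recognition of a test character then amounts to repeating the preprocessing and feature extraction of Sections 3.1-3.2, normalizing with the training statistics, and taking the network output with the largest response. The following is again a hedged sketch built on the helper functions above; the argmax decision rule is an assumption about how the classifier's output is read.

```python
import numpy as np

def recognize(gray, params, norm):
    """Classify one grayscale character image with the trained network."""
    W1, b1, W2, b2 = params
    lo, hi = norm                                        # training normalization stats
    x = zone_features(preprocess(gray))                  # 30-element feature vector T
    xn = 2.0 * (x - lo) / np.maximum(hi - lo, 1e-9) - 1.0
    y = np.tanh(xn @ W1 + b1) @ W2 + b2                  # forward pass
    return int(np.argmax(y))                             # index of recognized character
```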
4.2. Experimental Analysis Dealing with Various Issues

The proposed methodology produced good results for low-resolution images containing Kannada characters of different sizes, fonts and alignments against varying backgrounds. Its advantage lies in the low computation involved in the feature extraction and recognition phases. During the experiments it was noticed that the zone-wise features made samples separable in feature space. Hence, the proposed method is robust and achieves an average recognition accuracy of 92%. The overall performance of the system on the dataset is reported in Table 3, and a comparison of the proposed method with other scene text recognition methods is given in Table 4.

TABLE 3. Overall system performance (49 basic characters, 10 test samples each; the per-character glyph column consists of images in the original and is not reproduced here, so the results are summarized by accuracy level)

Recognition accuracy | Characters | Samples correctly recognized
100% (10 of 10)      | 16         | 160
90%  (9 of 10)       | 27         | 243
80%  (8 of 10)       | 6          | 48
Overall              | 49         | 451 of 490 (92%)
A closer examination of the results revealed that misclassifications arise from noise, strong similarity between character structures/primitives, and other degradations. It was also noticed that the zonal features tolerate variations in the appearance of character primitives, and that better performance can be obtained if the knowledge base is trained for all variations and degradations.

TABLE 4. Comparison of the Proposed Method with Other Scene Text Recognition Methods

Author | Approach | Features | Recognition Accuracy
Jerod J. Weinman et al. (2008) | A Discriminative Semi-Markov Model for Robust Scene Text Recognition | Wavelet features | 82.08%
Onur Tekdas et al. (2009) | Recognizing Characters in Natural Scenes: A Feature Study | Raw intensities, shape contexts and wavelet features | 85.32%
Masakazu Iwamura et al. (2011) | Recognition of Multiple Characters in a Scene Image Using Arrangement of Local Features | Scale-invariant feature transform and voting method | 76.5%
Anand Mishra et al. (2012) | Top-down and bottom-up cues for scene text recognition | Bottom-up cues, language statistics and conditional random field model | 73%
Proposed Method | Character Recognition of Kannada Text in Scene Images Using Neural Network | Zone-wise vertical and horizontal profile features | 92%

5. CONCLUSION

In this work, a novel method for recognition of basic Kannada characters from camera-based images is proposed. The method uses zone-wise horizontal and vertical profile features and a neural network classifier, and works in two phases, training and testing. Exhaustive experimentation was carried out to analyze the zone-wise horizontal and vertical profile features with the neural network classifier. The results are encouraging, and the system is observed to be robust and insensitive to several challenges such as unusual fonts, variable lighting conditions, noise and blur. The method was tested on 490 samples and gives an average recognition accuracy of 92%. The proposed method can be extended to character recognition with new sets of features and classification algorithms.
REFERENCES

[1] Abowd Gregory D., Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper and Mike Pinkerton, 1997, "CyberGuide: A mobile context-aware tour guide", Wireless Networks, 3(5), pp. 421-433.
[2] Natalia Marmasse and Chris Schmandt, 2000, "Location-aware information delivery with comMotion", In Proceedings of the Conference on Human Factors in Computing Systems, pp. 157-171.
[3] Tollmar K., Yeh T. and Darrell T., 2004, "IDeixis - Image-Based Deixis for Finding Location-Based Information", In Proceedings of the Conference on Human Factors in Computing Systems (CHI '04), pp. 781-782.
[4] Gillian Leetch and Eleni Mangina, 2005, "A Multi-Agent System to Stream Multimedia to Handheld Devices", In Proceedings of the Sixth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA '05).
[5] Wichian Premchaiswadi, 2009, "A mobile image search for tourist information systems", In Proceedings of the 9th International Conference on Signal Processing, Computational Geometry and Artificial Vision, pp. 62-67.
[6] Ma Chang-jie and Fang Jin-yun, 2008, "Location Based Mobile Tour Guide Services Towards Digital Dunhuang", International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B4, Beijing.
[7] Shih-Hung Wu, Min-Xiang Li, Ping-che Yang and Tsun Ku, 2010, "Ubiquitous Wikipedia on Handheld Device for Mobile Learning", 6th IEEE International Conference on Wireless, Mobile, and Ubiquitous Technologies in Education, pp. 228-230.
[8] Tom Yeh, Kristen Grauman and K. Tollmar, 2005, "A picture is worth a thousand keywords: image-based object search on a mobile platform", In Proceedings of the Conference on Human Factors in Computing Systems, pp. 2025-2028.
[9] Fan X., Xie X., Li Z., Li M. and Ma, 2005, "Photo-to-search: using multimodal queries to search the web from mobile phones", In Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval.
[10] Lim Joo Hwee, Jean-Pierre Chevallet and Sihem Nouarah Merah, 2005, "SnapToTell: Ubiquitous information access from camera", Mobile Human-Computer Interaction with Mobile Devices and Services, Glasgow, Scotland.
[11] Jing Zhang, Xilin Chen, Andreas Hanneman, Jie Yang and Alex Waibel, 2002, "A Robust Approach for Recognition of Text Embedded in Natural Scenes", In Proceedings of the 16th International Conference on Pattern Recognition, Volume 3, pp. 204-207.
[12] Xilin Chen, Jie Yang, Jing Zhang and Alex Waibel, January 2004, "Automatic Detection and Recognition of Signs From Natural Scenes", IEEE Transactions on Image Processing, Vol. 13, No. 1, pp. 87-99.
[13] Anand Mishra, Karteek Alahari and C. V. Jawahar, 2012, "Top-Down and Bottom-Up Cues for Scene Text Recognition", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Zohra Saidane and Christophe Garcia, 2007, "Automatic Scene Text Recognition using a Convolutional Neural Network", In Proceedings of CBDAR, pp. 100-106.
[15] Céline Mancas-Thillou, June 2007, "Natural Scene Text Understanding", in Segmentation and Pattern Recognition, I-Tech, Vienna, Austria, pp. 123-142.
[16] Jerod J. Weinman, Erik Learned-Miller and Allen Hanson, September 2007, "Fast Lexicon-Based Scene Text Recognition with Sparse Belief Propagation", In Proceedings of the International Conference on Document Analysis and Recognition, Curitiba, Brazil.
[17] Jerod J. Weinman, Erik Learned-Miller and Allen Hanson, December 2008, "A Discriminative Semi-Markov Model for Robust Scene Text Recognition", In Proceedings of the International Conference on Pattern Recognition (ICPR), Tampa, FL, USA, pp. 1-5.
[18] Teófilo E. de Campos and Bodla Rakesh Babu, 2009, "Character Recognition in Natural Images", In Proceedings of the International Conference on Computer Vision Theory and Applications, pp. 273-280.
[19] Onur Tekdas and Nikhil Karnad, 2009, "Recognizing Characters in Natural Scenes: A Feature Study", CSCI 5521 Pattern Recognition, pp. 1-14.
[20] Sangame S. K., Ramteke R. J. and Rajkumar Benne, 2009, "Recognition of isolated handwritten Kannada vowels", Advances in Computational Research, ISSN: 0975-3273, Volume 1, Issue 2, pp. 52-55.
[21] B. V. Dhandra, Mallikarjun Hangarge and Gururaj Mukarambi, 2010, "Spatial Features for Handwritten Kannada and English Character Recognition", IJCA Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR), pp. 146-151.
[22] Mallikarjun Hangarge, Shashikala Patil and B. V. Dhandra, 2010, "Multi-font/size Kannada Vowels and Numerals Recognition Based on Modified Invariant Moments", IJCA Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR), pp. 126-130.
[23] Masakazu Iwamura, Tomohiko Tsuji and Koichi Kise, 2010, "Memory-Based Recognition of Camera-Captured Characters", In Proceedings of the 9th IAPR International Workshop on Document Analysis Systems, pp. 89-96.
[24] Masakazu Iwamura, Takuya Kobayashi and Koichi Kise, 2011, "Recognition of Multiple Characters in a Scene Image Using Arrangement of Local Features", In Proceedings of the International Conference on Document Analysis and Recognition, pp. 1409-1413.
[25] Primekumar K. P. and Sumam Mary Idicula, 2012, "Performance of On-line Malayalam Handwritten Character Recognition using HMM and SFAM", International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, pp. 115-125, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[26] Lokesh S. Khedekar and A. S. Alvi, 2013, "Advanced Smart Credential Cum Unique Identification and Recognition System (ASCUIRS)", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, pp. 97-104, ISSN Print: 0976-6367, ISSN Online: 0976-6375.