This document describes the development of an online handwritten character recognition system using a modified hybrid neural network model. The authors developed a hybrid feature extraction technique that combines stroke information, contour pixels, and zoning of characters to create feature vectors, together with a hybrid neural network model combining modified counterpropagation and optical backpropagation networks. Experiments using 6,200 character samples from 50 subjects achieved a 99% recognition rate, with an average recognition time of 2 milliseconds when testing samples from new subjects.
11. Development of a feature extraction technique for online character recogni... (Alexander Decker)
The document describes a study that developed a hybrid feature extraction technique for online character recognition. The technique combines geometrical and statistical features. Geometrical features included stroke information (number, pressure, junctions, horizontal projection count) and contour pixels. Statistical features included zoning, which divides the character image into zones and calculates the percentage of black pixels in each zone. A hybrid algorithm was created that integrated geometrical and statistical features to take advantage of their complementarity and gain new insights into character properties. The goal was to improve recognition performance over existing single-feature techniques.
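The zoning feature described above (divide the character image into zones, then compute the fraction of black pixels per zone) can be sketched in Python. This is a minimal illustration, not the paper's implementation; the 4x4 grid, the synthetic stroke image, and the use of per-zone foreground fractions as the full feature vector are assumptions.

```python
import numpy as np

def zoning_features(binary_img, rows=4, cols=4):
    """Split a binary character image into rows x cols zones and
    return the fraction of foreground pixels in each zone."""
    h, w = binary_img.shape
    feats = []
    for zi in np.array_split(np.arange(h), rows):
        for zj in np.array_split(np.arange(w), cols):
            zone = binary_img[np.ix_(zi, zj)]
            feats.append(zone.mean())  # fraction of "black" pixels
    return np.array(feats)

# A crude vertical stroke as a stand-in for a character image.
char = np.zeros((16, 16), dtype=float)
char[4:12, 7:9] = 1.0
fv = zoning_features(char)  # 16-dimensional feature vector
```

Each zone's value is scale-normalized by construction (a fraction, not a raw count), which is one reason zoning tolerates size variation between writers.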
This document discusses offline handwritten Devanagari script recognition using a probabilistic neural network. It begins with an abstract that outlines the goal of recognizing offline handwritten Devanagari numerals using structural and local features classified with a probabilistic neural network classifier. The introduction provides background on handwritten numeral recognition challenges. The document then reviews related work on character recognition from the early 1900s to modern advancements, describes the Devanagari script, discusses the theory of neural networks and the proposed recognition method, and concludes that accurate recognition depends on input quality and that more efficient, accurate systems are needed to recognize varied writing styles.
Design and implementation of optical character recognition using template mat... (eSAT Journals)
Abstract
Optical character recognition (OCR) is an efficient way of converting a scanned image into machine-encoded text that can then be edited. A variety of methods have been implemented in the field of character recognition. This paper proposes optical character recognition using template matching, with templates formed in a variety of fonts and sizes. In the proposed system, image pre-processing, feature extraction, and classification algorithms have been implemented to build an effective character recognition technique for different scripts. The results of this approach are also discussed. The system is implemented in Matlab.
Keywords: OCR, Feature Extraction, Classification
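The template-matching classification this abstract proposes can be sketched as follows. This is an illustrative sketch, not the paper's Matlab system; the normalized cross-correlation score and the toy "I"/"O" templates are assumptions.

```python
import numpy as np

def best_template_match(glyph, templates):
    """Classify a glyph by normalized cross-correlation against
    equally sized templates; returns the best-matching label."""
    g = glyph - glyph.mean()
    best, best_score = None, -np.inf
    for label, t in templates.items():
        tt = t - t.mean()
        denom = np.linalg.norm(g) * np.linalg.norm(tt)
        score = (g * tt).sum() / denom if denom else -np.inf
        if score > best_score:
            best, best_score = label, score
    return best

# Toy 8x8 templates: a vertical bar "I" and a ring "O".
I = np.zeros((8, 8)); I[:, 3:5] = 1
O = np.zeros((8, 8)); O[1:7, 1:7] = 1; O[2:6, 2:6] = 0
templates = {"I": I, "O": O}

noisy = I.copy(); noisy[0, 0] = 1  # one flipped pixel of noise
result = best_template_match(noisy, templates)
```

Mean-centering and norm division make the score tolerant of brightness and contrast differences, which is why normalized correlation is a common choice over raw pixel distance.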
This document discusses an OCR-based speech synthesis system developed using LabVIEW 2013. The system has two main parts: optical character recognition and text-to-speech conversion. It uses a digital camera to capture images, performs preprocessing like binarization, then matches characters to a template for recognition. The recognized text is converted to speech using text-to-speech synthesis for audio output. The system achieves 75-80% accuracy but could be improved with support for more fonts and font sizes.
A STUDY ON OPTICAL CHARACTER RECOGNITION TECHNIQUES (ijcsitcejournal)
Optical Character Recognition (OCR) is the process that enables a system to identify, without human intervention, scripts or alphabets written in a user's language. OCR has grown into one of the most successful applications of technology in the fields of pattern recognition and artificial intelligence. In this survey we study various OCR techniques, and we analyze and examine the theoretical and numerical models of optical character recognition. Both OCR and Magnetic Character Recognition (MCR) techniques are widely used for the recognition of patterns or alphabets. In general, the characters appear as pixel images and may be either handwritten or printed, of any size, shape, or orientation. In MCR, by contrast, the characters are printed with magnetic ink, and the reading machine categorizes each character on the basis of the unique magnetic field it produces. Both MCR and OCR find use in banking and other business applications. Earlier research on optical character recognition has shown that handwritten text imposes no restriction on writing style; handwritten material is difficult to recognize due to diverse human handwriting styles and variation in the angle, size, and shape of letters. A variety of OCR approaches are discussed here, along with their performance.
Optical character recognition (ocr) ppt (Deijee Kalita)
The document discusses optical character recognition (OCR), which is the process of converting scanned images of printed or handwritten text into machine-encoded text. It provides a brief history of OCR, explaining some of the early developments. It also outlines the typical steps involved, including pre-processing, character recognition, and post-processing. Examples of applications of OCR technology are given.
Optical character recognition (OCR) is a technology that converts images of typed, handwritten or printed text into machine-encoded text. The document describes the OCR process which includes image pre-processing, segmentation, feature extraction and recognition using a multi-layer perceptron neural network. It discusses advantages such as increased efficiency and ability to instantly search text. Disadvantages include issues with low quality documents. Applications include data entry for business documents and making printed documents searchable.
This document summarizes and reviews various techniques for optical character recognition (OCR) of English text, including matrix matching, fuzzy logic, feature extraction, structural analysis, and neural networks. It discusses the structure and stages of OCR systems, including image preprocessing, segmentation, feature extraction, classification, and output. Challenges for OCR systems include degraded documents like old books, photocopies, and newspapers. The document reviews several related works on OCR and discusses techniques for English, Indian languages, license plate recognition, document binarization, and removing "bleed-through" effects from financial documents.
I have presented a PowerPoint presentation on the basics of optical character recognition. It focuses on how OCR is used in the scanning process, whether it can be used for document scanning, and its uses.
The document describes a project to develop optical character recognition (OCR) software for recognizing online and offline handwritten text in multiple languages. It aims to recognize characters from scanned documents or real-time handwriting input and create a user profile. The system scope includes recognizing handwriting from multiple users and cursive script. It will store recognized characters in a text file and optionally convert words to audio for reading documents aloud. The document provides details on OCR technology, applications, literature review, user and system requirements, and the project's goal of using OCR for applications like forms processing.
This document provides an introduction to character recognition and optical character recognition (OCR). It discusses the purpose and history of OCR, including early technologies from the 1910s-1930s. It also covers the scope, technology used, and how to use OCR software. Finally, it discusses the feasibility study for an OCR project, including technical, operational, and economic feasibility. The overall purpose is to develop an efficient OCR software system to convert paper documents to electronic format for improved document processing and searchability.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This research sets out a methodology through which data can be extracted from everyday printed bills and invoices. The extracted data can be used extensively later on, such as for machine learning or statistical analysis. The research focuses on extracting the final bill amount, itinerary, date, and similar data from bills and invoices, as they encapsulate an ample amount of information about a user's purchases, likes, and dislikes. Optical Character Recognition (OCR) technology provides full alphanumeric recognition of printed or handwritten characters from images. Initially, OpenCV is used to detect the bill or invoice in the image and filter out unnecessary noise. The intermediate image is then passed for further processing to the Tesseract OCR engine, which applies text segmentation to extract written text in various fonts and languages. The methodology proves highly accurate when tested on a variety of input images of bills and invoices.
Optical Character Recognition Using Python (YogeshIJTSRD)
Optical Character Recognition is a process of classifying optical patterns as alphanumeric or other characters. It also includes segmentation, feature extraction, and classification. Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The idea of the project is to extract text from images using deep learning for OCR. Ponvizhi. U | Ramya. P | Ramya. R, "Optical Character Recognition Using Python", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-3, April 2021. URL: https://www.ijtsrd.com/papers/ijtsrd41099.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/41099/optical-character-recognition-using-python/ponvizhi-u
Optical Character Recognition (OCR) based Retrieval (Biniam Asnake)
The document outlines research works on optical character recognition (OCR) systems, including both global and local (Amharic language) research. It discusses several local studies from 1997-2011 focused on developing OCR for printed, typewritten, and handwritten Amharic text. The studies explored various preprocessing, segmentation, and recognition algorithms and achieved recognition accuracy rates ranging from 15-99% depending on the type of Amharic text and the techniques used. Future research directions included improving techniques for formatted text and different font styles, and improving accuracy.
The document presents a presentation on character recognition and conversion. It discusses the purpose of character recognition as document processing and speeding up recognition. It describes the architecture as containing templates, scanning, recognition, and coding. It details testing through sample and performance testing, showing the conversion of various images to text. It concludes by discussing applications and limitations of character recognition technology.
The document discusses Optical Character Recognition (OCR) and the IMPACT project. IMPACT is supported by the European Community under the FP7 ICT Work Programme and aims to improve access to historical text. It is coordinated by the National Library of the Netherlands and involves several other European libraries and research institutions. ABBYY provides OCR technology for the IMPACT members to recognize text in old documents.
Optical character recognition (OCR) is the conversion of images of typed or printed text into machine-encoded text. The document discusses OCR including defining it, describing its problem overview, types, steps in the OCR process like pre-processing and character recognition, accuracy considerations, use of free OCR software, pros and cons, and areas for further research like improving recognition of cursive text.
Optical Character Recognition (OCR) involves the conversion of scanned images of printed text into machine-readable text. It is heavily used in industry for applications like editing, scanning, searching, and compact storage. The document discusses developing an OCR system using machine learning, artificial intelligence, and neural networks to recognize characters despite variations in image quality, orientation, and language. It outlines the technologies, current progress implementing linear and logistic regression models, and plans for character segmentation and feature extraction.
The document discusses Optical Character Recognition (OCR) and describes the key steps and algorithms involved. It summarizes the main modules in an OCR system including pre-processing, feature extraction, classification, and post-processing. It then discusses two specific algorithms - Principal Component Analysis and Learning Vector Quantization - that can be used to implement OCR. The document also evaluates the feasibility and provides a high-level design for an OCR system including graphical user interface, scanner, training, and main modules.
A presentation on new technology based on the recognition of letters, available for both soft and hard copy and supporting all formats in soft copy. Optical character recognition based on recognizing letters across all existing languages.
CONTENT RECOVERY AND IMAGE RETRIEVAL IN IMAGE DATABASE CONTENT RETRIEVING IN TE... (Editor IJMTER)
Digital images are used in magazines, blogs, websites, television, and more. Digital image processing techniques are used for feature selection, pattern extraction, classification, and retrieval; color, texture, and shape features are used, and the field also supports the computer graphics and computer vision domains. Scene text recognition is performed with two schemes: character recognizer and binary character classifier models. A character recognizer is trained to predict the category of a character in an image patch. A binary character classifier is trained for each character class to predict the presence of that category in an image patch. Scene text recognition is performed on detected text regions. A pixel-based layout analysis method is adopted to extract text regions and segment text characters in images. Text character segmentation is carried out using the color uniformity and horizontal alignment of text characters. A discriminative character descriptor is designed by combining several feature detectors and descriptors; Histogram of Oriented Gradients (HOG) is used to compute the character descriptors. Character structure is modeled for each character class by designing stroke configuration maps. The scene text extraction scheme also supports smart mobile devices. Text recognition methods are used in text understanding and text retrieval applications. The text recognition scheme is enhanced with a content-based image retrieval process, and the system is integrated with additional representative and discriminative features for the text structure modeling process. The system is further enhanced to perform text- and word-level recognition using lexicon analysis, and the training process includes a word database update task.
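The HOG descriptor mentioned above can be illustrated with a simplified, self-contained sketch: per-cell histograms of gradient orientation weighted by gradient magnitude, without the block normalization of full HOG. The cell size, bin count, and synthetic edge image are assumptions for illustration.

```python
import numpy as np

def cell_orientation_histograms(img, cell=8, bins=9):
    """Toy HOG-style descriptor: for each cell, a histogram of
    unsigned gradient orientation weighted by gradient magnitude
    (no block normalization, unlike full HOG)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

img = np.zeros((32, 32)); img[:, 16:] = 1.0   # a vertical edge
desc = cell_orientation_histograms(img)       # 4x4 cells x 9 bins = 144-D
```

Because each histogram pools orientations within a cell, the descriptor captures local stroke direction while tolerating small shifts of the character inside the patch.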
OCR (Optical Character Recognition) is a technology that recognizes text within digital images. It examines text in documents and converts characters into machine-readable code. OCR is commonly used to convert printed paper documents into editable digital text files. The basic process involves preprocessing the image to clean it up, isolating individual characters, and using character recognition libraries or more advanced techniques to identify each character and assign it the corresponding text. OCR is needed to convert scanned documents into text-searchable files that can be edited, searched, and managed more easily within document systems.
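The "preprocessing the image to clean it up" step usually starts with binarization; one common choice is Otsu's global threshold, sketched here as an illustration (the synthetic two-level test image is an assumption, and real pipelines often add denoising and deskewing).

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance
    (Otsu's method) for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan           # ignore degenerate splits
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))

# Synthetic bimodal image: dark background (40), bright text (200).
img = np.concatenate([np.full(500, 40), np.full(500, 200)])
img = img.astype(np.uint8).reshape(25, 40)
t = otsu_threshold(img)
binary = img > t   # foreground/background mask for later segmentation
```

Otsu's method needs no tuning parameter, which is why it is a common default before character isolation.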
1. The document discusses optical character recognition (OCR), including its applications, how it works, and the platform used.
2. OCR involves using software to convert scanned images of text into machine-encoded text by recognizing glyphs and classifying characters through feature extraction and neural networks.
3. The authors explore using OCR for tasks like digitization and security monitoring to reduce human error, and discuss future enhancements like recognizing multiple characters and improving accuracy.
Artificial neural networks are commonly used in optical character recognition algorithms due to their flexibility, ability to learn, and power. ANNs work by taking an input, running it through a network of neurons arranged in layers, and producing an output. They can be trained to recognize patterns through a learning stage where they are given many examples of input and output pairs. Once trained, ANNs can accurately evaluate new inputs and recognize characters at a 98% rate with only 5% error. Common types of ANNs include feedforward, recurrent, radial basis function, and self-organizing networks.
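The input-through-layers-to-output flow described above can be sketched as a single-hidden-layer forward pass. This is a generic illustration, not any cited paper's network; the layer sizes (a flattened 8x8 glyph mapped to 26 letter scores) and random weights are assumptions, and a real system would train the weights first.

```python
import numpy as np

rng = np.random.default_rng(42)

def mlp_forward(x, W1, b1, W2, b2):
    """Feedforward pass through one hidden layer of sigmoid units,
    as in classic OCR classifiers."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    hidden = sigmoid(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

n_in, n_hidden, n_out = 64, 32, 26   # 8x8 glyph -> 26 letter scores
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

glyph = rng.random(n_in)             # stand-in for a flattened character
scores = mlp_forward(glyph, W1, b1, W2, b2)
pred = int(scores.argmax())          # index of the most likely letter
```

Training (the "learning stage" in the text) would adjust W1, b1, W2, b2 from input/output example pairs, typically by backpropagation.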
Comparative Analysis of PSO and GA in Geom-Statistical Character Features Sel... (IJERA Editor)
Online handwriting recognition attracts special interest today due to the increased use of handheld devices, and it remains a difficult problem because of the high variability and ambiguity in the character shapes written by individuals. One major problem encountered by researchers developing character recognition systems is the selection of efficient (optimal) features. In this paper, a feature extraction technique for an online character recognition system was developed using a hybrid of geometrical and statistical (geom-statistical) features. Through the integration of geometrical and statistical features, insights were gained into new character properties, since these types of features are considered complementary. Several optimization techniques have been used in the literature for feature selection in character recognition, such as the Ant Colony Optimization (ACO) algorithm, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Simulated Annealing, but a comparative analysis of GA and PSO in online character recognition has not been carried out. In this paper, the performance of GA and PSO in optimizing the geom-statistical features for online character recognition was compared, using Modified Optical Backpropagation (MOBP) as the classifier. The system was simulated in Matlab 7.10a. The results show that PSO is a well-suited optimization algorithm for selecting optimal features, as it outperforms GA in terms of the number of features selected, training time, and recognition accuracy.
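PSO-based feature selection as described above can be sketched with a binary PSO: velocities are real-valued, and a sigmoid of each velocity gives the probability that the corresponding feature bit is set. This is a toy sketch, not the paper's MOBP-based system; the fitness function below (rewarding a known "useful" subset) is a stand-in assumption for classifier accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES, N_PARTICLES, ITERS = 20, 10, 30
USEFUL = np.zeros(N_FEATURES, dtype=bool)
USEFUL[:5] = True  # pretend the first 5 features are informative

def fitness(mask):
    """Toy stand-in for recognition accuracy: reward informative
    features, lightly penalize the size of the selected subset."""
    m = mask.astype(bool)
    if not m.any():
        return -1.0
    return float((m & USEFUL).sum()) - 0.1 * float(m.sum())

# Binary PSO: sigmoid(velocity) is the probability a feature is selected.
vel = rng.normal(0.0, 1.0, (N_PARTICLES, N_FEATURES))
pos = (rng.random((N_PARTICLES, N_FEATURES)) < 0.5).astype(float)
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(ITERS):
    r1 = rng.random((N_PARTICLES, N_FEATURES))
    r2 = rng.random((N_PARTICLES, N_FEATURES))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))
    pos = (rng.random((N_PARTICLES, N_FEATURES)) < prob).astype(float)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better] = pos[better]
    pbest_fit[better] = fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

selected = np.flatnonzero(gbest)  # indices of the chosen features
```

In a real system the fitness would wrap the classifier (here, MOBP) and trade recognition accuracy against feature-count, which is exactly the quantity the paper compares between PSO and GA.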
Presentation on a new technology for letter recognition that works on both soft and hard copies and supports all soft-copy formats. Optical character recognition based on recognizing letters in all existing languages.
CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE... - Editor IJMTER
Digital images are used in magazines, blogs, websites, television, and more. Digital image processing techniques serve feature selection, pattern extraction, classification, and retrieval requirements. Color, texture, and shape features are used in image processing, which also supports the computer graphics and computer vision domains. Scene text recognition is performed with two schemes: character recognizer and binary character classifier models. A character recognizer is trained to predict the category of a character in an image patch. A binary character classifier is trained for each character class to predict the presence of that category in an image patch. Scene text recognition is performed on detected text regions. A pixel-based layout analysis method is adopted to extract text regions and segment text characters in images. Text character segmentation is carried out using color uniformity and horizontal alignment of text characters. A discriminative character descriptor is designed by combining several feature detectors and descriptors. Histograms of Oriented Gradients (HOG) are used to build the character descriptors. Character structure is modeled for each character class by designing stroke configuration maps. The scene text extraction scheme also supports smart mobile devices. Text recognition methods are used in text understanding and text retrieval applications. The text recognition scheme is enhanced with a content-based image retrieval process. The system is integrated with additional representative and discriminative features for the text structure modeling process, and is enhanced to perform text- and word-level recognition using lexicon analysis. The training process includes a word database update task.
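The HOG descriptor mentioned above can be sketched for a single cell. This is not the paper's implementation (real systems typically use OpenCV's HOGDescriptor or scikit-image's hog); it only shows the core idea of voting gradient magnitudes into orientation bins:

```python
import math

def hog_cell(patch, bins=8):
    """One-cell Histogram of Oriented Gradients: each interior pixel of a
    grayscale patch votes its gradient magnitude into an orientation bin
    (unsigned orientation, i.e. angles folded into [0, pi))."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.atan2(gy, gx) % math.pi
            hist[min(int(angle / math.pi * bins), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]                 # L1-normalised
```

A full descriptor concatenates such histograms over a grid of cells, usually with block-level normalization on top.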
OCR (Optical Character Recognition) is a technology that recognizes text within digital images. It examines text in documents and converts characters into machine-readable code. OCR is commonly used to convert printed paper documents into editable digital text files. The basic process involves preprocessing the image to clean it up, isolating individual characters, and using character recognition libraries or more advanced techniques to identify each character and assign it the corresponding text. OCR is needed to convert scanned documents into text-searchable files that can be edited, searched, and managed more easily within document systems.
1. The document discusses optical character recognition (OCR), including its applications, how it works, and the platform used.
2. OCR involves using software to convert scanned images of text into machine-encoded text by recognizing glyphs and classifying characters through feature extraction and neural networks.
3. The authors explore using OCR for tasks like digitization and security monitoring to reduce human error, and discuss future enhancements like recognizing multiple characters and improving accuracy.
Artificial neural networks are commonly used in optical character recognition algorithms due to their flexibility, ability to learn, and power. ANNs work by taking an input, running it through a network of neurons arranged in layers, and producing an output. They can be trained to recognize patterns through a learning stage where they are given many examples of input and output pairs. Once trained, ANNs can accurately evaluate new inputs and recognize characters at a 98% rate with only 5% error. Common types of ANNs include feedforward, recurrent, radial basis function, and self-organizing networks.
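The layer-by-layer evaluation the summary describes (input propagated through a network of neurons to an output) can be sketched as follows; the weights here are placeholders, since a real recognizer would learn them during the training stage:

```python
import math

def sigmoid(z):
    """Standard logistic activation."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    """Propagate input vector x through fully connected layers; each
    layer is (weight_matrix, bias_vector), one row of weights per
    neuron. Returns the final activations."""
    a = x
    for weights, biases in layers:
        a = [sigmoid(sum(w * v for w, v in zip(row, a)) + b)
             for row, b in zip(weights, biases)]
    return a
```

Training adjusts the weight matrices so that the output activations match the target pattern for each example; recognition is then just one forward pass.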
Comparative Analysis of PSO and GA in Geom-Statistical Character Features Sel... - IJERA Editor
Online handwriting recognition attracts special interest today due to the increased usage of handheld devices, and it remains a difficult problem because of the high variability and ambiguity in character shapes written by individuals. One major problem encountered by researchers developing character recognition systems is the selection of efficient (optimal) features. In this paper, a feature extraction technique for an online character recognition system was developed using a hybrid of geometrical and statistical (Geom-statistical) features. Through the integration of geometrical and statistical features, insights were gained into new character properties, since these types of features are considered complementary. Several optimization techniques have been used in the literature for feature selection in character recognition, such as Ant Colony Optimization (ACO), Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Simulated Annealing, but a comparative analysis of GA and PSO in online character recognition had not been carried out. In this paper, the performance of GA and PSO in optimizing the Geom-statistical features for online character recognition was compared, using Modified Optical Backpropagation (MOBP) as the classifier. The system was simulated in Matlab 7.10a. The results show that PSO is a well-accepted optimization algorithm for selecting optimal features, as it outperforms GA in terms of number of features selected, training time, and recognition accuracy.
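A minimal binary PSO of the kind compared above can be sketched as below. The fitness function, particle count, and inertia/acceleration constants (w, c1, c2) are illustrative defaults, not the paper's settings; a real feature-selection fitness would wrap classifier accuracy rather than a toy score:

```python
import math
import random

def binary_pso(fitness, n_features, n_particles=10, iters=30, seed=0):
    """Minimal binary PSO for feature selection: each particle is a 0/1
    mask over features; a sigmoid of the velocity gives the probability
    of each bit being set. `fitness` scores a mask (higher is better)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=pbest_fit.__getitem__)
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rng.random() < prob else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest
```

A GA would instead evolve the same 0/1 masks through selection, crossover, and mutation, which is what the comparison in the paper contrasts.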
This document summarizes and reviews various techniques for optical character recognition (OCR) of English text, including matrix matching, fuzzy logic, feature extraction, structural analysis, and neural networks. It discusses the structure and stages of OCR systems, including image preprocessing, segmentation, feature extraction, classification, and output. Challenges for OCR systems include degraded documents like old books, photocopies, and newspapers. The document reviews several related works on OCR and discusses techniques to improve recognition of degraded text.
This document summarizes a research paper that evaluated different machine learning algorithms for offline handwritten digit recognition. The researchers tested Multilayer Perceptron, Support Vector Machine, Naive Bayes, Bayes Net, Random Forest, J48 and Random Tree classifiers using the WEKA machine learning toolkit. The Multilayer Perceptron achieved the highest accuracy of 90.37% for recognizing handwritten digits. The paper aims to develop effective approaches for handwritten digit recognition using machine learning techniques.
Handwritten Digit Recognition Using CNN - IRJET Journal
This document discusses a research project on handwritten digit recognition using convolutional neural networks. The project aims to build a model that can recognize handwritten digits in images using the MNIST dataset to train a convolutional neural network. Specifically, it uses Keras and TensorFlow to create a 7-layer LeNet-5 CNN model on 70,000 MNIST images. The model is trained using stochastic gradient descent and backpropagation. Once trained, the model can be used to predict handwritten digits in new images. The document provides background on handwritten digit recognition and CNNs, describes the dataset and tools used, and outlines the methodology for building the recognition model.
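The convolutional layers of a LeNet-style model like the one described can be illustrated with a dependency-free sketch of their three basic operations; in practice Keras/TensorFlow layers such as Conv2D and MaxPooling2D do this with learned kernels, so the code below is only the underlying arithmetic:

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as deep-learning
    libraries implement it) over 2-D lists."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(ow)]
            for y in range(oh)]

def relu(fmap):
    """Element-wise rectified linear activation."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling with a size x size window."""
    return [[max(fmap[y + i][x + j] for i in range(size) for j in range(size))
             for x in range(0, len(fmap[0]) - size + 1, size)]
            for y in range(0, len(fmap) - size + 1, size)]
```

Stacking conv → relu → pool a few times and flattening into dense layers gives the LeNet-5 shape the project trains on MNIST.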
1. The document discusses an optical character recognition (OCR) system that uses a neural network to recognize handwritten English characters and numerals.
2. It describes the background of OCR, including offline vs online recognition. The key steps of OCR systems are discussed as image acquisition, preprocessing, feature extraction, training and recognition, and post processing.
3. Neural networks are described as being useful for pattern recognition problems like character classification. The proposed system uses a grid infrastructure to allow multi-lingual OCR and more efficient document processing compared to other methods.
IRJET- Intelligent Character Recognition of Handwritten Characters - IRJET Journal
This document summarizes research on intelligent character recognition of handwritten characters using neural networks. It discusses how neural networks can be trained on feature vectors extracted from images to accurately recognize (up to 95%) handwritten alphanumeric characters. The proposed system segments images into characters, extracts features like intersections and endpoints, trains a neural network on feature vectors, and then uses the trained network to recognize new characters. It achieved high accuracy after training on a large dataset of 400 samples. The system automatically transfers recognized text to an Excel sheet.
Character Recognition using Data Mining Technique (Artificial Neural Network) - Sudipto Krishna Dutta
This Presentation is on Character Recognition using Artificial Neural networks,
Presented to
Farhana Afrin Duty
Assistant Professor
Department of Statistics
Jahangirnagar University
Savar, Dhaka-1342, Bangladesh
Because of rapid technological breakthroughs, including multimedia and cell phones, Telugu character recognition (TCR) has recently become a popular study area. Although many studies have focused on offline TCR models, automated and intelligent online TCR models still need to be constructed. The construction and validation of a Telugu character dataset using an Inception- and ResNet-based model are presented. The dataset's collection of 645 letters includes 18 Achus, 38 Hallus, 35 Othulu, 34×16 Guninthamulu, and 10 Ankelu. The proposed technique aims to efficiently recognize and identify distinctive Telugu characters online. The model's main pre-processing steps include normalization, smoothing, and interpolation. Improved recognition performance is attained by using stochastic gradient descent (SGD) to optimize the model's hyperparameters.
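Stochastic gradient descent, which the summary names as the optimizer, can be illustrated on the simplest possible model: fitting y ≈ w·x from random single-sample updates. The learning rate and toy data are assumptions for illustration; the actual model optimizes far more parameters the same way:

```python
import random

def sgd_fit(samples, lr=0.05, epochs=100, seed=0):
    """Fit y ~ w * x by stochastic gradient descent on squared error,
    one randomly drawn (x, y) sample per update."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        x, y = rng.choice(samples)
        grad = 2.0 * (w * x - y) * x   # derivative of (w*x - y)**2 w.r.t. w
        w -= lr * grad
    return w
```

Each update nudges the parameter against the gradient of the loss on one sample, which is why SGD scales to large training sets.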
Online Hand Written Character Recognition - IOSR Journals
This document discusses online handwritten character recognition. It begins by describing the differences between online and offline recognition systems. Online systems capture stroke order and timing information while writing, while offline systems analyze static images. The document then discusses challenges in recognition like variability between writers. It presents several previous works in online handwriting recognition. The document proposes a method for online recognition that uses shape, pixel density, and stroke movement template matching to identify characters. It describes preprocessing input and generating training templates to match against. Overall, the document outlines challenges in online handwriting recognition and proposes a template matching approach to address these challenges.
A Deep Learning Approach to Recognize Cursive Handwriting - IRJET Journal
This document presents a deep learning approach for recognizing cursive handwriting. It discusses how cursive handwriting recognition is challenging due to variations in individual writing styles. The proposed system uses a convolutional neural network (CNN) for feature extraction and classification of handwritten characters. It takes input images of handwritten text, performs preprocessing like resizing and segmentation, extracts features using CNN, and classifies characters for recognition. The system is trained on datasets containing cursive text written by different people. It aims to accurately recognize cursive text and convert it to digital text formats like documents. Experimental results show the system achieves high recognition accuracy compared to conventional approaches.
The document describes a system for offline transcription of handwritten text using artificial intelligence. The system takes scanned images of handwritten forms as input. It uses image processing techniques like thresholding and morphological operations to preprocess the images and localize the boxes containing handwritten text. A recurrent neural network model with Tesseract OCR is used for handwritten character recognition. The recognized text is post-processed and stored in an Excel sheet. The system was able to recognize over 80% of characters correctly on test data. Future work may include expanding it to recognize additional languages and improving accuracy for low-quality images.
Lei Zheng has over 15 years of experience in areas such as machine learning, data mining, and software development. He currently works as a Senior Software Engineer at Yahoo, where he develops algorithms for spam filtering and detection of abusive behavior. Previously he held research positions at the University of Pittsburgh and JustSystems Evans Research, where he implemented algorithms and systems for information retrieval, natural language processing, and data mining.
This document summarizes research on recognizing online handwritten Sanskrit characters using support vector classification. It discusses using Freeman chain code to extract features from character images and represent boundary pixels. A randomized algorithm generates the chain codes. Features vectors are then built and used to train a support vector machine classifier. Segmentation is also used to evaluate possible segmentation zones. The goal is to develop an accurate system for recognizing Sanskrit characters, which is challenging due to complex character shapes and styles. Previous work on character recognition is discussed, focusing on Indian scripts like Devanagari and techniques like feature extraction and classification.
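The Freeman chain code used for feature extraction in this work encodes a character boundary as directions between successive 8-connected pixels. A sketch, using the common convention 0 = east with codes numbered counter-clockwise (y taken as increasing upward; image coordinates may flip this):

```python
# Freeman 8-direction codes, 0 = east, numbered counter-clockwise.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Chain code of a boundary path given as successive (x, y) pixels;
    consecutive points must be 8-connected."""
    return [DIRECTIONS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
```

The resulting code sequence (or a histogram over it) becomes the feature vector handed to the SVM classifier.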
The document is a project proposal for Bangla handwritten digit recognition using deep learning. It outlines collecting a dataset of Bangla handwritten digits with various variations, preprocessing the images, using a convolutional neural network model for feature extraction and classification, training the model on the dataset, evaluating the trained model on a test set, and developing a user interface to demonstrate recognition of input digits. The overall goal is to develop an accurate system for recognizing Bangla handwritten digits with applications in fields such as banking, mail, and document digitization.
Implementation and Performance Evaluation of Neural Network for English Alpha... - ijtsrd
One of the most classical applications of the artificial neural network is the character recognition system. This system is the base for many different types of applications in various fields, many of which are used in daily life. Cost-effective and less time-consuming, it is employed as the basis of operations by businesses, post offices, banks, security systems, and even the field of robotics. For character recognition, there are many successful algorithms for training neural networks. Backpropagation (BP) is the most popular algorithm for supervised training of multilayer neural networks. In this thesis, the BP algorithm is implemented to train the multilayer neural network employed in a character recognition system. The neural network architecture used in this implementation is a fully connected three-layer network. The network can be trained on 16 characters, since a 4-element output vector is used for the output units. This thesis also evaluates the performance of the BP algorithm with various learning rates and mean square errors. The MATLAB programming language is used for implementation. Myat Thida Tun, "Implementation and Performance Evaluation of Neural Network for English Alphabet Recognition System", Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-5, August 2018, URL: http://www.ijtsrd.com/papers/ijtsrd15863.pdf http://www.ijtsrd.com/engineering/information-technology/15863/implementation-and-performance-evaluation-of-neural-network-for-english-alphabet-recognition-system/myat-thida-tun
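The 4-element output vector mentioned above supports 16 character classes because each class can be given a distinct 4-bit binary target. A sketch of that encoding and the corresponding read-out (the 0.5 threshold is the usual convention, assumed here rather than taken from the thesis):

```python
def encode_target(index, bits=4):
    """Binary target vector for class `index`: with 4 output units,
    2**4 = 16 character classes get distinct targets."""
    return [(index >> b) & 1 for b in reversed(range(bits))]

def decode_output(activations):
    """Threshold each output unit at 0.5 and read back the class index."""
    idx = 0
    for a in activations:
        idx = (idx << 1) | (1 if a >= 0.5 else 0)
    return idx
```

During BP training the network's outputs are driven toward these 0/1 targets; at recognition time the thresholded outputs are decoded back to a class index.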
Cursive Handwriting Recognition System using Feature Extraction and Artif... - IRJET Journal
The document describes a system for recognizing cursive handwriting using feature extraction and an artificial neural network. It involves preprocessing scanned images, segmenting them into individual characters, extracting features from the characters using a diagonal scanning method, and classifying the characters using a neural network. This approach provides higher recognition accuracy compared to conventional methods. The key steps are preprocessing images, segmenting into characters, extracting 54 features from each character by moving along diagonals in a grid, and training a neural network classifier on the extracted features.
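The diagonal scanning described above can be sketched for a single square zone: pixel values are averaged along each diagonal, and those sub-features are averaged into one value per zone. The zone size and normalization details below are assumptions for illustration, not the paper's exact parameters:

```python
def diagonal_feature(zone):
    """One feature for an n x n zone: average each of the 2n-1 diagonals,
    then average those sub-features (e.g. 19 diagonals for a 10x10 zone)."""
    n = len(zone)
    diagonals = [[] for _ in range(2 * n - 1)]
    for y in range(n):
        for x in range(n):
            diagonals[x + y].append(zone[y][x])
    subfeatures = [sum(d) / len(d) for d in diagonals]
    return sum(subfeatures) / len(subfeatures)
```

Applying this to each zone of the character grid yields the per-character feature vector (54 values in the paper's setup) that trains the neural network classifier.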
A Novel Framework For Numerical Character Recognition With Zoning Distance Fe... - IJERD Editor
Advancements in computer technology have made every organization implement automatic processing systems for its activities. One example is the recognition of handwritten characters, which has always been a challenging task in image processing and pattern recognition. In this paper we propose zone-based features for recognition of handwritten characters. In this zoning approach, a digit image is divided into 8x8 zones and a centre pixel is computed for each zone. This procedure is repeated sequentially for the entire set of zones. Finally, features are extracted for classification and recognition.
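The zoning idea (divide the digit image into 8x8 zones and derive one feature per zone) can be sketched with a per-zone ink density. Note this is a simplification: the paper above computes a centre pixel per zone, while the sketch uses the related black-pixel fraction described elsewhere in this collection:

```python
def zoning_features(binary, rows=8, cols=8):
    """Split a binary character image into rows x cols zones and return
    the ink-pixel fraction of each zone, scanned row by row."""
    h, w = len(binary), len(binary[0])
    zh, zw = h // rows, w // cols
    features = []
    for zy in range(rows):
        for zx in range(cols):
            zone = [binary[y][x]
                    for y in range(zy * zh, (zy + 1) * zh)
                    for x in range(zx * zw, (zx + 1) * zw)]
            features.append(sum(zone) / len(zone))
    return features
```

Either variant yields a fixed-length vector (64 values for an 8x8 grid) that is then fed to the classifier.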
An offline signature verification using pixels intensity levels - Salam Shah
Offline signature recognition has great importance in our day-to-day activities. Researchers are trying to use signatures as biometric identification in areas like banks and security systems, and for other identification purposes. Fingerprint, iris, thumb-impression, and face-detection based biometrics are successfully used to identify individuals because of their static nature. However, people's signatures show variability that makes it difficult to recognize the original signatures correctly and to use them as biometrics. Handwritten signatures are important in banks for cheque and credit card processing and for legal and financial transactions, and signatures are a main target of fraud. To deal with complex signatures, there should be a robust signature verification method in places such as banks that can correctly classify signatures as genuine or forged to avoid financial fraud. This paper presents a pixel intensity level based offline signature verification model for the correct classification of signatures. To achieve this target, three statistical classifiers are used: Decision Tree (J48), probability-based Naïve Bayes (NB tree), and Euclidean distance based k-Nearest Neighbor (IBk).
To compare the accuracy rates of offline and online signatures, the three classifiers were applied to an online signature database, achieving accuracy rates of 99.90% with decision tree (J48), 99.82% with Naïve Bayes Tree, and 98.11% with k-Nearest Neighbor (with 10-fold cross-validation). The results for offline signatures were 64.97% with decision tree (J48), 76.16% with Naïve Bayes Tree, and 91.91% with k-Nearest Neighbor (IBk) (without forgeries). The accuracy rate dropped with the inclusion of forgery signatures: 55.63% with decision tree (J48), 67.02% with Naïve Bayes Tree, and 88.12% with k-Nearest Neighbor (IBk).
Handwriting recognition can basically be divided into two parts: offline handwriting recognition and online handwriting recognition. An online handwriting recognition system can give highly accurate output under predefined constraints, which relate to vocabulary size, writer dependency, printed writing style, and so on. The hidden Markov model increases the success rate of online recognition systems. Online handwriting recognition also provides time information that is not present in offline systems. A Markov process is a random process whose future behavior relies only on its present state and does not depend on past states; that is, it satisfies the Markov condition. A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with hidden states; HMMs can be viewed as extensions of discrete-state Markov processes. Online handwriting recognition technology can drastically improve human-machine interaction, since writing by hand with a digital pen or similar equipment is more natural than using a keyboard. HMMs provide effective mathematical models for characterizing the variance in both time and signal space, as originally demonstrated for speech signals.
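The HMM machinery described above can be made concrete with the forward algorithm, which scores an observation sequence under a discrete hidden Markov model; in recognition, the character model giving the highest score wins. The tiny models in the test are purely illustrative:

```python
def hmm_forward(observations, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence
    under a discrete HMM given start, transition, and emission
    probabilities (each keyed by state)."""
    # Initialise with the first observation.
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    # Inductively fold in each later observation.
    for obs in observations[1:]:
        alpha = {s: emit_p[s][obs] * sum(alpha[p] * trans_p[p][s]
                                         for p in states)
                 for s in states}
    return sum(alpha.values())
```

For handwriting, the observations would be quantized pen-stroke features rather than symbols, but the recursion is identical.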
Similar to 11.development of a writer independent online handwritten character recognition system using modified hybrid (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p... - Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2c e-commerce websites - Alexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banks - Alexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized d - Alexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistance - Alexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifham - Alexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibia - Alexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school children - Alexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banks - Alexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjab - Alexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market... - Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incremental - Alexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
11.development of a writer independent online handwritten character recognition system using modified hybrid
Computer Engineering and Intelligent Systems www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol 3, No.4, 2012
Development of a Writer-Independent Online Handwritten
Character Recognition System Using Modified Hybrid
Neural Network Model
Fenwa O. D.*
Department of Computer Science and Engineering,
Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
*E-mail of the corresponding author: odfenwa@lautech.edu.ng
Omidiora E. O.
Department of Computer Science and Engineering,
Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
E-mail: omidiorasayo@yahoo.co.uk
Fakolujo O. A.
Department of Computer Science and Engineering,
Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
E-mail: ola@fakolujo.com
Ganiyu R. A.
Department of Computer Science and Engineering,
Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
E-mail: ganiyurafiu@yahoo.com
Abstract
Recognition of handwritten characters is a difficult problem because of the high variability and ambiguity of the character shapes written by individuals. Problems encountered by researchers include the selection of an efficient feature extraction method, long network training times, long recognition times and low recognition accuracy. Many feature extraction techniques have been proposed in the literature to improve the overall recognition rate, although most of them use only one property of the handwritten character. This research develops a feature extraction technique that combines three characteristics of the handwritten character (stroke information, contour pixels and zoning) to create a global feature vector, alleviating the poor feature extraction of existing online character recognition systems. The work also addresses the limitation of the standard backpropagation algorithm in its 'error adjustment' step. A hybrid of a modified counterpropagation network and a modified optical backpropagation network was developed to enhance the performance of the proposed character recognition system. Experiments were
performed with 6,200 handwritten character samples (English uppercase, lowercase and digits) collected from 50 subjects using a G-Pen 450 digitizer, and the system was tested with 100 character samples written by people who did not participate in the initial data acquisition. The performance of the system was evaluated over different learning rates, image sizes and database sizes. The developed system achieved a 99% recognition rate with no recognition failures and an average recognition time of 2 milliseconds.
Keywords: Character recognition, Feature extraction, Neural Network, Counterpropagation, Optical
Backpropagation, Learning rate.
1. Introduction
The use of neural networks for handwriting recognition is a field attracting considerable attention. As computing technology advances, the benefits of using Artificial Neural Networks (ANNs) for handwriting recognition become more obvious, and new ANN approaches geared toward this task are constantly being studied. Character recognition is the process of applying pattern-matching methods to character shapes that have been read into a computer, to determine which alphanumeric characters, punctuation marks and symbols the shapes represent. The classes of recognition systems usually distinguished are online systems, for which handwriting data are captured during the writing process (making the ordering of the strokes available), and offline systems, for which recognition takes place on a static image captured once the writing process is over (Anoop and Anil, 2004; Liu et al., 2004; Mohamad and Zafar, 2004; Naser et al., 2009; Pradeep et al., 2011). Online methods have been shown to be superior to their offline counterparts in recognizing handwritten characters, owing to the temporal information available to the former (Pradeep et al., 2011).
Handwriting recognition systems can further be divided into two categories: writer-independent systems, which recognize a wide range of possible writing styles, and writer-dependent systems, which recognize writing styles only from specific users (Santosh and Nattee, 2009). Online handwriting recognition is of special interest today because of the increased use of handheld devices. Since incorporating a keyboard into such devices is difficult, alternatives are needed, and online input with a stylus is gaining popularity (Gupta et al., 2007). Recognizing handwritten characters in any language is difficult owing to the variability of writing styles, the mood of the individual, the multiple patterns that can represent a single character, cursive representation, and the number of disconnected and multi-stroke characters (Shanthi and Duraiswamy, 2007). Current technologies supporting pen-based input devices include the Digital Pen by Logitech, the Smart Pad by Pocket PC, Digital Tablets by Wacom and the Tablet PC by Compaq (Manuel and Joaquim, 2001). Although systems with handwriting recognition capability are already widely available in the market, further improvements can be made to their recognition performance.
The challenges posed by online handwritten character recognition systems are to increase recognition accuracy and to reduce recognition time (Rejean and Sargurl, 2000; Gupta et al., 2007). Various approaches have been used to develop character recognition systems; these include the template matching, statistical, structural, neural network and hybrid approaches. The hybrid approach (combining multiple classifiers) has recently become a very active area of research (Kittler and Roli, 2000; 2001). It has been demonstrated in a number of applications that using more than a single classifier in a recognition task can lead to a significant improvement in the system's overall performance. Hence, the hybrid approach seems promising for improving the recognition rate and accuracy of current handwriting recognition systems (Simon and Horst, 2004). However, the selection of a feature extraction method is probably the single most important factor in achieving high recognition performance in a character recognition system (Pradeep et al., 2011): no matter how sophisticated the classifiers and learning algorithms, poor feature extraction will always lead to poor system performance (Marc et al., 2001). Furthermore, Fenwa et al. (2012)
developed a feature extraction technique for online character recognition using a hybrid of geometrical and statistical features. Through the integration of geometrical and statistical features, insight was gained into new character properties, since these types of features are considered complementary.
2. Research Methodology
The proposed character recognition system was developed in five stages, shown in Figure 2.2: data acquisition; pre-processing; character processing, comprising feature extraction and character digitization; training and classification using the hybrid neural network model; and testing. Experiments were performed with 6,200 handwritten character samples (English uppercase, lowercase and digits) collected from 50 subjects using a G-Pen 450 digitizer, and the system was tested with 100 character samples written by people who did not participate in the initial data acquisition. The performance of the system was evaluated over different learning rates, image sizes and database sizes.
2.1 Data Acquisition
The data used in this work were collected using Digitizer tablet (G-Pen 450) shown in Figure 2.3. It has an
electric pen with sensing writing board. An interface was developed using C# to acquire data (character
information) such as stroke number, stroke pressure, etc from different subjects using the digitizer tablet. 26
Upper case (A-Z), 26 lower case (a-z) English alphabets and 10 digits (0-9) making a total number of 62
characters. 6,200 characters (62 x 2 x 50) were collected from 50 subjects as each individual was requested
to write each of the characters 2 times (this is done to allow the network learn various possible variations of
a single character and become adaptive in nature). This serves as the training data set which was the input
data that was fed into the neural network.
2.2 Data Preprocessing
Pre-processing is done prior to the application of feature extraction algorithms. Pre-processing aims to
produce clean character images that are easy for the character recognition systems to operate more
accurately. Feature extraction stage relies on the output of this process. The pre processing techniques used
in this research work is Grid Resizing.
2.3 Resizing Grid
From the interface that was provided, there wasn’t any degree of measurement to determine how small/big
the input character should be. Hence, the character written is resized to a matrix size of 5 by 7, 10 by 14
and 20 by 28 for all input characters. This is used to get the universe of discourse which is the shortest
matrix that fit the entire character skeleton. The universe of discourse is measured to easily get a uniform
matrix size in multiple of 5 by 7, 10 by 14 and 20 by 28 respectively.
Any character measured smaller than the required size is considerably resized to the multiple of 5 by 7, 10
by 14 and 20 by 28 conversely any character measured larger than the required size will also be resized to
the multiple of 5 by 7, 10 by 14 and 20 by 28. This implies that rows and columns of Zero’s are added or
subtracted from the resized image matrix to achieve the required multiple of 5 by 7, 10 by 14 and 20 by 28.
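As a rough sketch of the zero-padding part of this step (assuming a binary character matrix; the function name and the pad-only behaviour are illustrative, not taken from the paper):

```python
import numpy as np

def resize_to_grid(char_img, base=(5, 7)):
    """Pad a binary character matrix with rows and columns of zeros
    so that each dimension becomes the next multiple of the base
    grid size (e.g. 5 by 7 -> 5x7, 10x14, 20x28, ...)."""
    rows, cols = char_img.shape
    base_r, base_c = base
    # Round each dimension up to the next multiple of the base size.
    target_r = max(base_r, -(-rows // base_r) * base_r)
    target_c = max(base_c, -(-cols // base_c) * base_c)
    out = np.zeros((target_r, target_c), dtype=char_img.dtype)
    out[:rows, :cols] = char_img  # character kept in the top-left corner
    return out
```

Reducing an oversized character would follow the same pattern in reverse, cropping rows and columns of zeros around the character's bounding box.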
2.4 Feature Extraction Development
The goal of feature extraction is to extract a set of features, which maximizes the recognition rate with the
least amount of elements. Many feature extraction techniques have been proposed to improve overall
recognition rate; however, most of the techniques used only one property of the handwritten character. This
research focuses on a feature extraction technique that combined three characteristics (stroke information,
contour pixels and zoning) of the handwritten character to create a global feature vector. Hence, a hybrid
feature extraction algorithm was developed using Geometrical and Statistical features as shown in Figure
2.4. The integration of geometrical and statistical features was used to highlight different character properties, since these types of features are considered complementary.
2.4.1 The Developed Hybrid (Geom-Statistical) Feature Extraction Algorithm
The hybrid feature extraction model adopted in this work is shown in Figure 2.4. The stages of development of the proposed hybrid feature extraction algorithm are as follows:
Stage 1: Get the stroke information of the input characters from the digitizer (G-Pen 450). This includes:
(i) the pressure used in writing the strokes of the characters;
(ii) the number of strokes used in writing the characters;
(iii) the number of junctions and their locations in the written characters;
(iv) the horizontal projection count of the character.
Stage 2: Apply a contour tracing algorithm to trace out the contour of the characters.
Stage 3: Develop a modified hybrid zoning algorithm and run it on the contours of the characters.
Two zoning algorithms were proposed by Vanajah and Rajashekararadhya in 2008 for the recognition of numerals in four popular Indian scripts (Kannada, Telugu, Tamil and Malayalam): the Image Centroid and Zone-based (ICZ) distance metric feature extraction system (Vanajah and Rajashekararadhya, 2008a) and the Zone Centroid and Zone-based (ZCZ) distance metric feature extraction system (Rajashekararadhya and Vanajah, 2008b). The two algorithms were modified in terms of:
(i) the number of zones used (25 zones, as shown in Figure 2.1);
(ii) the measurement of the distances from both the image centroid and the zone centroid to the pixels present in each zone;
(iii) the area of application (uppercase (A-Z) and lowercase (a-z) English letters and digits (0-9)).
Few zones are adopted in this research, but emphasis is laid on measuring the pixel densities in each zone effectively. Because pixels pass through the zones at varied distances, five distances are measured for each zone at an angle of 20°.
Hybrid of Modified ICZ and Modified ZCZ Based Distance Metric Feature Extraction Algorithm
Input: pre-processed character image
Output: features for classification and recognition
Method begins
Step 1: Divide the input image into 25 equal zones.
Step 2: Compute the centroid of the input image.
Step 3: Compute the distance from the image centroid to each pixel present in the zone, measured at an angle of 20°.
Step 4: Repeat Step 3 for all pixels present in the zone.
Step 5: Compute the average of these distances.
Step 6: Compute the zone centroid.
Step 7: Compute the distance from the zone centroid to each pixel present in the zone, measured at an angle of 20°.
Step 8: Repeat Step 7 for all pixels present in the zone.
Step 9: Compute the average of these distances.
Step 10: Repeat Steps 3-9 for all the zones.
Step 11: Finally, 2 x 25 = 50 such features are obtained for classification and recognition.
Method ends
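The steps above can be sketched as follows, as a minimal illustration assuming a binary image and a 5 x 5 arrangement of the 25 zones; the 20° angular sampling is omitted, and all names are hypothetical:

```python
import numpy as np

def hybrid_zone_features(img, grid=5):
    """Sketch of the hybrid ICZ/ZCZ features: split the image into
    grid*grid (25) zones and, for each zone, record the average
    distance from (a) the image centroid and (b) the zone centroid
    to the foreground pixels in that zone -> 2 * 25 = 50 features."""
    rows, cols = img.shape
    ys, xs = np.nonzero(img)
    if len(ys) == 0:                       # blank image
        return [0.0] * (2 * grid * grid)
    img_cy, img_cx = ys.mean(), xs.mean()  # image centroid
    zh, zw = rows // grid, cols // grid    # zone height and width
    features = []
    for zr in range(grid):
        for zc in range(grid):
            zone = img[zr * zh:(zr + 1) * zh, zc * zw:(zc + 1) * zw]
            zys, zxs = np.nonzero(zone)
            if len(zys) == 0:              # empty zone contributes zeros
                features += [0.0, 0.0]
                continue
            # absolute coordinates of the zone's foreground pixels
            ays, axs = zys + zr * zh, zxs + zc * zw
            icz = np.hypot(ays - img_cy, axs - img_cx).mean()
            zcy, zcx = ays.mean(), axs.mean()   # zone centroid
            zcz = np.hypot(ays - zcy, axs - zcx).mean()
            features += [float(icz), float(zcz)]
    return features
```

For a 25 x 25 input this yields the 50-element feature vector described in Step 11.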
Stage 4: Feed the extracted character features into the digitization stage, which converts them all into digital form.
2.4.2 Development of Hybrid Neural Network Model
In this research work, a hybrid approach with a serial combination scheme was adopted. A hybrid of a modified counterpropagation network and a modified optical backpropagation network was developed, with the three-layer architecture illustrated in Figure 2.5. The output of the ith layer is given by Equation (2.1), except for the output layer, which uses the softmax function:

a_i = logsig(W_i a_{i-1} + b_i)                  (2.1)

where i = 2, 3 and a_0 = P, a_1 = E, with E the Euclidean distance between the weight vector and the input vector;
W_i = weight matrix of the ith layer;
a_i = output of the ith layer;
b_i = bias vector of the ith layer.
The input vector 'P' is represented by the solid vertical bar at the left. Its dimensions are 35 x 1, indicating that the input is a single vector of 35 elements (i.e. the image size). These inputs go to weight matrix 'W1', which has 86 rows (the 86 neurons in the first hidden layer) and 35 columns. A constant '1' enters each neuron as input and is multiplied by a bias 'b1'. The net input to the transfer function (Euclidean distance) in the Kohonen (first hidden) layer is 'n1', given as the Euclidean distance between the weight vector W1 and the input vector P. The neurons' output 'a1' serves as input to the second hidden layer through weight matrix 'W2', which has 86 rows and 86 columns (the 86 neurons in the second hidden layer). A constant '1' enters each neuron as input and is multiplied by a bias 'b2'.
The net input to the transfer function (log-sigmoid) in the second hidden layer is 'n2', the sum of the bias 'b2' and the product 'W2a1'. The output 'a2' is a single vector of 86 elements and serves as input to the output layer through weight matrix 'W3', which has 62 rows (26 uppercase + 26 lowercase + 10 digits) and 86 columns (the 86 neurons of the second hidden layer). The net input to the transfer function (maxsoft) in the output layer is 'n3', the sum of the bias 'b3' and the product 'W3a2'. The neurons' output 'a3' is a single vector of 62 elements and serves as the final output of the neural network.
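The forward pass just described can be summarized in a short sketch. This is illustrative Python, not the authors' C# implementation. Reading 'maxsoft' as the usual softmax is an assumption, and the shapes follow the layer sizes (35 inputs, 86 and 86 hidden units, 62 outputs), so W2 is taken as 86 × 86 and W3 as 62 × 86 where the text's stated dimensions are ambiguous.

```python
import numpy as np

def logsig(x):
    """Log-sigmoid transfer function used by the second hidden layer."""
    return 1.0 / (1.0 + np.exp(-x))

def maxsoft(x):
    """'Maxsoft' read as the standard softmax (an assumption)."""
    e = np.exp(x - x.max())                  # shift for numerical stability
    return e / e.sum()

def forward(P, W1, W2, b2, W3, b3):
    """Forward pass of the three-layer hybrid network (Equation 2.1).

    P : 35-element input vector; W1 : 86 x 35 Kohonen weights;
    W2 : 86 x 86; W3 : 62 x 86. a1 is the Euclidean distance between
    each Kohonen weight vector and the input, as described in the text.
    """
    a1 = np.linalg.norm(W1 - P, axis=1)      # Kohonen layer: n1 = ||w_i - P||
    a2 = logsig(W2 @ a1 + b2)                # second hidden layer
    a3 = maxsoft(W3 @ a2 + b3)               # output layer: 62 class scores
    return a1, a2, a3
```

The 62-element output `a3` sums to one, and the predicted character is the index of its largest element.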
This research work adopted the modified Counterpropagation network (CPN) developed by Jude, Vijila,
and Anitha (2010). The training algorithm involves the following two phases:
(i) Weight adjustment between the input layer and the hidden layer (Kohonen layer)
The weight adjustment procedure for the hidden-layer weights is the same as that of the conventional CPN: it follows the unsupervised methodology to obtain the stabilized weights. After convergence, the weights between the hidden layer and the output layer are calculated.
(ii) Weight adjustment between the hidden layer and its output layer
The weight adjustment procedure employed in this work differs significantly from the conventional CPN: the weights are calculated in the reverse direction without any iterative procedure. Normally, weights are calculated by minimizing the error; here, a minimum error value is specified initially and the weights are estimated from that error value. Thus the weight values are estimated without any training methodology. This technique accounts for a higher convergence rate, since one set of weights is estimated directly. It is this output that served as the input to the modified Optical Backpropagation algorithm.
2.4.2.1 Modified Optical Backpropagation Neural Network
In the standard backpropagation, the error at a single output unit is defined as:

δopk = (Ypk − Opk) · fo'k(netopk)    (2.2)

where the subscript 'p' refers to the pth training vector and 'k' to the kth output unit, Ypk is the desired output value and Opk is the actual output of the kth unit; δopk then propagates backward to update the output-layer and hidden-layer weights. In the Optical Backpropagation (OBP), the error at a single output unit is adjusted according to Otair and Salameh (2005) as:

New δopk = (1 + e^((Ypk − Opk)²)) · fo'k(netopk),  if (Ypk − Opk) ≥ 0    (2.3a)

New δopk = −(1 + e^((Ypk − Opk)²)) · fo'k(netopk),  if (Ypk − Opk) < 0    (2.3b)
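The OBP adjustment of Equations (2.3a) and (2.3b) can be computed in a few lines. This is an illustrative Python sketch (the paper's code is in C#): the magnitude of the standard delta is amplified by the factor 1 + e^((Y−O)²), and the sign follows (Y − O).

```python
import numpy as np

def obp_delta(Y, O, fprime):
    """Output-unit error of Optical Backpropagation (Equations 2.3a/2.3b).

    Y, O : desired and actual outputs for one pattern; fprime : the
    derivative f'(net) at each output unit. Compare with the standard
    delta (Y - O) * fprime of Equation (2.2).
    """
    mag = (1.0 + np.exp((Y - O) ** 2)) * fprime  # always > 1: amplified error
    return np.where(Y - O >= 0, mag, -mag)       # sign follows (Y - O)
```

Because the amplification factor exceeds 1 whenever fprime is positive, the OBP delta is always larger in magnitude than the standard backpropagation delta, which is what speeds convergence.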
The error function defined in Optical Backpropagation (Otair and Salameh, 2005) earlier is proportional to
the square of the Euclidean distance between the desired output and the actual output of the network for a
particular input pattern. As an alternative, other error functions whose derivatives exist and can be
calculated at the output layer can replace the traditional square error criterion (Haykin, 2003). In this
research work, an error of the third order (cubic error) was adopted to replace the traditional square-error criterion used in Optical Backpropagation. The cubic error term is given as:

δopk = −3(Ypk − Opk)² · fo'k(netopk)    (2.4)
The cubic error in Equation (2.4) was manipulated mathematically in order to maximize the error of each
output unit which will be transmitted backward from the output layer to each unit in the intermediate layer.
These are shown in Equations (2.5a) and (2.5b) below:

Modified δopk = 3(1 + e^t)² · fo'k(netopk),  if (Ypk − Opk) ≥ 0    (2.5a)

Modified δopk = −3(1 + e^t)² · fo'k(netopk),  if (Ypk − Opk) < 0    (2.5b)

where Ypk = target or desired output, Opk = network output, and t = (Ypk − Opk)².
However, one of the ways to reduce the training time is through the use of momentum, as it enhances the
stability of the training process. The momentum is used to keep the training process going in the same
general direction (Haykin, 2003). In the modified Optical Backpropagation network, momentum was
introduced. Hence, the weight update for the output unit is:

Wokj(t+1) = Wokj(t) + µ∆Wokj(t) + η · Modified δopk · ipj    (2.6)

where µ is the momentum coefficient (typically about 0.9), ∆Wokj(t) is the previous weight change, and η is the learning rate.
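Equations (2.5a), (2.5b) and (2.6) can be sketched together. This is illustrative Python, not the authors' C# code; reading the momentum term of Equation (2.6) as µ times the previous weight change, the standard formulation, is an assumption.

```python
import numpy as np

def modified_delta(Y, O, fprime):
    """Cubic-error output delta (Equations 2.5a/2.5b), with t = (Y - O)^2."""
    t = (Y - O) ** 2
    mag = 3.0 * (1.0 + np.exp(t)) ** 2 * fprime
    return np.where(Y - O >= 0, mag, -mag)       # sign follows (Y - O)

def update_output_weights(W, dW_prev, delta, i_p, eta=0.5, mu=0.9):
    """Output-layer weight update with momentum (Equation 2.6).

    W : current weights (outputs x hidden); dW_prev : previous change;
    delta : modified output deltas; i_p : hidden-unit outputs.
    Returns the updated weights and the change, to be reused as momentum.
    """
    dW = eta * np.outer(delta, i_p) + mu * dW_prev
    return W + dW, dW
```

The returned `dW` is fed back in as `dW_prev` on the next pattern, which is how the momentum term keeps the updates moving in the same general direction.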
2.4.2.2 The Modified Optical Backpropagation Algorithm
Modifications of the algorithm are in terms of:
(i) Error signal function
(ii) Application area
With the introduction of Cubic error function and Momentum, the modified Optical Backpropagation is
given as:
1. Apply the input example to the input units.
2. Calculate the net-input values to the hidden-layer units.
3. Calculate the outputs from the hidden layer.
4. Calculate the net-input values to the output-layer units.
5. Calculate the outputs from the output units.
6. Calculate the error term for the output units, using Equations (2.5a) and (2.5b).
7. Calculate the error term for the hidden units by applying Modified δopk:

Modified δhpj = fh'j(nethpj) · Σk (Modified δopk · Wokj)    (2.7)

8. Update the weights on the output layer:

Wokj(t+1) = Wokj(t) + µ∆Wokj(t) + η · Modified δopk · ipj    (2.8)

9. Update the weights on the hidden layer:

Whji(t+1) = Whji(t) + η · Modified δhpj · Xi    (2.9)

Repeat Steps 1 to 9 until the error (Ypk − Opk) is acceptably small for each training vector pair. As in OBP, the proposed algorithm is stopped when the cubes of the differences between the actual and target values, summed over all units and all patterns, are acceptably small.
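The nine steps above amount to one forward-backward pass per pattern. The following Python sketch (the paper's code is in C#) puts them together; biases are omitted for brevity, and the momentum term is read as µ times the previous weight change, which is an assumption.

```python
import numpy as np

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, Wh, Wo, dWo_prev, eta=0.5, mu=0.9):
    """One pass of Steps 1-9 of the modified OBP algorithm (a sketch).

    x : input pattern; Wh : input-to-hidden weights; Wo : hidden-to-output
    weights; dWo_prev : previous output-weight change (for momentum).
    """
    # Steps 1-5: forward pass
    net_h = Wh @ x
    a_h = logsig(net_h)
    net_o = Wo @ a_h
    a_o = logsig(net_o)
    # Step 6: modified output delta (Equations 2.5a/2.5b)
    t = (target - a_o) ** 2
    mag = 3.0 * (1.0 + np.exp(t)) ** 2 * a_o * (1 - a_o)
    d_o = np.where(target - a_o >= 0, mag, -mag)
    # Step 7: hidden delta (Equation 2.7)
    d_h = a_h * (1 - a_h) * (Wo.T @ d_o)
    # Steps 8-9: weight updates (Equations 2.8, 2.9)
    dWo = eta * np.outer(d_o, a_h) + mu * dWo_prev
    Wo = Wo + dWo
    Wh = Wh + eta * np.outer(d_h, x)
    return Wh, Wo, dWo, a_o
```

Calling `train_step` repeatedly over all training pairs, until the summed cubic error is acceptably small, is the outer loop described in the text.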
2.4.3 The Hybrid Neural Network Algorithm
This research work employed a hybrid of modified Counterpropagation and modified Optical Backpropagation neural networks for the training and classification of the input pattern. The training algorithm involves the following two stages:
Stage A: Performs the training of the weights from the input nodes to the Kohonen hidden node.
Step 1: Weight adjustment between the input layer and the hidden layer
The weight adjustment procedure for the hidden-layer weights is the same as that of the conventional CPN: it follows the unsupervised methodology to obtain the stabilized weights. This process is repeated for a suitable number of iterations until a stabilized set of weights is obtained. After convergence, the weights between the Kohonen hidden layer and the output layer are calculated.
Step 2: Weight adjustment between the hidden layer and the output layer
The weight adjustment procedure employed in this work differs significantly from the conventional CPN: the weights are calculated in the reverse direction without any iterative procedure. Normally, weights are calculated by minimizing the error; here, a minimum error value is specified initially and the weights are estimated from it. The detailed steps of the modified algorithm are given below.
Step 1: The stabilized weight values are obtained when the error value (target output) is equal to zero or a predefined minimum value. The error value used for convergence in this work is 0.1. The following procedure uses this concept for the weight matrix calculation.
Step 2: Supply the target vectors t1 to the output-layer neurons.
Step 3: Since, for convergence,

(t1 − y1) = 0.1    (2.10)
the output of the output-layer neurons is set equal to the target values as:

y1 = t1 − 0.1    (2.11)

Step 4: Once the output value is calculated, the sum of the weighted input signals is known, and the weights are estimated from it. Thus the weight values are estimated without any training methodology. This technique accounts for a higher convergence rate, since one set of weights is estimated directly.
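The direct estimation in Steps 1-4 can be sketched as follows. This Python sketch rests on an assumed reading of the text: since only the winning Kohonen unit is active for a given pattern, the outgoing weights of that winner can be set directly so the output equals the target minus the fixed error value (y1 = t1 − 0.1, Equation 2.11), with no iteration. The paper's implementation is in C#.

```python
import numpy as np

def estimate_output_weights(patterns, targets, kohonen_W, eps=0.1):
    """Direct (non-iterative) hidden-to-output weight estimation for the
    modified CPN, under the winner-take-all reading described above.

    patterns : (n_samples, n_in) training inputs;
    targets  : (n_samples, n_out) target vectors t1;
    kohonen_W: (n_hidden, n_in) stabilized Kohonen weights (Step 1 of
    Stage A). eps is the predefined minimum error value (0.1 here).
    """
    n_hidden = kohonen_W.shape[0]
    n_out = targets.shape[1]
    W_out = np.zeros((n_out, n_hidden))
    for p, t in zip(patterns, targets):
        # winning Kohonen unit: nearest weight vector to the pattern
        winner = np.argmin(np.linalg.norm(kohonen_W - p, axis=1))
        W_out[:, winner] = t - eps     # y1 = t1 - 0.1, estimated directly
    return W_out
```

Because each weight column is written once rather than learned iteratively, this is the "higher convergence rate" the text attributes to the modified CPN.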
Stage B: Performs the training of the weights from the second hidden node to the output nodes.
1. Calculate the net-input values from the Kohonen layer to the second hidden-layer units.
2. Calculate the outputs from the second hidden layer.
3. Calculate the net-input values to the output-layer units.
4. Calculate the outputs from the output units.
5. Calculate the error term for the output units, using Equations (2.5a) and (2.5b).
6. Calculate the error term for the hidden units by applying Modified δopk as in Equation (2.7).
7. Update the weights on the output layer using Equation (2.8).
8. Update the weights on the hidden layer using Equation (2.9).
Repeat Steps 1 to 8 until the error (Ypk − Opk) is acceptably small for each training vector pair. As in classical BP, the proposed algorithm is stopped when the cubes of the differences between the actual and target values, summed over all units and all patterns, are acceptably small.
The description of notation used in the training procedure is as given below:
Xpi: ith element of the pth input pattern
nethpj: net input to the jth hidden unit
Whji: weight on the connection from the ith input unit to the jth hidden unit
ipj: output of the jth hidden unit (input to the output layer)
netopk: net input to the kth output unit
Wokj: weight on the connection from the jth hidden unit to the kth output unit
Opk: actual output of the kth output unit
3. Software Implementation
This section discusses the phases of developing the proposed character recognition system. All the algorithms were implemented in the C# programming language and run under the Windows 7 operating system on two Pentium(R) machines: one with 2.00 GB RAM and a 1.83 GHz processor, the other with 4.00 GB RAM and a 2.13 GHz processor. Different interfaces representing the phases of the system development are shown in Figures 2.6 to 2.11.
3.1 Character Acquisition
The character acquisition interface, shown in Figures 2.7 and 2.8, provides a drawing area that serves as a platform for acquiring characters from users. Any character drawn on this platform must be saved into the developed system's database by clicking the 'Add' button. The interface also captures the number of strokes and the pressure used in writing a character. A total of 6,200 character samples were acquired using a G-Pen 450, as shown in Figure 2.2,
and stored in the database.
3.2 Network Training
Before the network can be trained, network parameters such as the learning rate, the epoch value, the quit error and the character image size must be specified in the 'Application setting' interface. The next step is to specify the total number of characters to be loaded from the database for training. The database size can be varied by specifying the maximum number of characters to be selected from each character category (uppercase, lowercase and digits). This is accomplished with the interfaces shown in Figures 2.9 and 2.10. On clicking the 'Training phase' button, the neural network trains itself on the alphanumeric characters A-Z, a-z and 0-9, performing iterations based on the error value, learning rate and epoch value stipulated for the network. The epoch count increases from 1 to 10,000 and the error decreases from an initial value that depends on the value specified by the user. Training terminates when either the epoch count reaches 10,000 or the error reaches the specified minimum. Network training results, such as the training time and the epoch value, are displayed as shown in Figure 2.10.
3.3 Recognition Phase
Recognition of a character is accomplished by clicking the 'Data acquisition / Recognition phase' button. The user writes a character and clicks the 'Recognize' button. Results such as the detected character, the recognition rate and the recognition time are then displayed, as shown in Figure 2.11.
4. Performance Evaluation
The developed system was evaluated under different learning rates, image sizes, database sizes and system configurations. The results are given in Figures 2.12 to 2.17. Figure 2.12 shows the graph of learning rate versus epochs. Variation of the learning rate has a positive effect on network performance: the smaller the learning rate, the smaller the steps with which the network updates its weights. Intuitively, this makes over-learning less likely, since the network updates its links slowly and in a more refined manner; however, it also means more iterations are required to reach the optimal state. Figure 2.13 shows the graph of image size versus epochs: the image size is directly proportional to the number of epochs. Complex, large input sets require a larger network topology with more iterations, and this adversely affects the recognition time. Figure 2.14 shows the variation of recognition rate with database size. Increasing the database size increases the recognition rate, because the network can match the test character against a larger set of character samples in the vector space; however, the rate of increase in recognition rate with respect to database size is considerably small. Figure 2.15 shows the effect of the input vector dimension (image matrix size) on recognition performance. Three image sizes (5 by 7, 10 by 14 and 20 by 28) were considered; the results showed that the larger the image size, the higher the percentage of recognition, although the rate of change was small. Increasing the image size, however, also increases the recognition time.
System configuration is another factor influencing the performance of the recognition system. Both the training time and the recognition time are measured in CPU (processor) seconds; the higher the processor speed, the lower the training and recognition times, since the 1.83 GHz system needs more time to reach the required number of epochs (iterations) than the 2.13 GHz system. The result is shown in Figure 2.16. Figure 2.17 compares the performance of the developed system with related works in the literature. The graph shows that the accuracy of the system developed by Muhammad et al. (2005) is higher than that of Muhammad et al. (2006), indicating that the Counterpropagation neural network performs better than standard Backpropagation. The best recognition rate was achieved by the developed system, with no
recognition failure. This means that the developed system was able to recognise characters irrespective of
the writing styles.
5. Conclusion and Future Work
In this paper, we have developed an effective feature extraction technique for the proposed online character recognition system using a hybrid of geometrical and statistical features. The hybrid feature extraction was developed to alleviate the problem of poor feature extraction in online character recognition systems. In addition, a hybrid of modified Counterpropagation and modified Optical Backpropagation neural networks was developed for the proposed system. The performance of the online character recognition system was evaluated under different learning rates, image sizes and database sizes. The results were compared with works in the literature using Counterpropagation and standard Backpropagation respectively. Counterpropagation performed better than standard Backpropagation in terms of correct recognition, false recognition and recognition failure, with recognition rates of 94% and 81% respectively. The developed system achieved better performance still, with no recognition failure, a 99% recognition rate and an average recognition time of 2 milliseconds. Future work could be geared towards integrating an optimization algorithm into the learning algorithms to further enhance the convergence of the neural network.
References
Anoop, M. N. and Anil, K. J. (2004): "Online Handwritten Script Recognition", IEEE Trans. PAMI, 26(1): 124-130.
Liu, C. L., Nakashima, K., Sako, H. and Fujisawa, H. (2004): "Handwritten Digit Recognition: Investigation of Normalization and Feature Extraction Techniques", Pattern Recognition, 37(2): 265-279.
Mohamad, D. and Zafar, M. F. (2004): "Comparative Study of Two Novel Feature Vectors for Complex Image Matching Using Counterpropagation Neural Network", Journal of Information Technology, FSKSM, UTM, 16(1): 2073-2081.
Naser, M. A., Adnan, M., Arefin, T. M., Golam, S. M. and Naushad, A. (2009): "Comparative Analysis of Radon and Fan-beam based Feature Extraction Techniques for Bangla Character Recognition", IJCSNS International Journal of Computer Science and Network Security, 9(9): 120-135.
Pradeep, J., Srinivasan, E. and Himavathi, S. (2011): "Diagonal Based Feature Extraction for Handwritten Alphabets Recognition using Neural Network", International Journal of Computer Science and Information Technology (IJCSIT), 3(1): 27-37.
Shanthi, N. and Duraiswamy, K. (2007): "Performance Comparison of Different Image Sizes for Recognizing Unconstrained Handwritten Tamil Characters Using SVM", Journal of Science, 3(9): 760-764.
Santosh, K. C. and Nattee, C. (2009): "A Comprehensive Survey on Online Handwriting Recognition Technology and Its Real Application to the Nepalese Natural Handwriting", Kathmandu University Journal of Science, Engineering and Technology, 5(1): 31-55.
Simon, G. and Horst, B. (2004): "Feature Selection Algorithms for the Generalization of Multiple Classifier Systems and their Application to Handwritten Word Recognition", Pattern Recognition Letters, 25(11): 1323-1336.
Gupta, K., Rao, S. V. and Viswanath (2007): "Speeding up Online Character Recognition", Proceedings of Image and Vision Computing New Zealand, Hamilton: 41-45.
Manuel, J. F. and Joaquim, A. J. (2001): "Experimental Evaluation of an Online Scribble Recognizer", Pattern Recognition Letters, 22(12): 1311-1319.
Rejean, P. and Sargur, S. N. (2000): "On-line and Off-line Handwriting Recognition: A Comprehensive Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1): 63-84.
Kittler, J. and Roli, F. (2000): "1st International Workshop on Multiple Classifier Systems", Cagliari, Italy.
Kittler, J. and Roli, F. (2001): "2nd International Workshop on Multiple Classifier Systems", Cagliari, Italy.
Marc, P., Alexandre, L. and Christian, G. (2001): "Character Recognition Experiment using Unipen Data", Proceedings of ICDAR, Seattle: 10-13.
Muhammad, F. Z., Dzuulkifli, M. and Razib, M. O. (2006): "Writer Independent Online Handwritten Character Recognition Using Simple Approach", Information Technology Journal, 5(3): 476-484.
Freeman, J. A. and Skapura, D. M. (1992): "Backpropagation Neural Networks: Algorithms, Applications and Programming Techniques", Addison-Wesley Publishing Company: 89-125.
Minai, A. A. and Williams, R. D. (1990): "Acceleration of Backpropagation through Learning Rate and Momentum Adaptation", Proceedings of the International Joint Conference on Neural Networks: 1676-1679.
Riedmiller, M. and Braun, H. (1993): "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm", Proceedings of the IEEE International Conference on Neural Networks (ICNN), 1: 586-591, San Francisco.
Otair, M. A. and Salameh, W. A. (2005): "Online Handwritten Character Recognition using an Optical Backpropagation Neural Network", Issues in Informing Science and Information Technology, 2: 787-797.
Rajashekararadhya, S. V. and Vanaja, P. R. (2008a): "Handwritten Numeral Recognition of Three Popular South Indian Scripts: A Novel Approach", Proceedings of the Second International Conference on Information Processing (ICIP): 162-167.
Rajashekararadhya, S. V. and Vanaja, P. R. (2008b): "Isolated Handwritten Kannada Digit Recognition: A Novel Approach", Proceedings of the International Conference on Cognition and Recognition: 134-140.
Fenwa, O. D., Omidiora, E. O. and Fakolujo, O. A. (2012): "Development of a Feature Extraction Technique for Online Character Recognition System", Innovative Systems Design and Engineering, International Institute of Science, Technology and Education, 3(3): 10-23.
Figure 2.1: Character ‘n’ in 5 by 5 (25 equal zones)
Figure 2.2: Block Diagram of the Developed Character Recognition System
Figure 2.3: The snapshot of Genius Pen (G-Pen 450) Digitizer for character acquisition
Figure 2.4: The Developed Hybrid Feature Extraction Model
a1 = E(W1 − P), a2 = logsig(W2a1 + b2), a3 = maxsoft(W3a2 + b3)
Figure 2.5: The Hybrid Neural Network Model
Figure 2.6: Graphic user interface of the developed Online Character Recognition System
Figure 2.7: Acquisition of character ’A’ using 3 strokes
Figure 2.8: Acquisition of character ’A’ using 2 strokes
Figure 2.9: Network parameter setting and loading data to be trained from the database
Figure 2.10: Network Training in progress
Figure 2.11: Result of recognition process displaying recognition status
Figure 2.12: Graph showing the effect of variation in Learning Rate and Database size on Epochs
Figure 2.13: Graph showing the effect of variation of Image size and Database size on Epoch
Figure 2.14: Graph showing the effect of variation in Database size and Epoch on the Recognition Rate
Figure 2.15: Graph showing the effect of variation in Image size and Database size on the Recognition Rate
Figure 2.16: Graph showing the effect of variation in system configurations and Database size on Epochs
CR = Correct Recognition; FR = False Recognition; RF = Recognition Failure
Figure 2.17: Graph showing performance evaluation of the developed system with related works