Technical studies have been performed on many foreign languages such as Japanese and Chinese, but work on ancient Indian scripts is still in its infancy. Because the Modi script is ancient and cursive, OCR for it is still not widely available. To the best of our knowledge, Prof. D. N. Besekar, Dept. of Computer Science, Shri Shivaji College of Science, Akola, has proposed a system for recognition of offline handwritten Modi script vowels. Recognizing handwritten Modi characters is very challenging because writing style varies from one individual to another. Many vital documents containing precious information were written in Modi, and these documents are currently stored and preserved in temples and museums. Over time these documents will wither away if not given due attention. In this paper we propose a system for recognition of handwritten Modi script characters; the proposed method uses the image processing techniques and algorithms described below.
General Terms
Preprocessing techniques: gray scaling, thresholding, boundary detection, thinning, cropping, scaling, template generation. Other algorithms used: average method, Otsu method, Stentiford method, template-based matching.
DEVNAGARI DOCUMENT SEGMENTATION USING HISTOGRAM APPROACH - ijcseit
Document segmentation is one of the critical phases in machine recognition of any language. Correct segmentation of individual symbols decides the accuracy of the character recognition technique. It is used to decompose an image of a sequence of characters into sub-images of individual symbols by segmenting lines and words. Devnagari is the most popular script in India; it is used for writing the Hindi, Marathi, Sanskrit and Nepali languages, and Hindi is the third most popular language in the world. Devnagari documents consist of vowels, consonants and various modifiers, so proper segmentation of a Devnagari word is challenging. A simple histogram-based approach to segmenting Devnagari documents is proposed in this paper, and various challenges in segmentation of Devnagari script are also discussed.
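The histogram approach described in this abstract rests on one observation: the horizontal projection profile of a binarized page is zero on blank rows, and each maximal run of non-blank rows is one text line. A simplified sketch (real Devnagari pages would also need the shirorekha header line handled):

```python
import numpy as np

def segment_lines(binary):
    """Split a binarized page (True = ink) into text lines using the
    horizontal projection histogram: zero-ink rows separate lines."""
    profile = binary.sum(axis=1)          # ink pixels per row
    lines, start = [], None
    for r, count in enumerate(profile):
        if count > 0 and start is None:
            start = r                     # a line begins here
        elif count == 0 and start is not None:
            lines.append((start, r))      # line ends before this blank row
            start = None
    if start is not None:
        lines.append((start, len(profile)))
    return lines

# two "text lines" of ink separated by blank rows
page = np.zeros((12, 20), dtype=bool)
page[2:4, :] = True
page[7:10, :] = True
print(segment_lines(page))   # [(2, 4), (7, 10)]
```

Applying the same idea to the vertical projection of each extracted line yields word boundaries.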
Devnagari handwritten numeral recognition using geometric features and statis... - Vikas Dongre
This paper presents a Devnagari numeral recognition method based on statistical discriminant functions. Seventeen geometric features based on pixel connectivity, lines, line directions, holes, image area, perimeter, eccentricity, solidity, orientation, etc. are used to represent the numerals. Five discriminant functions, viz. linear, quadratic, diaglinear, diagquadratic and Mahalanobis distance, are used for classification. 1500 handwritten numerals are used for training, and another 1500 are used for testing. Experimental results show that the linear, quadratic and Mahalanobis discriminant functions provide better results. The outputs of these three discriminants are fed to a majority-voting combination classifier, which is found to offer better results than the individual classifiers.
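The majority-voting combination used above can be sketched in a few lines. This is a generic sketch of the technique, not the paper's code; the paper feeds it the linear, quadratic and Mahalanobis outputs:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions by majority vote;
    ties are broken in favor of the earliest classifier."""
    counts = Counter(predictions)
    best = max(counts.values())
    for p in predictions:          # first prediction reaching the max wins
        if counts[p] == best:
            return p

# three discriminant classifiers vote on one numeral
assert majority_vote([7, 7, 3]) == 7   # two of three agree
assert majority_vote([1, 2, 2]) == 2
assert majority_vote([4, 5, 6]) == 4   # three-way tie: first classifier
```

The tie-breaking rule here is one simple choice; another common option is to fall back to the single most accurate classifier.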
A Comprehensive Study On Handwritten Character Recognition System - iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
A bidirectional text transcription of braille for odia, hindi, telugu and eng... - eSAT Journals
Abstract
A communication gap commonly leaves visually challenged people poorly understood. To bridge this gap, this paper emphasizes a learning platform, built around the Braille pattern, for both the general public and the visually challenged in any native language. Braille, the code well known to the visually challenged as their mode of communication, is converted into normal text in the Odia, Hindi, Telugu and English languages, using image segmentation as the base criterion and MATLAB as the simulation environment, so that anyone can easily decode the information being conveyed. The algorithm makes use of segmentation, histogram analysis, pattern recognition, letter arrays and database generation, with testing in software and deployment on a Spartan 3E FPGA kit. The paper also elaborates on the reverse conversion, from the native languages and English to Braille, making the approach bidirectional. Its processing speed, efficiency and accuracy make it a successful approach in both software and hardware.
Keywords: visually challenged, image segmentation, MATLAB, pattern recognition, Spartan 3e FPGA kit.
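At its core, the text-to-Braille direction described above is a dot-pattern lookup per letter, which maps naturally onto Unicode Braille cells. A minimal sketch; the five-letter table is illustrative only, not the paper's database, and real Grade 1 transcription also needs number and capital signs:

```python
# Dot numbers for the first five English Braille letters (Grade 1).
DOTS = {"a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5)}

def to_braille(text):
    """Map each letter to its Unicode Braille cell: the block starts at
    U+2800, and raised dot n sets bit n-1 of the code point offset."""
    out = []
    for ch in text.lower():
        bits = sum(1 << (d - 1) for d in DOTS[ch])
        out.append(chr(0x2800 + bits))
    return "".join(out)

print(to_braille("bad"))   # ⠃⠁⠙
```

The reverse (Braille-to-text) direction is the same table read backwards: decode the six dot bits of each cell and look up the letter.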
BrailleOCR: An Open Source Document to Braille Converter Application - pijush15
This presentation is about an open-source application, BrailleOCR, that converts scanned documents to Braille and thus helps the visually impaired.
What is the use of this application in real life? BrailleOCR is currently the only app that integrates optical character recognition and Braille translation together, and it will eventually help convert many important documents to Braille. The project site for this project is given here:
IJCA Paper: http://www.ijcaonline.org/archives/volume68/number16/11664-7254
Project site: https://code.google.com/p/brailleocr/
The app uses a four-step process. Initially we have a scanned RGB image. The first step, pre-processing, converts the RGB image to grayscale. The second step performs character recognition using the Tesseract engine. Since recognition may introduce errors, the third step, post-processing, corrects them. The final and most important step is the Braille conversion step.
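The four-step pipeline just described can be sketched structurally as four composed functions. This is a sketch of the pipeline shape only: the `recognize` step below is a stub standing in for the Tesseract call (a real build would invoke Tesseract, e.g. via pytesseract), and the post-processing rule shown is one hypothetical confusion fix:

```python
def preprocess(rgb_pixels):
    """Step 1: RGB -> grayscale using the common luminosity weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_pixels]

def recognize(gray):
    """Step 2: stub for the Tesseract engine; returns raw recognized text."""
    return "HELL0"            # deliberate OCR confusion: digit 0 for letter O

def postprocess(text):
    """Step 3: correct common recognition confusions."""
    return text.replace("0", "O").replace("1", "I")

def braille_stage(text):
    """Step 4: hand the cleaned text to the Braille translation stage."""
    return f"<braille:{text}>"

page = [[(255, 255, 255)] * 2] * 2          # toy 2x2 white "scan"
result = braille_stage(postprocess(recognize(preprocess(page))))
print(result)   # <braille:HELLO>
```

Keeping the stages as separate functions mirrors the app's design: each step can be tested or swapped independently.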
SEGMENTATION OF CHARACTERS WITHOUT MODIFIERS FROM A PRINTED BANGLA TEXT - cscpconf
Optical character recognition (OCR) is one of the fundamental research areas of the image processing and pattern recognition fields. The accuracy of an OCR system depends on proper segmentation of the characters. This paper is concerned with the segmentation of printed Bangla characters without modifiers for an OCR system; the basic steps needed for developing an OCR system are also discussed.
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Isolated Arabic Handwritten Character Recognition Using Linear Correlation - Editor IJCATR
Handwriting recognition systems have emerged and evolved significantly, especially for the English language, but for the Arabic language such systems have not received sufficient attention in comparison to other languages. The aim of this paper is therefore to highlight optical character recognition using a two-dimensional linear correlation algorithm; the resulting program can identify isolated Arabic letters and has been applied successfully.
A Review on Geometrical Analysis in Character Recognition - iosrjce
This paper presents an approach to recognizing off-line Bangla numerals. Although many OCR systems recognize Bangla numerals today, handwritten character recognition is still a challenging problem in pattern recognition. Numeral recognition is the process of identifying a given character according to a predefined character set; handwritten Bangla numerals are difficult because they differ in shape and size and are highly curved. We try to establish a process to recognize such handwritten Bangla numerals of different shapes and sizes. The input scanned image is first binarized; we then segment all ten Bangla digits to identify each individual digit from the scanned image, and use line segmentation to extract a template-based feature from each numeral. A high correlation coefficient provides a successful match between the test data and the training data.
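Correlation-based template matching of the kind described above reduces to computing the Pearson correlation coefficient between a test glyph and each stored template, then picking the best match. A minimal sketch with toy 3x3 "templates" (illustrative shapes, not the paper's data):

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two equal-size
    glyph images, flattened to vectors."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom else 0.0

def classify(glyph, templates):
    """Pick the template whose correlation with the glyph is highest."""
    return max(templates, key=lambda d: correlation(glyph, templates[d]))

# toy templates: a ring for "0", a vertical bar for "1"
templates = {
    0: np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]),
    1: np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
}
noisy_one = np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0]])  # a smudged "1"
print(classify(noisy_one, templates))   # 1
```

A high coefficient (close to 1) indicates a confident match; in practice a minimum-correlation threshold is often added to reject unknown shapes.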
A PREPROCESSING MODEL FOR HAND-WRITTEN ARABIC TEXTS BASED ON VORONOI DIAGRAMS - ijcsit
In this paper, a preprocessing model for handwritten Arabic text based on Voronoi diagrams (VDs) is presented and discussed. The proposed VD-based preprocessing model consists of five stages: a preparatory stage, page segmentation, thinning, baseline estimation, and slant correction. In the preparatory stage, the text image is converted via VDs into a group of geometrical forms consisting of edges and vertices, which the other stages of the model build on; this stage comprises four main processes: binarization, edge extraction and contour tracking, sampling, and point-VD construction. The second stage is page segmentation based on VD area. In the third stage, an efficient method for text structuring (that is, thinning) is presented. In the fourth stage, a novel VD-based baseline estimation method is presented. In the fifth stage, an efficient technique for slant detection and correction is proposed and discussed.
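The point-VD idea behind the preparatory stage can be approximated discretely: label every pixel with its nearest sampled contour point, which partitions the page into Voronoi regions. This brute-force sketch is only an approximation of the concept; the paper constructs true Voronoi diagrams with explicit edges and vertices:

```python
def voronoi_labels(width, height, sites):
    """Assign each pixel to its nearest site: a discrete approximation
    of the point Voronoi diagram used for page segmentation."""
    def nearest(x, y):
        return min(range(len(sites)),
                   key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]

# two sampled contour points act as Voronoi sites on a 6x1 strip
labels = voronoi_labels(6, 1, [(1, 0), (4, 0)])
print(labels[0])   # [0, 0, 0, 1, 1, 1]
```

Pixels switch ownership exactly at the perpendicular bisector between sites, which is what makes VD regions useful for separating neighboring text components.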
Nowadays character recognition has gained a lot of attention in pattern recognition because of its applications in various fields; it is one of the most successful applications of automatic pattern recognition. Research in OCR is popular for its application potential in banks, post offices, office automation, etc. Handwritten character recognition (HCR) is useful in cheque processing in banks, almost all kinds of form processing systems, handwritten postal address resolution and more. This paper presents a simple and efficient approach to OCR: the translation of scanned images of printed text into machine-encoded text. It makes use of different image analysis phases followed by detection via pre-processing and post-processing. The paper also describes scanning the entire document (the same as segmentation in our case) and recognizing individual characters from the image irrespective of their position, size and font style; it deals with recognition of symbols from the English language, which is internationally accepted.
An Efficient Segmentation Technique for Machine Printed Devanagiri Script: Bo... - iosrjce
Segmentation plays a major role in processing script documents for the extraction of various features, and many researchers are working to make the segmentation process simple as well as efficient. In this paper a simple technique for both line and word segmentation of a script document is proposed. The main objective of the technique is to recognize the spaces that separate two text lines; a similar procedure is followed for word segmentation. In this work, three different scanned documents were taken as input images for both the line and word segmentation techniques. The results were outstanding: the method achieves 100% accuracy for both line and word segmentation. Evaluation results show that it outperforms several competing methods.
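Word segmentation by recognizing separating spaces, as described above, can be sketched with a vertical projection: within one text line, a run of blank columns wider than some gap threshold marks a word boundary. A simplified sketch of that idea, with an assumed `min_gap` parameter:

```python
import numpy as np

def segment_words(line_img, min_gap=2):
    """Split a binarized text line (True = ink) into words: blank-column
    runs wider than min_gap separate words."""
    profile = line_img.sum(axis=0)                       # ink per column
    cols = [c for c, v in enumerate(profile) if v > 0]   # columns with ink
    words = []
    start = prev = cols[0]
    for c in cols[1:]:
        if c - prev > min_gap:            # wide blank run: word boundary
            words.append((start, prev + 1))
            start = c
        prev = c
    words.append((start, prev + 1))
    return words

# one text line with two "words" separated by four blank columns
line = np.zeros((3, 12), dtype=bool)
line[:, 0:3] = True
line[:, 7:10] = True
print(segment_words(line))   # [(0, 3), (7, 10)]
```

The gap threshold is what distinguishes inter-word spaces from the narrow gaps between characters of the same word.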
The Heuristic Extraction Algorithms for Freeman Chain Code of Handwritten Cha... - Waqas Tariq
Handwriting character recognition (HCR) is the ability of a computer to receive and interpret handwritten input. Among the many representation schemes used in HCR is the Freeman chain code (FCC): a sequence of direction codes along a character from a starting point, often used in image processing. The main problem in representing a character with FCC is that the code depends on the starting point. Unfortunately, FCC extraction using one continuous route, and minimization of the chain code length extracted from a thinned binary image (TBI), have not been widely explored. To address this, heuristic algorithms are proposed to extract an FCC that correctly represents the character. This paper proposes two such algorithms, one randomized and one enumeration-based: the randomized algorithm makes random choices, while the enumeration-based algorithm enumerates all candidate solutions. The performance measures are route length and computation time. The experiments are performed on the chain code representation derived from established previous work on the Center of Excellence for Document Analysis and Recognition (CEDAR) dataset, which consists of 126 upper-case letter characters. The results show that the route lengths of the two algorithms are similar, but the computation time of the enumeration-based algorithm is higher than that of the randomized algorithm, because the enumeration-based algorithm considers all branches in the route walk.
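For a stroke without branches, extracting a Freeman chain code from a thinned binary image is a simple greedy walk; the heuristics the paper studies are needed precisely when branches force a choice of route. A minimal sketch of the branch-free case:

```python
# 8-connected Freeman directions: 0 = East, then counter-clockwise
# (rows grow downward, so North-East is (-1, +1)).
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain(pixels, start):
    """Greedy walk over a thinned stroke: from each pixel, step to the
    first unvisited 8-neighbour and record its direction code."""
    code, seen, cur = [], {start}, start
    while True:
        for d, (dr, dc) in enumerate(DIRS):
            nxt = (cur[0] + dr, cur[1] + dc)
            if nxt in pixels and nxt not in seen:
                code.append(d)
                seen.add(nxt)
                cur = nxt
                break
        else:
            return code          # no unvisited neighbour: walk is done

# a short horizontal stroke with a diagonal tail, as (row, col) pixels
stroke = {(0, 0), (0, 1), (0, 2), (1, 3)}
print(freeman_chain(stroke, (0, 0)))   # [0, 0, 7]
```

Changing the starting point changes the code, which is exactly the sensitivity the paper's algorithms try to manage.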
FREEMAN CODE BASED ONLINE HANDWRITTEN CHARACTER RECOGNITION FOR MALAYALAM USI... - acijjournal
Handwritten character recognition is the conversion of handwritten text to machine-readable and editable form; online character recognition deals with live conversion of characters. Malayalam is a language spoken by millions of people in the state of Kerala and the union territories of Lakshadweep and Pondicherry in India; it is written mostly in the clockwise direction and consists of loops and curves. The method trains a simple three-layer neural network using the backpropagation algorithm. Freeman codes are used to represent each character as a feature vector; these feature vectors act as inputs to the network during the training and testing phases, and the output is the character expressed in Unicode format.
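The training scheme above, a three-layer network fitted by backpropagation, can be sketched in NumPy. This is a generic toy sketch (made-up four-sample data, sigmoid units, squared-error loss), not the paper's network or its Freeman-code features:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy training set: 4 two-dimensional feature vectors, one target each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden layer activations
    return h, sigmoid(h @ W2 + b2)    # output layer activations

_, out0 = forward(X)
loss0 = float(((out0 - y) ** 2).mean())

for _ in range(2000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)   # error term at the output layer
    d_h = d_out @ W2.T * h * (1 - h)      # error backpropagated to hidden
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

_, out1 = forward(X)
loss1 = float(((out1 - y) ** 2).mean())
print(loss1 < loss0)   # gradient descent reduces the squared error
```

In the paper's setting, `X` would hold Freeman-code feature vectors and the output layer would encode Unicode character classes.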
Character Recognition (Devanagari Script) - IJERA Editor
Character recognition has found major interest in research and in practical applications that analyze and study characters in different languages using images as input. In this paper the user writes a Devanagari character using the mouse as a plotter, and the character is saved as an image. This image is processed using optical character recognition, in which location, segmentation and pre-processing of the image are done. Neural networks are then used to identify the characters through the further OCR stages of feature extraction and post-processing. The entire process is done using MATLAB.
Recognition of Words in Tamil Script Using Neural Network - IJERA Editor
In this paper, word recognition using a neural network is proposed. The recognition process starts by partitioning the document image into lines, words, and characters, and then capturing the local features of the segmented characters. After the characters are classified, the word image is transformed into a unique code based on the character codes; this code can describe any form of word, including words with mixed styles and different sizes. The sequence of character codes of a word forms the input pattern, and the word code is the target value of the pattern. A neural network is trained on the word patterns; a test word pattern is then recognized or rejected based on the network error value. Experiments conducted on a local database to evaluate the word recognition system show good accuracy. Because training is based only on the unique codes of the characters and words of a language, the method can be applied to word recognition for any language.
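The word-coding step described above, concatenating per-character codes into one unique word code, can be sketched as a table lookup. The character-code table here is entirely hypothetical, just to show the mechanism; the paper derives its codes from the classified Tamil characters:

```python
# hypothetical character-code table (a real system covers the full script)
CODE = {"t": "01", "a": "02", "m": "03", "i": "04", "l": "05"}

def word_code(word):
    """Concatenate per-character codes into the word's unique code,
    which serves as the pattern's target value."""
    return "".join(CODE[c] for c in word)

print(word_code("tamil"))   # 0102030405
```

Because the word code is built purely from character codes, swapping in another language's table is all that is needed to reuse the scheme, which is the portability claim the abstract makes.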
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Isolated Arabic Handwritten Character Recognition Using Linear CorrelationEditor IJCATR
Handwriting recognition systems have emerged and evolved significantly, especially in English language, but for the Arabic
language, such systems did not find that sufficient attention in comparison to other languages .Therefore, the aim of this paper to
highlight the Optical Character Recognition using linear correlation algorithm in two dimensions and then the programs can to identify
discrete Arabic letters application started manually, the program has been successfully applied.
A Review on Geometrical Analysis in Character Recognitioniosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This paper presents an approach to recognize off-line Bangla numeral. Today there are many OCR used to recognize
Bangla numeral. The recognition of handwritten character is still a challenging work in the field of pattern recognition. Numeral
recognition in pattern recognition is the process to identify the given character according to the predefined character set. The difficulties
of recognition of handwritten Bangla numeral are that they are different in shapes and sizes which are much curved in nature. . We try
to establish a process to recognize such handwritten Bangla numerals having different shape and size. The input scanned image is first
to be binarized. Then we have segmented all the ten digits of Bangla numerals to identify each and individual digit from a scanned
image. We have used line segmentation to extract the feature from each numeral based on templates. A high correlation coefficient
method provides a successful match between the test data and training data.
A PREPROCESSING MODEL FOR HAND-WRITTEN ARABIC TEXTS BASED ON VORONOI DIAGRAMSijcsit
In this paper, a preprocessing model for hand-written Arabic text on the basis of the Voronoi Diagrams (VDs) is presented and discussed. The proposed VD-based pre-processing model consists of five stages: a preparatory stage, page segmentation, thinning, baseline estimation, and slanting correction. In the preparatory stage, the text image is converted via VDs into a group of geometrical forms that consist of edges and vertices that are used to create the other stages of the proposed model. This stage consists of
four main processes: binarization, edge extraction and contour tracking, sampling, and point-VD construction. The second stage is the page segmentation stage based on the VD area. In the third stage, an efficient method for text structuring (that is, thinning) is presented. In the fourth stage, a novel baseline
based VD method is presented. In the fifth stage, an efficient technique for slanting detection and correction is proposed and discussed.
Nowadays character recognition has gained lot of attention in the field of pattern recognition due to its application in various fields. It is one of the most successful applications of automatic pattern recognition. Research in OCR is popular for its application potential in banks, post offices, office automation etc. HCR is useful in cheque processing in banks; almost all kind of form processing systems, handwritten postal address resolution and many more. This paper presents a simple and efficient approach for the implementation of OCR and translation of scanned images of printed text into machine-encoded text. It makes use of different image analysis phases followed by image detection via pre-processing and post-processing. This paper also describes scanning the entire document (same as the segmentation in our case) and recognizing individual characters from image irrespective of their position, size and various font styles and it deals with recognition of the symbols from English language, which is internationally accepted.
An Efficient Segmentation Technique for Machine Printed Devanagiri Script: Bo...iosrjce
Segmentation technique plays a major role in scripting the documents for extraction of various
features. Many researchers are doing various research works in this field to make the segmenting process
simple as well as efficient. In this paper a simple segmentation technique for both the line and word
segmentation of a script document has been proposed. The main objective of this technique is to recognize the
spaces that separate two text lines.For the Word segmentation technique also similar procedure is followed. In
this work ,three different scanned document have been taken as input images for both line and word
segmentation techniques. The results found were outstanding with average accuracy for both line and word. It
provides 100% accuracy for line segmentation and 100% for line segmentation as well. Evaluation results show
that our method outperforms several competing methods.
The Heuristic Extraction Algorithms for Freeman Chain Code of Handwritten Cha...Waqas Tariq
Handwriting character recognition (HCR) is the ability of a computer to receive and interpret handwritten input. In HCR, there are many representation schemes and one of them is Freeman chain code (FCC). Chain code is a sequence of code direction of a characters and connection to a starting point which is often used in image processing. The main problem in representing character using FCC that it is depends on the starting points. Unfortunately, the study about FCC extraction using one continuous route and to minimizing the length of chain code to FCC from a thinned binary image (TBI) have not been widely explored. To solve this problem, heuristic algorithms are proposed to extract the FCC that is correctly representing the characters. This paper proposes two heuristics algorithm that are based on randomized and enumeration-based algorithms to solve the problems. As problem solving techniques, the randomized algorithm makes the random choices while enumeration-based algorithm enumerates all possible candidates for solution. The performance measures of the algorithms are the route length and computation time. The experiment on the algorithms are performed based on the chain code representation derived from established previous works of Center of Excellence for Document Analysis and Recognition (CEDAR) dataset which consists of 126 upper-case letter characters. The experimental result shows that route length of both algorithms are similar but the computation time of enumeration-based algorithm is higher than randomized algorithm. This is because enumeration-based algorithm considers all branches in route walk.
FREEMAN CODE BASED ONLINE HANDWRITTEN CHARACTER RECOGNITION FOR MALAYALAM USI...acijjournal
Handwritten character recognition is the conversion of handwritten text into a machine-readable and editable form; online character recognition deals with live conversion of characters as they are written. Malayalam is a language spoken by millions of people in the state of Kerala and the union territories of Lakshadweep and Pondicherry in India. Its script is written mostly in the clockwise direction and consists of loops and curves. The method aims at training a simple three-layer neural network using the backpropagation algorithm.
Freeman codes are used to represent each character as feature vector. These feature vectors act as inputs to the network during the training and testing phases of the neural network. The output is the character expressed in the Unicode format.
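As an illustration of the training scheme described above, here is a minimal three-layer network trained with backpropagation on toy input patterns (NumPy, sigmoid units, squared-error loss). The actual Freeman-code feature vectors and Unicode outputs of the paper are not reproduced; the XOR-style data below is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for Freeman-code feature vectors: XOR-style patterns.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
lr = 1.0

def forward():
    H = sigmoid(X @ W1)             # hidden-layer activations
    return H, sigmoid(H @ W2)       # output-layer activations

_, Y0 = forward()
loss0 = float(np.mean((Y0 - T) ** 2))

for _ in range(5000):
    H, Y = forward()
    dY = (Y - T) * Y * (1 - Y)          # output-layer delta (squared error)
    dH = (dY @ W2.T) * H * (1 - H)      # hidden-layer delta, backpropagated
    W2 -= lr * H.T @ dY                 # gradient-descent weight updates
    W1 -= lr * X.T @ dH

loss = float(np.mean((Y - T) ** 2))
print(loss < loss0)  # True: training reduced the error
```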
Character Recognition (Devanagari Script)IJERA Editor
Character recognition has found major interest in research and in practical applications that analyze and study characters of different languages using images as input. In this paper the user writes a Devanagari character using the mouse as a plotter, and the corresponding character is saved as an image. This image is processed using optical character recognition, in which location, segmentation and pre-processing of the image are done. Neural networks are then used to identify the characters through the further OCR steps of feature extraction and post-processing. The entire process is done using MATLAB.
Recognition of Words in Tamil Script Using Neural NetworkIJERA Editor
In this paper, word recognition using a neural network is proposed. The recognition process starts by partitioning the document image into lines, words and characters and then capturing the local features of the segmented characters. After the characters are classified, the word image is transformed into a unique code based on the character codes. This code describes any form of word, including words with mixed styles and different sizes. The sequence of character codes of a word forms the input pattern, and the word code is the target value of the pattern. A neural network is used to train the word patterns; the trained network is then tested with word patterns, which are recognized or rejected based on the network error value. Experiments were conducted with a local database to evaluate the performance of the word-recognition system, and good accuracy was obtained. The method can be applied to word recognition in any language, as the training is based only on the unique codes of the characters and words belonging to that language.
This study was designed to evaluate the effect of a 70% ethanolic crude extract of Portulaca oleracea L. on mouse organs in vivo. The acute toxicity of the 70% ethanolic extract of the plant on normal mice was studied; no toxic effect was noted even at 9500 mg/kg body weight by subcutaneous injection. Histopathological changes due to the ethanolic extract in healthy mice comprised hyperplasia of the white pulp with amyloid deposition, proliferation of megakaryocytes, and mononuclear cell infiltration in the liver and kidney parenchyma. No significant lesions were detected in the brain, heart or ovary in any treated group.
The combination of steganography and cryptography is considered one of the best security methods for message protection. For this reason, this paper proposes a data-hiding system based on image steganography and cryptography to secure data transfer between source and destination. The animated GIF format is chosen as the carrier file because of its wide use in web pages, and an LSB (Least Significant Bit) algorithm is employed to hide the message inside the pixel colors of the animated GIF frames. To increase the security of hiding, each GIF frame is converted to a 256-color BMP image, its palette is sorted, and each pixel is reassigned to its new index; furthermore, the message is compressed with the LZW (Lempel-Ziv-Welch) algorithm before being hidden in the image frames. The proposed system was evaluated for effectiveness, and the results show that the encoding and decoding methods used make the system efficient at securing data from unauthorized users. The system is therefore recommended to Internet users for establishing more secure communication.
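A minimal sketch of the LSB-embedding idea described above, using a flat list of pixel values as the carrier and zlib compression standing in for the paper's LZW step (the palette sorting and GIF frame handling are omitted; a 4-byte length header is an assumption of this sketch, not taken from the paper):

```python
import zlib

def embed(pixels, message):
    """Compress the message (zlib standing in for LZW) and write its bits,
    preceded by a 4-byte length header, into the LSB of each pixel value."""
    payload = zlib.compress(message.encode())
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for the message")
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels):
    """Read the LSBs back, parse the length header, and decompress."""
    bits = [p & 1 for p in pixels]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits) - 7, 8))
    length = int.from_bytes(data[:4], "big")
    return zlib.decompress(data[4:4 + length]).decode()

carrier = list(range(256)) * 8          # stand-in for GIF-frame pixel indices
stego = embed(carrier, "secret message")
print(extract(stego))                   # secret message
```

Note that each carrier value changes by at most 1, which is why LSB embedding is visually imperceptible in a dense image.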
This study employs the low-cost agro-waste biosorbent tamarind (Tamarindus indica) pod shells, together with activated carbon prepared by complete and partial pyrolysis of tamarind pod shells, for the removal of hexavalent chromium ions from aqueous solution. The effects of initial metal-ion concentration, pH, temperature and biomass loading on chromium-removal efficiency were studied. More than 96.9% removal of chromium was achieved using crude tamarind pod shells as biosorbent. The experimental data were fitted with the Langmuir, Freundlich, Temkin and Redlich-Peterson adsorption isotherm models. The data fit the Langmuir, Freundlich and Temkin isotherms well, with regression coefficients R2 above 0.9, while the fit to the Redlich-Peterson isotherm was poorer. Crude tamarind pod shells had a maximum monolayer adsorption capacity of 40 mg/g and a separation factor of 0.0416, indicating that they are the best of the three tested adsorbents. Further, an attempt was made to fit the sorption kinetics with pseudo-first-order and pseudo-second-order reactions; the pseudo-second-order kinetic model fits the experimental data well for all three adsorbents.
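The Langmuir fit mentioned above is usually done via the linearized form Ce/qe = Ce/qmax + 1/(KL*qmax), so regressing Ce/qe on Ce gives qmax from the slope and KL from the intercept. The sketch below uses synthetic data generated with the paper's qmax of 40 mg/g and an assumed KL of 0.5 L/mg (the paper's KL is not given here).

```python
# Linearised Langmuir isotherm: Ce/qe = Ce/qmax + 1/(KL*qmax),
# so slope = 1/qmax and intercept = 1/(KL*qmax) in a Ce/qe vs Ce regression.
def langmuir_fit(Ce, qe):
    x = Ce
    y = [c / q for c, q in zip(Ce, qe)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    qmax = 1.0 / slope
    KL = slope / intercept
    return qmax, KL

# Synthetic equilibrium data on an exact Langmuir curve (qmax=40, KL=0.5).
Ce = [1.0, 2.0, 5.0, 10.0, 20.0]          # mg/L equilibrium concentrations
qe = [40 * 0.5 * c / (1 + 0.5 * c) for c in Ce]
qmax, KL = langmuir_fit(Ce, qe)
print(round(qmax, 1), round(KL, 2))       # 40.0 0.5
```

The separation factor the paper reports is then RL = 1/(1 + KL*C0) for an initial concentration C0, with 0 < RL < 1 indicating favorable adsorption.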
RFID-based public transport ticketing systems rely on widespread networks of RFID readers that locate the user within the transport network in real time, so as to verify whether the user can travel at that time with the ticket held. This paper presents a system that uses the same RFID-based location information to give the user navigation indications based on current location, provided that the user has indicated beforehand the places to be visited. The system was designed to be cost-effectively deployable in the short term yet open to easy extension. The paper is based on ticketing and identification of passengers in public transport. Metropolitan cities like Mumbai and Kolkata suffer severe malfunction of public transport and various security problems: first, confusion among passengers regarding fares leads to corruption; second, mismanagement of public transport exposes passengers to traffic jams; third, public transport faces severe security problems due to anti-social elements. The network comprises three modules: a Base Station Module, In-Bus Modules and a Bus Stop Module. The base station module consists of a monitoring system that includes GSM and a PC. The In-Bus Module consists of two microcontrollers, a GSM modem, GPS, Zigbee, RFID, an LCD and an infrared sensor: RFID is used for ticketing, while GSM and GPS are used for mobile data transmission and location tracking. The Zigbee module, also interfaced with the microcontroller, is used to exchange bus information between the bus and the bus stop. The Bus Stop Module, fixed at every bus stop, consists of a Zigbee node interfaced with a microcontroller.
Stochastic processes have many useful applications and are taught in several university programmes. In this paper we use stochastic processes, focusing on Markov chains, which use a transition matrix to plot a transition diagram; several examples explain various types of transition diagram. The concept behind this topic is simple and easy to understand.
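A minimal example of a transition matrix in action: repeatedly multiplying a distribution row vector by the matrix drives it toward the chain's stationary distribution. The two-state "weather" matrix below is illustrative, not taken from the paper.

```python
# A two-state weather chain: P[i][j] = P(next state = j | current state = i).
P = [[0.9, 0.1],   # sunny -> sunny / rainy
     [0.5, 0.5]]   # rainy -> sunny / rainy

def step(dist, P):
    """One step of the chain: multiply the row vector `dist` by P."""
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

dist = [1.0, 0.0]          # start: certainly sunny
for _ in range(50):        # iterate toward the stationary distribution
    dist = step(dist, P)
print([round(p, 4) for p in dist])  # [0.8333, 0.1667], i.e. pi = (5/6, 1/6)
```

The limit agrees with solving pi*P = pi directly: pi_rainy = 0.2*pi_sunny, so pi = (5/6, 1/6).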
Bio-char can be produced by thermal conversion of biomass. Palm shells were obtained from palmyra palm fruits, air-dried to remove moisture, ground to a powder, and heated at 600 °C, 800 °C and 1000 °C for 2 h respectively to obtain bio-char. The structural properties of the palm-shell powder and bio-char were examined by X-ray diffraction (XRD), and scanning electron microscopy (SEM) was used to observe the bio-char microstructure. Properties such as hydration capacity and pH were also evaluated.
A young astronomer's results, by now ten years old, are retold and put in perspective. The implications are far-reaching: angular momentum shows its clout not only in quantum mechanics, where this is well known, but is also a major player in the space-time theory of the equivalence principle and its ramifications. In general relativity, its fundamental role was largely neglected for the better part of a century. A children's device, a friction-free rotating bicycle wheel suspended from its hub that can be lowered and pulled up reversibly, serves as an eye-opener. The consequences are embarrassingly far-reaching in reviving Einstein's original dream.
When a ductile material with a crack is loaded in tension, deformation energy builds up around the crack tip, and it is understood that at a certain critical condition voids form ahead of the crack tip. Crack extension occurs by coalescence of these voids with the crack tip. The "characteristic distance" (Lc) is defined as the distance between the crack tip and the void responsible for eventual coalescence with it. Nucleation of these voids is generally associated with the presence of second-phase particles or grain boundaries in the vicinity of the crack tip. Although approximate, Lc assumes special significance since it links fracture toughness to the microscopic mechanism considered responsible for ductile fracture. Knowledge of the characteristic distance is also crucial for choosing the mesh size in finite element simulations of crack growth based on damage-mechanics principles. Little work, experimental or numerical, is available in the literature on the dependence of the characteristic distance on fracture-specimen geometry. The present work is an attempt to understand numerically the geometry dependence of the characteristic distance using three-dimensional FEM analysis. The variation of the characteristic distance across the fracture-specimen thickness due to temperature change was also studied, as was its variation with fracture-specimen thickness. Finally, the ASTM fracture-specimen thickness criterion is evaluated for the characteristic-distance parameter. The characteristic distance is found to vary across the specimen thickness; it depends on specimen thickness and converges beyond a certain thickness. Its value also depends on the temperature of the ductile material: in Armco iron it is found to decrease with increasing temperature.
This research studies the degradation behavior of starch blended with different percentages of polypropylene (PP). A twin-screw extruder at 160-190 °C and 50 rpm was used to manufacture the blend sheets. Degradation tests were performed according to ASTM standards (D638 Type IV and D570-98); degradation properties were studied by soil burial, water absorption and hydrolysis tests, and the morphology of the polypropylene/starch blend samples was observed with a Dino-Lite digital microscope. The soil burial tests show that the tensile strength and percentage elongation of the polypropylene/starch blends decrease with increasing starch content and burial time. The hydrolysis tests show weight losses increasing with the amount of starch, while a high percentage of polypropylene was found to decrease the water absorption of the blend. The physical appearance and morphology of the polypropylene/starch blends after soil burial and hydrolysis in water showed that all blend samples were visibly changed after the 90-day study period, whereas the pure polypropylene samples remained unchanged.
Cloud computing solves the problem of real-time demand information and visibility at different locations, allowing information to be delivered with reliability, scalability and flexibility between supplier and customer. A logistics network requires effective information flow and technical support so that the logistics infrastructure can be fully utilized and information collection, transmission and operations can be tracked. The cloud is a fast-growing technology that can effectively reduce the intermediate cost of information flow and improve the links between logistics partners and customers. This paper analyzes the advantages of a cloud-based logistics network and defines how such a network manages Information Flow Control (IFC) over the cloud, allowing the logistics network to work effectively.
On the surface a packet is a chunk of information, but at a deeper level a packet is one unit of binary data capable of being transferred through a network. Delivering data packets reliably and in a timely manner in highly dynamic mobile ad hoc networks is a challenge. Driven by this issue, an efficient Position-based Opportunistic Routing (POR) protocol is proposed that takes advantage of the stateless property of geographic routing. In proactive routing protocols, the route discovery and recovery procedures are time- and energy-consuming; once a path breaks, data packets are lost or delayed for a long time until the route is reconstructed, causing transmission interruption. Geographic routing (GR), by contrast, uses location information to forward data packets in a hop-by-hop fashion: greedy forwarding selects the next-hop forwarder with the largest positive progress toward the destination, while a void-handling mechanism is triggered to route around communication voids. No end-to-end route needs to be maintained, giving GR high efficiency and scalability. In greedy forwarding, the neighbour relatively far from the sender is chosen as the next hop; if that node moves out of the sender's coverage area, the transmission fails. In GPSR, a well-known geographic routing protocol, MAC-layer failure feedback is used to give the packet another chance to be rerouted.
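The greedy-forwarding rule described above can be sketched in a few lines. Coordinates and the neighbour set are illustrative; POR's opportunistic forwarder-list machinery and the void-handling fallback are not modeled, only signalled by a `None` return.

```python
import math

def greedy_next_hop(sender, dest, neighbours):
    """Pick the neighbour with the largest positive progress toward the
    destination; return None when a communication void is hit."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best, best_d = None, dist(sender, dest)
    for n in neighbours:
        d = dist(n, dest)
        if d < best_d:          # only neighbours with positive progress
            best, best_d = n, d
    return best                  # None -> trigger void handling

# Sender at the origin, destination far east, three neighbours in range.
print(greedy_next_hop((0, 0), (10, 0), [(1, 2), (2, 0), (-1, 0)]))  # (2, 0)
```

When every neighbour is farther from the destination than the sender itself, the function returns `None`, which is exactly the local-maximum (void) situation the text describes.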
A supply chain consists of all parties involved, directly or indirectly, in fulfilling a customer request. It includes not only manufacturers and suppliers, but also transporters, warehouses, retailers and even customers themselves. Within each organization, such as a manufacturer, the supply chain includes all functions involved in receiving and filling a customer request, including but not limited to new product development, marketing, operations, distribution, finance and customer service. Supply chain management (SCM) is the management of the interconnected network of channel and node businesses involved in providing the product and service packages required by the end customers of a supply chain. It spans the movement and storage of raw materials, work-in-process inventory and finished goods from point of origin to point of consumption. It has also been defined as the "design, planning, execution, control, and monitoring of supply chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand and measuring performance globally".
In recent years, sustainable and fully renewable energy resources have been used extensively in electrical energy generation. Solar energy conversion systems, in particular, are applied in stand-alone systems: solar panels convert solar radiation directly into electrical energy and are one of the most promising renewable energy technologies for powering buildings. In this study, the feasibility of a solar system installed in a college academic block and hostel is investigated. The system includes solar panels, a battery, a generator, a converter and the loads. We calculate the overall load of the academic block (electrical engineering department and round building) and the boys' hostel, then simulate these data with the HOMER tool to obtain the best configuration, which is presented in this paper.
The optimization gives an initial capital cost of $296,000 and an operating cost of $2,882/yr. The total net present cost (NPC) is $332,846 and the cost of energy (COE) is $0.212/kWh.
The main purpose of this paper is to determine how many solar panels and batteries are required so that the maximum energy demand of both the academic block and the hostel can be met from solar power.
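As a back-of-envelope check, the reported NPC is consistent with the capital and operating costs under an assumed 6% real discount rate and 25-year project lifetime; these are typical HOMER-style defaults and are not stated in the paper.

```python
# Hedged sanity check of the HOMER cost figures. The discount rate i and
# lifetime N below are assumptions, not values from the paper.
capital = 296_000        # $, initial capital cost (from the paper)
operating = 2_882        # $/yr, operating cost (from the paper)
i, N = 0.06, 25          # assumed real discount rate and project lifetime

pwf = (1 - (1 + i) ** -N) / i          # uniform-series present-worth factor
npc = capital + operating * pwf        # total net present cost
crf = 1 / pwf                          # capital-recovery factor (annualizing)
print(round(npc))                      # close to the paper's $332,846
```

Dividing the annualized cost (NPC * CRF) by the annual energy served would then reproduce HOMER's levelized cost of energy.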
To help corporations survive amid worldwide quality competition, the authors have focused on the strategic development of a Higher-Cycled Product Design CAE Model employing a Highly Reliable CAE Analysis Technology Component Model. Their efforts are part of principle-based research aimed at evolving product design and CAE development processes to ensure better quality assurance. To satisfy the requirements of developing and producing high-quality products while reducing costs and shortening development times, the effectiveness of the model was verified by successfully applying it to the technological problem of bolt loosening and to other product-design bottlenecks at auto manufacturers.
In agriculture the most important factors are the fertility of the soil, the nutrients available in it, the water availability in the area and the atmospheric conditions; all of these parameters play a major role in crop productivity. In this paper we examine techniques that show how to improve productivity with minimal use of natural resources such as water, and how to avoid leaching of the soil by delivering fertilizers through drip irrigation. The approach can be used in greenhouse or open environments to efficiently monitor soil moisture and temperature, ambient temperature and humidity. Wired communications, sensor networks and other complementary technologies provide the tools needed to compile and process physical variables including temperature, humidity, soil moisture, soil pH and fertilizer concentrations. Greenhouse and precision agriculture in general demand real-time, precise measurement of these parameters in order to avoid unnecessary exposure to unhealthy ambient conditions, assure maximum productivity and provide value-added quality. This paper aims to implement a basic application that automates the irrigated field by programming the components and building the necessary hardware around an ARM7 processor, which is used to determine the exact field conditions and maintain the required levels in the soil.
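A minimal sketch of the moisture-threshold logic such a controller loop might run. The thresholds and the hysteresis band are illustrative assumptions, not values from the paper, and the actual ARM7 firmware would read the sensor via an ADC rather than take a percentage directly.

```python
# Illustrative thresholds (assumptions): open the valve below the low mark,
# close it above the high mark, and hold state in between (hysteresis).
SOIL_MOISTURE_MIN = 30.0   # percent
SOIL_MOISTURE_MAX = 45.0   # percent

def irrigation_decision(moisture_pct, valve_open):
    """Hysteresis control: open the drip valve when the soil is dry,
    close it once the soil is wet enough, otherwise keep the state."""
    if moisture_pct < SOIL_MOISTURE_MIN:
        return True
    if moisture_pct > SOIL_MOISTURE_MAX:
        return False
    return valve_open

print(irrigation_decision(25.0, False))  # True  -> start irrigating
print(irrigation_decision(38.0, True))   # True  -> keep irrigating
print(irrigation_decision(50.0, True))   # False -> stop irrigating
```

The hysteresis band prevents the valve from chattering on and off when the moisture reading hovers near a single threshold.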
This paper surveys the numerous techniques that have been proposed over the years for metamaterial characterization. These techniques are categorized into analytical, field-averaging and experimental methods, which provide various ways to determine the complex permittivity, complex permeability and refractive index of metamaterials.
Space-time adaptive processing (STAP) is a signal-processing technique most commonly used in radar systems where interference is a problem. The radar signal processor removes unintentional clutter caused by ground reflections and echoes from sea, desert, forest, etc., as well as intentional jamming, to make the received signal useful. In this paper a new approach to STAP based on subspace projection is described in detail. From linear algebra, if we project the received data onto a subspace spanned by linearly independent vectors, any component orthogonal to that subspace is suppressed; in the subspace-based technique, the received data is therefore projected onto a subspace orthogonal to the clutter subspace, removing the clutter. The probability of target detection can be determined in order to analyse the performance of the proposed algorithm, and two existing algorithms, SMI and DPCA, are chosen for comparison. When plotting detection probability against SINR, the results obtained are better for the subspace technique than for DPCA and SMI: the SINR improves for the subspace-based technique at the same detection probability. The effect of subspace rank on SINR was also analysed to understand the computational load of the technique, and the convergence of the algorithm was studied through plots of SINR against range snapshots.
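The orthogonal-projection step described above can be sketched in a few lines: build an orthonormal basis for a (here assumed known) low-rank clutter subspace and project the received snapshot onto its orthogonal complement. In a real STAP chain the clutter subspace would be estimated from training snapshots, which this sketch does not model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clutter lives in a rank-4 subspace spanned by the columns of C.
C = rng.normal(size=(64, 4)) + 1j * rng.normal(size=(64, 4))
U, _ = np.linalg.qr(C)                     # orthonormal basis of clutter span
P = np.eye(64) - U @ U.conj().T            # projector onto the orthogonal complement

target = rng.normal(size=64) + 1j * rng.normal(size=64)
clutter = C @ (rng.normal(size=4) + 1j * rng.normal(size=4))
x = target + 10 * clutter                  # received snapshot, clutter-dominated

y = P @ x                                  # clutter component suppressed
print(np.linalg.norm(P @ clutter) < 1e-8 * np.linalg.norm(clutter))  # True
```

The projector annihilates anything in the clutter span while leaving the component of the target outside that span intact, which is the mechanism behind the SINR gain reported above.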
Suspended nanoparticles in conventional fluids, called nanofluids, have been the subject of intensive study worldwide since pioneering researchers discovered the anomalous thermal behavior of these fluids. Heat transfer from small areas is achieved through microchannels; maximum heat transfer is obtained in microchannels at the cost of maximum pressure drop across them. In this work, experimental and numerical investigations of the improved heat-transfer characteristics of a serpentine-shaped microchannel heat sink using an Al2O3/water nanofluid are carried out, and the fluid-flow characteristics of the serpentine microchannel are also analyzed. The experimental heat-transfer results with the Al2O3 nanofluid are compared with the numerical values. The calculations in this work suggest that the best heat-transfer enhancement can be obtained by a system with an Al2O3-water nanofluid-cooled microchannel with serpentine-shaped fluid flow.
An Application Programming Interface restricts the types of queries that a Web service can answer. For instance, a Web service might provide a method that quickly returns the books of a given author, but no method that returns the authors of a given book. If the user asks for the author of a specific book, the Web service cannot be called, even though the underlying database might contain the desired piece of information; this scenario is called asymmetry. The asymmetry is particularly problematic if the service is used in a Web service orchestration system. In this survey, we propose to use on-the-fly information extraction (IE): IE is used to collect values, which can then be used as parameter bindings for the Web service. The survey shows how information extraction can be integrated into a Web service orchestration system. The proposed approach is fully implemented in a prototype called Search Using Services and Information Extraction (SUSIE). Real-life data and services are used to demonstrate the practical viability and good performance of the approach.
Due to the increasing demand for energy, the rising price of petroleum fuels, the depletion of petroleum reserves, and the environmental pollution caused by their emissions, it is necessary to find alternative fuels. This work focuses on hybrid blends of karanja and cottonseed-oil biodiesels. Blends of 20% and 25% were used, and performance and emission tests were conducted on a single-cylinder, four-stroke, water-cooled CI engine running at 1500 rpm, a compression ratio of 16.5:1 and an injection pressure of 205 bar. Performance parameters such as BP, BSFC and BTE, and emissions of CO, HC and NOx, were compared. It was found that the blends gave comparatively good results in terms of both performance and emissions.
Inpainting scheme for text in video a surveyeSAT Journals
Abstract
Text data present in video sequences provide useful information of paramount importance. However, not all of the text present in a video is necessary, since it may hide important portions of the video, so there must be a way to erase such unwanted text. This can be done in two phases: first, text components are detected in each frame of the video; then the detected text components are removed from the video sequence, and the occluded parts are restored using an inpainting method. Each phase is a broad topic of image processing, video text detection and inpainting being the two most important in this scheme. The text detection phase consists of text localization, text segmentation and recognition stages, while the inpainting method restores the occluded regions produced by the removal of the text.
Keywords—Optical Character Recognition (OCR), Stroke Width Transform, Text Detection, Connected Components
DEVELOPMENT OF AN ALPHABETIC CHARACTER RECOGNITION SYSTEM USING MATLAB FOR BA...Mohammad Liton Hossain
Character recognition, which associates a symbolic identity with the image of a character, is an important area of pattern recognition and image processing. The principal idea is to convert raw images (scanned from documents, typed, photographed, etc.) into editable text such as html, doc, txt or other formats. Very few Bangla character recognition systems exist, and those available cannot recognize the whole alphabet set. Motivated by this, this paper demonstrates a MATLAB-based character recognition system for printed Bangla writing, which can also compare the characters of one image file with those of another. The processing steps involved are binarization, noise removal, segmentation at various levels, feature extraction and recognition.
OCR for Gujarati Numeral using Neural Networkijsrd.com
This paper describes an optical character recognition (OCR) system for handwritten Gujarati numerals. A great deal of work exists for Indian languages like Hindi, Tamil, Bengali, Malayalam and Gurmukhi, but Gujarati is a language for which hardly any work is traceable, especially for handwritten characters. In this work a neural network is presented for handwritten Gujarati numeral recognition: a multilayered feed-forward neural network is suggested for classification, and the features of the numerals are extracted from four different profiles of each digit. Thinning and skew correction are also performed as preprocessing of the handwritten numerals before classification. The work achieved approximately 81% recognition accuracy for Gujarati handwritten numerals.
An effective approach to offline arabic handwriting recognitionijaia
Segmentation is the most challenging part of Arabic handwriting recognition, due to the unique characteristics of Arabic writing that allow the same shape to denote different characters. In this paper, an off-line Arabic handwriting recognition system is proposed. The processing is presented in three main stages: first, the image is skeletonized to a one-pixel-thin skeleton; second, each diagonally connected foreground pixel is moved to the closest horizontal or vertical line; finally, the resulting orthogonal lines are coded as vectors of unique integer numbers, each vector representing one letter of the word. To evaluate the proposed techniques, the system was tested on the IFN/ENIT database, and the experimental results show that the method is superior to those currently available.
Image compression using negative formateSAT Journals
Abstract This project deals with the compression of digital images using conversion of the original image to its negative format. A colored image can be large, whereas the image can be converted into negative form and compressed by applying a compression algorithm to it. Image compression can improve the performance of digital systems by reducing the time and cost of storing and transmitting images without significant reduction in quality; the project also aims to provide a tool for compressing a folder and for selective image compression. Keywords: Image Processing, Pixels, Image Negatives, Colors, Color Models.
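The negative conversion itself is the point operation s = (L - 1) - r on each pixel intensity r with L gray levels. A minimal sketch on a grayscale image (the compression stage applied afterwards is not modeled here):

```python
def to_negative(image, levels=256):
    """Point transform s = (L - 1) - r applied to every pixel."""
    return [[(levels - 1) - px for px in row] for row in image]

# A tiny 2x3 grayscale image with 256 gray levels.
img = [[0, 64, 128],
       [192, 255, 10]]
neg = to_negative(img)
print(neg)                        # [[255, 191, 127], [63, 0, 245]]
print(to_negative(neg) == img)    # True: the transform is its own inverse
```

Because the transform is an involution, the original image is recovered exactly by applying it again after decompression.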
Handwritten character recognition is one of the most challenging and ongoing areas of research in the field of pattern recognition. HCR research is mature for foreign languages like Chinese and Japanese, but the problem is much more complex for Indian languages, and more complicated still for South Indian languages because of their large character sets and the presence of vowel modifiers and compound characters. This paper provides an overview of important contributions and advances in offline as well as online handwritten character recognition of Malayalam script.
Feature Extraction and Feature Selection using Textual Analysisvivatechijri
After pre-processing the images in character recognition systems, the images are segmented based on certain characteristics known as "features". The feature space identified for character recognition, however, ranges across a huge dimensionality. To solve this dimensionality problem, feature selection and feature extraction methods are used. In this paper we discuss the different techniques for feature extraction and feature selection, and how these techniques reduce the dimensionality of the feature space to improve the performance of text categorization.
A comparative study on content based image retrieval methodsIJLT EMAS
Content-based image retrieval (CBIR) is a method of finding images in a huge image database according to users' interests; content-based here means that the search involves analyzing the actual content present in the image. As image databases grow day by day, researchers are searching for better techniques to retrieve images while maintaining good efficiency. This paper presents the visual features and various approaches for image retrieval from a huge image database.
Recent joint-surgery studies reveal increased revisions and resurfacing of metal-on-metal hip joints. Metal-on-metal hip implants were developed more than thirty years ago, and their application has been refined thanks to the availability of advanced manufacturing techniques and advancements in materials science and engineering. The development of composite materials may provide greater durability to metal-on-metal hip implants. This review article surveys the latest literature on metal-on-metal hip implants and their various modeling techniques. A number of methods are used for convergence and numerical solution to investigate the performance of metal-on-metal hip implants and obtain accurate, stable solutions. The paper presents the analyses performed by various researchers on metal-on-metal hip implants for wear, lubrication, fatigue, bio-tribo-corrosion, design, toxicity and resurfacing. In vivo and in vitro studies show that all these methods have limitations; more insight is needed into lubrication analysis, bearing geometry, materials and input parameters. The information provided in this work is intended as an aid in the assessment of metal-on-metal hip joints.
Background: Hospitals commit significant tangible and intangible resources to an agreed plan through the scheduling of surgery on the OT list. Postponement decreases efficiency by reducing throughput, leading to wastage of resources and hence a burden on the nation. Patients and their families face economic and emotional consequences due to postponement. Since the postponement rate is a quality indicator, a check mechanism could be developed from the results. Postponement of elective scheduled operations results in inefficient use of operating room (OR) time on the day of surgery, and also causes inconvenience to patients and families. Moreover, day-of-surgery (DOS) postponement creates a logistic and financial burden associated with extended hospital stay and repetition of pre-operative preparations, to the extent of repeating investigations in some cases, causing escalated costs, wastage of time and reduced income. Methodology: A cross-sectional study was done in the operation theaters of a tertiary care hospital. Data on scheduled, performed and postponed surgeries was collected from all ten General Surgery operation theaters from March 1st to September 30th, 2018. A questionnaire was developed to find out the reasons for postponement from all hospital stakeholders (surgeons, anesthetists, nursing officers), and the data were further evaluated by time series analysis of operation theater scheduling using the moving average technique. Results: A total of 958 surgeries were scheduled, 772 surgeries were performed and 186 surgeries were postponed, a postponement rate of 19.42%, in the cardiac surgery department during the study period. Exponential smoothing of the month-wise postponement rate time series shows the dynamics of the operating suites. To test throughput, the postponement rate was plotted against the postponed surgeries, and regression analysis shows a perfect linear relationship.
Introduction: Postponement of elective scheduled operations results in inefficient use of operating room (OR) time on the day of surgery, and also causes inconvenience to patients and families. Moreover, day-of-surgery (DOS) postponement creates a logistic and financial burden associated with extended hospital stay and repetition of pre-operative preparations, to the extent of repeating investigations in some cases, causing escalated costs, wastage of time and reduced income. Methodology: A cross-sectional study was done in the operation theaters of a tertiary care hospital. Data on scheduled, performed and postponed surgeries was collected from all ten General Surgery operation theaters from March 1st to September 30th, 2018. A questionnaire was developed to find out the reasons for postponement from all hospital stakeholders (surgeons, anesthetists, nursing officers), and the data were further evaluated by time series analysis of operation theater scheduling using the moving average technique. Results: A total of 2,466 surgeries were scheduled, 1,980 surgeries were performed and 486 surgeries were postponed in the general surgery department during the study period. The month-wise postponement forecast was in accordance with the performed surgeries, and on regression analysis the postponed surgeries were in a perfect linear relationship with the postponement rate.
The present paper notes that the experimental study of
nanotechnology involves high costs for lab set-up, and the
experimentation processes are also slow. An attempt has also
been made to discuss the contributions towards societal
change in the present convergence of nano-systems and
information technologies. One cannot rely on experimental
nanotechnology alone; computer simulation and
modeling are therefore one of the foundations of computational
nanotechnology. Computer modeling and simulations are
also referred to as computational experimentation. The
accuracy of such computational-nanotechnology-based
experiments generally depends on the accuracy of the
intermolecular interactions, numerical models and
simulation schemes used. The essence of nanotechnology is
therefore size and control. Because of the diversity of
applications, the plural term is preferred by
some; nevertheless, they all share the common feature of control
at the nanometer scale, focusing on the observation
and study of phenomena at that scale. This paper gives
a brief study of computer-simulation techniques as well as
some experimental results.
Kesterite-type Cu2ZnSnS4 (CZTS) solar-cell absorber thin films have been prepared by Chemical Bath Deposition (CBD). UV-vis absorption spectra indicated that the band gap of the as-synthesized CZTS was about 1.68 eV, which is near the optimum value for photovoltaic solar conversion in a single-band-gap device. XRD confirmed polycrystalline CZTS thin films with the kesterite crystal structure. The average crystallite size of the CZTS is 27 nm.
Multilevel inverters play a crucial part in
high- and medium-voltage applications. Among the three
main multilevel inverters used, the capacitor-clamped multilevel
inverter (CCMLI) has an advantage with respect to voltage
redundancies. This work proposes a switching pattern to improve
the performance of the chosen H-bridge type CCMLI over the
conventional CCMLI. The PWM technique used in this work is
Phase Opposition Disposition PWM (PODPWM). The
performance of the proposed H-bridge type CCMLI is verified
through MATLAB-Simulink based simulation. It has been
observed that the THD is lower in the chosen CCMLI compared to
the conventional CCMLI.
In this paper, we introduce a practical mechanism for
compressing a binary phase code modulation (BPCM) signal
based on the 13-chip Barker code in the presence of additive
white Gaussian noise (AWGN), using a digital matched filter
(DMF) implementing a time-domain convolution of the
input and reference signals on a Cyclone II EP2C70F896C6
FPGA from ALTERA, placed on the DE2-70 education and development
board. The parameters are: BPCM signal frequency
fIF = 2 MHz, sampling frequency fSAM = 50 MHz,
pulse period T = 200 μs, pulse width τS = 13 μs,
chip width τCH = 1 μs, compression factor KCOM = 13,
SNRinp = 1/1, 1/2, 1/3, 1/4, 1/5, and processing
gain factor SNRout/SNRinp = 11.14 dB.
The results of the filter operation are evaluated using a
GDS-1052U digital oscilloscope to display the input and output signals
for different SNRinp.
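The compression principle behind such a digital matched filter can be sketched in a few lines: correlating the received samples with the time-reversed Barker-13 reference concentrates the pulse energy into a single peak 13 times the chip amplitude, which is where the compression factor KCOM = 13 and the processing gain of about 11.14 dB (10·log10 13) come from. The sketch below works on noise-free, chip-rate baseband samples and is only an illustration of the principle, not the FPGA implementation described in the paper:

```python
import numpy as np

# Barker code of length 13: the +1/-1 chip sequence used to phase-modulate the pulse
BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def matched_filter(rx, code=BARKER13):
    """Correlate received samples with the time-reversed code
    (time-domain convolution with the matched-filter impulse response)."""
    return np.convolve(rx, code[::-1], mode="full")

# A noise-free transmitted code compresses to a main lobe of height 13,
# with sidelobes of magnitude at most 1 (the defining property of Barker codes).
out = matched_filter(BARKER13)
peak = np.max(np.abs(out))
```

With noise added, the peak stands out above the noise floor by roughly the stated processing gain, which is why the oscilloscope traces for different SNRinp still show a clear compressed pulse.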
Flooding is one of the most devastating natural
disasters in Nigeria. The impact of flooding on human activities
cannot be overemphasized. It can threaten human lives, their
property, environment and the economy. Different techniques
exist to manage and analyze the impact of flooding. Some of these
techniques have not been effective in management of flood
disaster. Remote sensing technique presents itself as an effective
and efficient means of managing flood disaster. In this study,
SPOT-10 image was used to perform land cover/land use
classification of the study area. An Advanced Spaceborne Thermal
Emission and Reflection Radiometer (ASTER) image of 2010 was
used to generate the Digital Elevation Model (DEM). The image
focal statistics were generated using the Spatial Analyst/
Neighborhood/Focal Statistics Tool in ArcMap. The contour map
was produced using the Spatial Analyst/ Surface/ Contour Tools.
The DEM generated from the focal statistics was reclassified into
different risk levels based on variation of elevation values. The
depression in the DEM was filled and used to create the flow
direction map. The flow accumulation map was produced using
the flow direction data as input image. The stream network and
watershed were equally generated and the stream vectorized. The
reclassified DEM, stream network and vectorized land cover
classes were integrated and used to analyze the impact of flood on
the classes. The result shows that 27.86% of the area studied will
be affected at very high risk flood level, 35.63% at high risk,
17.90% at moderate risk, 10.72% at low risk, and 7.89% at no
risk flood level. Built up area class will be mostly affected at very
high risk flood level while farmland will be affected at high risk
flood level. Oshoro, Imhekpeme, and Weppa communities will be
affected at very high risk flood inundation while Ivighe, Uneme,
Igoide and Iviari communities will be at risk at high risk flood
inundation level. It is recommended among others that buildings
that fall within the “Very High Risk” area should be identified
and occupants possibly relocated to other areas such as the “No
Risk” area.
Without water, humans cannot live. Since time began,
we have lived by the water and vast tracts of waterless land have
been abandoned as it is too difficult to inhabit. At any given
moment, the earth’s atmosphere contains 4,000 cubic miles of
water, which is just 0.000012% of the 344 million cubic miles of
water on earth. Nature maintains this ratio via evaporation and
condensation, irrespective of the activities of man.
There is a certain need for an alternative to solve the water
scarcity. Obtaining water from the atmosphere is nothing new -
since the beginning of time, nature’s continuous hydrologic cycle
of evaporation and condensation in the form of rain or snow has
been the sole source and means of regenerating wholesome water
for all forms of life on earth.
An effective method to generate water is by the separation of
moisture present in air by condensation. In this study, the water
present in air is condensed on the surface of a container and then
collected in an external jacket provided on the container.
Insulations are provided to optimize the inner temperature of the
container.
Although the method is uncommon, it has certain advantages
that make it a success. The process is economical, does not
require many utilities, and also helps in further reducing the
carbon footprint.
At every moment of operation, a Li-Ion
battery must provide the power required by the user, have a
long operating life and provide high reliability in
operation. Methods for analyzing and testing batteries
ensure that all the conditions imposed on the batteries are
met, by testing them according to their intended use.
The success rate of real estate projects is
decreasing as the scale of projects and the number of participating
entities grow. It is necessary to study the risk factors involved in a
project. This paper focuses on the types of risks involved in a
project, risk factors, and risk management tools and techniques.
Identification of project risk in terms of the total cost of the
project has been divided under Technical, Financial, Socio-political
and Statutory cost centers. Large real estate projects
have to tackle the following issues: land acquisition, skilled-labour
shortage, non-availability of skilled project managers, and
mechanization of the construction process to cater to growing
demands. Other issues include non-availability of supporting
infrastructure; political issues such as instability of the government
leading to regulatory issues; and social issues. Marketing forms an
important part of these projects, as this is a one-time investment,
the purchase cycle is long, and the long development period places
the same project at different points in the real estate value cycle.
In the present scenario, carbon emission and sand
mining are major concerns due to their hazardous effects on the
environment and the serious imbalance they cause to the ecosystem.
Various studies have been conducted on reducing the severe effect on
the environment by using byproducts such as copper slag as a partial
replacement for fine aggregate. Different researchers have also
revealed numerous uses of copper slag as a replacing agent in
determining the strength of concrete. A comprehensive review of
studies on the scope of replacing fine aggregate with copper slag
in concrete is presented in this paper.
Security is a concept similar to being cautious
or alert against any danger. Network security is the condition of
being protected against any danger or loss. Safety thus plays an
important role in bank transactions, where disclosure of any data
results in big losses. We can define networking as the combination
of two or more computers for the purpose of resource sharing.
Resources here include files, databases, emails etc. It is the
protection of these resources from unauthorized users that
brought about the development of network security. It is a measure
incorporated to protect data during transmission and to
ensure that the transmitted data is protected and authentic.
The security of online bank transactions here has been
improved by increasing the number of bits while establishing the
SSL connection as well as in RSA asymmetric key encryption,
along with SHA-1 used for a digital signature to authenticate the
user.
Background: Septoplasty is a common surgical
procedure performed by otolaryngologists for the correction of
deviated nasal septum. This surgery may be associated with
numerous complications. To minimize these complications,
otolaryngologists frequently pack both nasal cavities with
different types of nasal packing. Despite all its advantages,
nasal packing is also associated with some disadvantages. To
avoid these issues, many surgeons use suturing techniques to
obviate the need for packing after surgery.
Objective: To determine the efficacy and safety of trans-septal
suture technique in preventing complications and decreasing
morbidity after septoplasty in comparison with nasal packing.
Patients and methods: Prospective comparative study. This
study was conducted in the department of Otolaryngology -
Head and Neck Surgery, Rizgary Teaching Hospital - Erbil,
from the 6th of May 2014 to the 30th of November 2014.
A total of 60 patients aged 18-45 years, undergoing septoplasty,
were included in the study. Before surgery, patients were
randomly divided into two equal groups. Group (A), with the
trans-septal suture technique, was compared with group (B), in which
nasal packing with Merocel was done. Postoperative morbidity
in terms of pain, bleeding, postnasal drip, sleep disturbance,
dysphagia, headache and epiphora along with postoperative
complications including septal hematoma, septal perforation,
crustation and synechiae formation were assessed over a follow
up period of four weeks.
Results: Out of 60 patients, 37 patients were males (61.7%)
and 23 patients were females (38.3%). Patients with nasal
packing had significantly more postoperative pain (P<0.05).
There was no significant difference between
the two groups with respect to nasal bleeding, septal
hematoma, septal perforation, crustation and synechiae
formation.
Conclusion: Septoplasty can be safely performed using the
trans-septal suturing technique without nasal packing.
The basic reason behind the need to
monitor water quality is to verify whether the examined
water quality is suitable for intended usage or not. This
study is conducted on Al -Shamiya al- sharqi drain in
Diwaniya city in Iraq to make valid assessment for the
level of parameters measured and to realize their effects
on irrigation. In order to assess the drainage water
quality for irrigation purposes with a high accuracy, the
Irrigation Water Quality Index (IWQI) will be examined
and upgraded (integrated with GIS) to make a
classification for drainage water. For this purpose, ten
samples of drainage water were taken from ten different
locations in the study area. The collected samples were
analyzed chemically for the different elements which affect
water quality for irrigation. These are:
Calcium (Ca+2), Sodium (Na+), Magnesium (Mg+2),
Chloride (Cl-), Potassium (K+), Bicarbonate (HCO3-),
Nitrate (NO3-), Sulfate (SO4-2), Phosphate (PO4-3), Electrical
Conductivity (EC), Total Dissolved Solids (TDS), Total
Suspended Solids (TSS) and pH. The Sodium
Adsorption Ratio (SAR) and Sodium Content (Na%)
have also been calculated. The results suggest that the use of
GIS and Water Quality Index (WQI) methods could
provide an extremely interesting and efficient tool
for water resource management. The analysis of the
(IWQI) maps confirms that 52% of the drainage water
in the study area falls within the "Low restriction" (LR)
class and 47% within "Moderate restriction" (MR),
while 1% of the drainage water in the
study area is classified as "Severe restriction" (SR). The
drainage water should therefore be used with soil having high
permeability, with some constraints imposed on the types of
plants for a specified salt tolerance.
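For reference, the Sodium Adsorption Ratio and sodium percentage mentioned above have standard textbook definitions (all ion concentrations in meq/L); the snippet below is a minimal sketch of those standard formulas, not necessarily the study's exact computation:

```python
import math

def sar(na, ca, mg):
    """Sodium Adsorption Ratio: SAR = Na / sqrt((Ca + Mg) / 2),
    with all concentrations expressed in meq/L."""
    return na / math.sqrt((ca + mg) / 2.0)

def sodium_percent(na, k, ca, mg):
    """Sodium content (Na%): share of Na + K among the major cations,
    concentrations in meq/L."""
    return 100.0 * (na + k) / (ca + mg + na + k)
```

For example, a sample with Na = 10, Ca = 4 and Mg = 4 meq/L has SAR = 10 / sqrt(4) = 5, a moderate sodium hazard for most soils.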
The cable-hoisting method and rail cable-lifting
method are widely used in the construction of suspension bridges.
Taking a suspension bridge in Hunan as an example, this paper
expounds the two construction methods and analyzes their
respective merits and disadvantages.
The Baylis-Hillman reaction has been achieved on
different organic motifs, but with completion times of three to
six days. A micellar medium of CTAB in water, along with the
organic base DABCO, has been used to effect the Baylis-Hillman
reaction on the steroidal nucleus of Withaferin-A for the
first time with different aromatic aldehydes within a day,
synthesizing a library of BH adducts (W1a-W14a) and (W1b-W14b)
as mixtures of two isomers and W15 as a single
compound. The isomers were separated on a column and the
major components were chosen for bio-evaluation. The cytotoxic
activity of the synthesized compounds was screened against a
panel of four cancer cell lines, Lung A-549, Breast MCF-7,
Colon HCT-116 and Leukemia THP-1, with 5-fluorouracil
and Mitomycin-C as references. All the compounds exhibited
promising activity against the screened cell lines and were found to
possess enhanced activity compared with the parent compound. BH adducts
with aromatic systems bearing methoxy and nitro groups were
found to be more active.
This paper presents the details on the
experimental investigation carried out to get the desired fresh
properties of the SCC. Tests were performed on various mixtures
to obtain the required SCC. In the present research work,
15% of cement was replaced with Class F fly ash. The mortar mix
was prepared by varying the quantity of water and sand; later,
varying percentages of coarse aggregate were added to the mortar
to obtain the desired SCC.
The batteries used in electric and hybrid vehicles
consist of several cells with voltages between 3.6 V and
4.2 V, in series or parallel configurations, to
obtain the voltages necessary for the operation of a
hybrid electric vehicle. Because the malfunction of a single cell affects
the behavior of the entire battery pack, the main function of the BMS is to
protect individual cells against over-discharge, overload or
overheating. This is done by correct balancing of the cells. In
addition, the BMS estimates the battery's state of charge.
This project aims at using the phase disposition
multi-carrier pulse width modulation (PD-MCPWM) technique to
reduce leakage current in a transformerless cascaded multilevel
inverter for PV systems. The advantages of the transformerless PV
inverter topology are a simple structure, low weight and
higher efficiency; however, this topology provides a
path for leakage current to flow through the parasitic
capacitance formed between the PV module and the ground.
The modulation technique reduces leakage current
without adding any extra components.
International Journal of Technical Research and Applications e-ISSN: 2320-8163,
www.ijtra.com Volume 3, Issue 1 (Jan-Feb 2015), PP. 128-131
RECOGNITION AND CONVERSION OF
HANDWRITTEN MODI CHARACTERS
Prof. Mrs. Snehal R. Rathi1, Rohini H. Jadhav2, Rushikesh A. Ambildhok3
VIIT college,
University of Pune, India
Snehal_rathi@rediffmail.com1, jadhavrohini63@gmail.com2, rushiamby@gmail.com3
ABSTRACT- Technical studies have been performed on
many foreign languages like Japanese and Chinese, but
efforts on ancient Indian scripts are still immature. As the Modi
script is ancient and cursive, OCR for it is still
not widely available. To the best of our knowledge, Prof. D. N. Besekar,
Dept. of Computer Science, Shri Shivaji College of Science,
Akola, has proposed a system for recognition of offline
handwritten Modi script vowels. The challenges of
recognizing handwritten Modi characters are very high due
to the varying writing style of each individual. Many vital
documents with precious information have been written in
Modi, and currently these documents are stored and
preserved in temples and museums. Over a period of time these
documents will wither away if not given due attention. In this
paper we propose a system for recognition of handwritten
Modi script characters; the proposed method uses image
processing techniques and algorithms, which are described
below.
General Terms
Preprocessing techniques: Gray scaling, Thresholding,
Boundary detection, Thinning, cropping, scaling, Template
generation. Other algorithms used: average method, Otsu
method, Stentiford method, template-based matching method.
Keywords- MODI Script, handwritten character recognition
(HCR), Image processing.
I. INTRODUCTION
Handwritten character recognition has been a popular
field of research, but it is still an open problem. The
challenging nature of handwritten character recognition has
attracted the attention of researchers from both industry and
academia. The recognition task for Modi script is very
difficult because handwritten Modi characters are
naturally cursive and unconstrained. There is
high similarity between characters, and many samples are
distorted or broken; hence extreme variation is observed between
the collected character samples. The proposed work is an
attempt at handwritten Modi character recognition and
conversion into the corresponding English characters.
Section II describes the Modi script. In section III the
recognition model is discussed. Section IV covers the
preprocessing steps taken. Section V discusses
feature extraction methods. The classification method is
explained in section VI. Results and discussion are covered in
section VII. Future work is discussed in section VIII and the
conclusion in section IX.
II. MODI
Modi is one of the scripts used to write the Marathi
language, the primary language spoken in the state
of Maharashtra in western India. There are several theories
about the origin of this script. One of them claims that Modi
was developed in the 12th century by 'Hemandpant' (or
'Hemadri'), a well-known administrator in the kingdom of
'Mahadev Yadav' and 'Ramdev Yadav' ('Raja Ramdevrai',
the last king of the 'Yadav empire', 1187-1318, at 'Devgiri'). Dr.
Rajwade and Dr. Bhandarkar believe that Hemandpant
brought the Modi script from Sri Lanka, but according to
Chandorkar, the Modi script evolved from the Mouryi
(Brahmi) script of the Ashoka period.
The Modi alphabet was invented during the 17th
century to write the Marathi language of Maharashtra. It is a
variant of the Devanāgarī alphabet. The Modi alphabet was
used until 1950 when it was replaced by the Devanāgarī
alphabet. Modi alphabets are classified into vowels,
consonants and numerals.
A notable feature is that each letter has an inherent
vowel (a). Other vowels are indicated using a variety of
diacritics which appear above, below, in front of or after the
main letter. Some vowels are indicated by modifying the
consonant letter itself.
Fig. 1: Modi Vowels and diacritics
Fig. 2: Modi Consonants
Fig. 3: Modi Numbers
III. RECOGNITION MODEL
The character recognition process includes some vital
sub-steps: preprocessing, feature extraction, and post-processing.
The block diagram of a typical character
recognition system is shown in Fig. 4. The preprocessing steps are
described in section IV.
Fig.4: Block Diagram
IV. PREPROCESSING
Preprocessing is a very important step in any optical or
handwritten character recognition system. As
preliminary work we collected papers containing Modi
characters written by different people, without restricting
the variations in ink or pen; they contain weak, broken and
distorted characters as well. We scanned these pages
at 200, 300 or 600 DPI and stored them in JPG, BMP or TIF
format. On these scanned input images we perform the
preprocessing methods shown in Fig. 4.
First, by applying the average method, we
convert a scanned input image into a grayscale image, i.e. a
monochrome image made up of the single color
gray. Subsequently, we convert the grayscale
image into a binary image by applying a thresholding
algorithm. After thresholding, we find the
boundaries of the character and crop it if
necessary. After cropping we remove the noise from
the image using a median filter. After noise removal, we
apply the Stentiford thinning algorithm to thin the input
image. After the thinning process we can scale the image to
fit a properly sized template. Finally, we train this
template to generate a trained template. All these steps come
under preprocessing. The results of some preprocessing
steps are shown in Figs. 5 and 6.
Fig. 5: Original Image
Fig. 6: Binarized Image
A. Gray-scale image
An image is a 2D array or matrix of pixels. A pixel is the
smallest element of an image on a display screen; pixels can
be imagined as a continuous series of square boxes placed
on the screen. Pixels are stored as integers of size 8, 24 or
32 bits. 24-bit pixels, consisting of a
combination of three colours, viz. red, green and blue, are most
commonly used to depict an image. Many image
processing operations work on one plane of image data (e.g. a
single colour channel) at a time.
So if you have an RGB image, you may need to apply the
operation on each of the three image planes and then
combine the results. Grayscale images contain only one
image plane, containing the grayscale intensity values. If you
convert an RGB image to grayscale, you only need
to process a third of the data compared to the coloured image.
This data reduction saves a reasonable amount of time.
Methods like lightness, luminosity and average are used for
converting an image into a grayscale image. The average
method is the most commonly used, where we
compute the average by adding the three colour components
of the pixel and dividing by 3: Gs = (r + g + b) / 3.
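The average method is straightforward to express in code. A minimal sketch, assuming the image is held as an H x W x 3 uint8 NumPy array (this is an illustration, not the paper's implementation):

```python
import numpy as np

def to_grayscale_average(rgb):
    """Average method: Gs = (R + G + B) / 3, applied per pixel.
    `rgb` is an H x W x 3 array of 8-bit channel values; the result
    is a single H x W plane of 8-bit gray intensities."""
    return (rgb.astype(np.float64).sum(axis=2) / 3.0).astype(np.uint8)
```

Summing in float before dividing avoids the 8-bit overflow that would occur if the three channels were added directly as uint8 values.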
B. Thresholding
Thresholding is one of the simplest methods of image
segmentation. It is generally performed on a
gray image to generate a binary image, that is, an image
with black and white colours only. Thresholding is
commonly used to extract essential features; feature
extraction is basically a matter of differentiating between
the foreground and the background. The required features of
an image are converted to black and everything else to white,
or the other way around. In thresholding, we compute the
binary pixel value of the output based on a previously defined
threshold value. Thresholding is mostly applied to grayscale
images, though it can be applied directly to a coloured
image as well; but, as mentioned above, this increases the
execution time of the algorithm and slows down the process
considerably.
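As a sketch of the idea (the paper names the Otsu method among its algorithms), the following assumes an 8-bit grayscale NumPy array: a fixed-threshold binarization plus a straightforward implementation of Otsu's criterion, which picks the threshold that maximizes the between-class variance of the histogram. This is an illustrative version, not the authors' code:

```python
import numpy as np

def binarize(gray, threshold):
    """Map pixels at or below the threshold to black (0), the rest to white (255)."""
    return np.where(gray <= threshold, 0, 255).astype(np.uint8)

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    cum_p = np.cumsum(prob)                       # class-0 probability up to t
    cum_mean = np.cumsum(prob * np.arange(256))   # cumulative intensity mean
    global_mean = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 <= 0 or w1 <= 0:
            continue                              # one class empty: skip
        mu0 = cum_mean[t] / w0
        mu1 = (global_mean - cum_mean[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a scanned character with dark ink on a light page, the histogram is roughly bimodal, and Otsu's threshold falls in the valley between the two modes, separating ink from background automatically.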
C. Boundary detection and Cropping
Boundary detection, commonly known as edge
detection, is a method of identifying points in an image at
which the image brightness changes sharply.
The points at which image brightness changes suddenly or
abruptly are organized into a set, termed edges.
Horizontal and vertical scanners are incorporated which
detect the edges from all sides of the image. Carrying
out edge detection on an image may substantially lessen the
quantity of data to be processed, flushing out information
that may be regarded as immaterial while keeping the vital
structural properties of the image preserved. The succeeding
task of interpreting the information content in the primary
image may therefore be simplified significantly.
International Journal of Technical Research and Applications e-ISSN: 2320-8163, www.ijtra.com, Volume 3, Issue 1 (Jan-Feb 2015), PP. 128-131
Cropping is the simplest of photo manipulation
processes, performed with the aim of deleting
unnecessary data or irrelevant detail from a picture,
changing its aspect ratio, or improving its overall
composition. Here, cropping is used to remove the non-essential
detail from the image, which in turn helps reduce
the processing time.
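The horizontal and vertical scans described above amount to finding the tight bounding box of the ink pixels, after which cropping is a simple slice. A minimal sketch (assuming a binary image with black ink = 0 on white = 255; function names are illustrative):

```python
def bounding_box(binary):
    """Scan rows and columns for ink pixels (0) to find the tight box."""
    rows = [i for i, row in enumerate(binary) if 0 in row]
    cols = [j for j in range(len(binary[0]))
            if any(row[j] == 0 for row in binary)]
    return rows[0], rows[-1], cols[0], cols[-1]  # top, bottom, left, right

def crop(binary, top, bottom, left, right):
    """Keep only the region inside the detected boundary."""
    return [row[left:right + 1] for row in binary[top:bottom + 1]]

img = [[255, 255, 255, 255],
       [255, 0, 0, 255],
       [255, 255, 0, 255],
       [255, 255, 255, 255]]
print(bounding_box(img))  # (1, 2, 1, 2)
```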
D. Thinning
Thinning is a morphological operation used to delete
marked pixels from binary images. It is used in particular
for skeletonising an image, and it is commonly applied to
clean up the output of edge detectors by reducing all lines
to single-pixel thickness. Thinning is applied only to a
binary image, and the resulting image is also binary.
Figure: an image before and after thinning.
The Stentiford algorithm for thinning is described below:
1) Find a pixel location (x, y) where the pixels in the
image match those in template T(A). Using this
template, all pixels along the top of the image are
deleted, moving from left to right and from top to
bottom of the image.
2) If the central pixel is not a terminating pixel and has
connectivity number (CN) = 1, mark this pixel for
removal.
3) Terminating pixel: a pixel is a terminating pixel if it is
connected to only one other pixel; that is, if a black
pixel has only one black neighbour among its 8
possible neighbours, it can be marked as a terminating
pixel. Repeat steps 1 and 2 for all pixel locations
matching T(A).
4) Repeat steps 1 to 3 for the remaining templates: T(B),
T(C), and T(D).
5) T(B) matches pixels on the left side of the image,
shifting position from bottom to top and from left to
right. T(C) selects pixels along the bottom of the
object, shifting from right to left and from bottom to
top. T(D) addresses pixels on the right side of the
image, shifting from top to bottom and from right to
left.
6) Set the pixels marked for removal to white or black, as
specified at the beginning.
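The endpoint and connectivity tests in steps 2 and 3 can be sketched as follows. Here CN is implemented as the common crossing-number formulation (counting white-to-black transitions around the 8-neighbourhood), which may differ in detail from Stentiford's exact definition:

```python
def connectivity_number(nb):
    """Count 0->1 (white->black) transitions walking the 8 neighbours
    clockwise; CN == 1 means deleting the centre pixel cannot split
    the stroke into separate pieces."""
    return sum(1 for i in range(8) if nb[i] == 0 and nb[(i + 1) % 8] == 1)

def is_endpoint(nb):
    """A terminating pixel has exactly one black neighbour."""
    return sum(nb) == 1

# Centre pixel on a straight horizontal stroke: neighbours clockwise
# from the top are N, NE, E, SE, S, SW, W, NW.
stroke = [0, 0, 1, 0, 0, 0, 1, 0]    # black only to the east and west
print(connectivity_number(stroke))   # 2 -> deleting it would break the line
```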
E. Scaling
Scaling is the process of resampling or resizing an
image to a predefined size that matches the template
against which the image is examined. In image processing,
bilinear interpolation is one of the most basic resizing
techniques.
Also known as bilinear filtering in texture mapping,
it can be used to produce a reasonably realistic
image. A weighted average of the attributes (colour,
alpha, etc.) of the 4 neighbouring pixels is calculated and
applied to the output pixel on the screen. This method is
applied repeatedly for every pixel of the image which is
currently being textured.
When an image is scaled up, every pixel of the original
image must be shifted in a particular direction according to
the scale constant. However, when up-scaling by a
non-integral scale value, some pixels (or holes) are not
assigned proper pixel values. In such a case, those holes
should be assigned appropriate RGB or gray-scale values so
that the output image does not contain unvalued pixels.
Bilinear interpolation can be used effectively where
exact image transformation with pixel matching is not
feasible, to compute and assign suitable intensity values
to pixels. Unlike other interpolation methods such as
bicubic interpolation and nearest-neighbour interpolation,
this method makes use of only the 4 closest pixel values,
located diagonally from a given pixel, in order to estimate
the appropriate colour intensities of that pixel.
Bilinear interpolation considers only the closest 2x2
neighbourhood of known pixel values surrounding the
unknown pixel's calculated position. It then takes a
weighted average of these 4 pixels to arrive at its final
interpolated value. The weight on each of the 4 pixel values
is based on its distance, in two-dimensional space, from the
calculated pixel position.
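The 2x2 weighted-average computation described above can be sketched as follows (a gray-scale-only sketch on nested lists; function names are illustrative):

```python
def bilinear_sample(img, x, y):
    """Sample img at fractional (x, y): weighted average of the
    surrounding 2x2 pixels, weights from the distance along each axis."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def resize(img, new_w, new_h):
    """Scale to a fixed template size by sampling the source at
    fractional coordinates, so no 'holes' are left unassigned."""
    h, w = len(img), len(img[0])
    return [[bilinear_sample(img,
                             j * (w - 1) / max(new_w - 1, 1),
                             i * (h - 1) / max(new_h - 1, 1))
             for j in range(new_w)] for i in range(new_h)]

img = [[0, 100], [100, 200]]
print(bilinear_sample(img, 0.5, 0.5))  # 100.0 -> average of all four pixels
```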
F. Template Matching
Template matching is a digital image processing
technique for finding small parts of an image that
correspond to a template image. Matching techniques
can be divided into the following classes:
(1) feature-based matching
(2) template-based matching.
If the template image under consideration has strong
features, a feature-based approach may be preferred. Since
this method does not consider the entire template image, it
can be cheaper to compute when working with
high-resolution pictures. The alternative, template-based
method may require searching a potentially large
number of points in order to find the best matching
location. The feature-based approach can further prove very
useful when the match in the search image may be
transformed in some fashion.
For templates with weak features, or for images
where the majority of the template image constitutes the
matching image, the template-based technique may be
preferred. As mentioned before, since template-based
matching may require sampling a massive number of
positions, it is possible to reduce the number of sampling
points by decreasing the resolution of the search images and
template images by the same factor and carrying out the
operation on the resulting downsized images, thus providing
a window of points inside the search image so that the
template need not be matched at every possible location; a
combination of the two approaches may also be used.
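A minimal sketch of template-based matching, using the sum of squared differences as the similarity score (an illustrative choice; correlation-based scores are equally common), could look like:

```python
def match_template(image, template):
    """Template-based matching: slide the template over every position
    and keep the one with the smallest sum of squared differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_pos, best_score = None, float("inf")
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = sum((image[y + i][x + j] - template[i][j]) ** 2
                        for i in range(th) for j in range(tw))
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

search = [[255, 255, 255, 255],
          [255, 0, 0, 255],
          [255, 0, 255, 255],
          [255, 255, 255, 255]]
patch = [[0, 0],
         [0, 255]]
print(match_template(search, patch))  # ((1, 1), 0) -> exact match found
```

Downscaling both `search` and `patch` by the same factor before calling this function shrinks the number of candidate positions quadratically, which is the speed-up the text describes.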
V. FUTURE WORK
Several important documents written in the Modi script
still lie neglected. These documents hold priceless data and
information, and they could be of great help if they are
successfully decoded. Modi OCR and handwriting
recognition is a challenging problem, and experts are
working hard to interpret these issues and devise potential
solutions. A large number of issues still remain to be solved,
and active research in this area is required to bring this work
to a useful level, at which point products using the solution
would become available to the common man.
VI. CONCLUSION
In this paper, we have described the various image
processing steps used to convert Modi characters into
English.
REFERENCES
[1] Sakal newspaper, 9 July 2014.
[2] D. N. Besekar, R. J. Ramteke, "Study for Theoretical Analysis of Handwritten MODI Script – A Recognition Perspective", International Journal of Computer Applications, vol. 64, no. 3, February 2013.
[3] Lawrence Lo, "MODI", ancientscripts.com: a compendium of world-wide writing systems from prehistory to today, www.ancientscripts.com/modi.html, accessed 28 March 2014.
[4] David Lalmalsawma, "India speaks 780 languages, 220 lost in last 50 years – survey", India Insights, Reuters, US edition, 7 September 2013, apresearch.org/india-speaks-780-languages-220-lost-in-last-50-years-survey-india-insight, accessed 28 March 2014.
[5] Rajesh Khillari, "History of MODI Script", 30 May 2008, http://modi-script.blogspot.in/2008/05/history-of-modiscript.html, accessed 28 May 2014.