OCR has been an active research area for the last few decades. OCR recognizes the text in a scanned document image and converts it into an editable form. The OCR process can have several stages, such as preprocessing, segmentation, recognition and post-processing. The preprocessing stage, which mainly deals with noise removal, is crucial for the success of OCR. In the present paper, a modified technique for noise removal named the “K-Algorithm” has been proposed, which has two stages: filtering and binarization. The proposed technique shows improved results in comparison to the median filtering technique.
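For reference, the median-filtering baseline that the K-Algorithm is compared against can be sketched as follows (an illustrative pure-Python version; the K-Algorithm itself is not specified here):

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2-D grayscale image (list of lists).
    Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 window values
    return out

# A salt-and-pepper speck (255) in an otherwise uniform region is removed:
noisy = [[10] * 5 for _ in range(5)]
noisy[2][2] = 255
clean = median_filter_3x3(noisy)
print(clean[2][2])  # -> 10
```

The median replaces each pixel by the middle value of its neighbourhood, which is why isolated impulse noise vanishes while edges are largely preserved.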
A Review on Geometrical Analysis in Character Recognition (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publication.
Fingerprint image enhancement is a key process in IAFIS systems. In order to reduce the false identification ratio and to supply good fingerprint images to IAFIS systems for exact identification, fingerprint images are generally enhanced. A filtering process tries to filter out the noise from the input image and emphasize the low, high and directional spatial frequency components of the image. This paper presents an experimental summary of enhancing fingerprint images using Gabor filters. The frequency, width and window domain filter ranges are fixed; the orientation angle alone is varied over 0, π/4, π/2 and 3π/4 radians. The experimental results show that the Gabor filter enhances the fingerprint image better than other filtering methods and extracts features.
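A minimal sketch of the even (cosine-phase) Gabor kernel used for such directional filtering, evaluated at the four orientations mentioned above (the wavelength and sigma values below are illustrative assumptions, not the paper's settings):

```python
import math

def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0, gamma=1.0):
    """Even (cosine-phase) Gabor kernel of odd `size`, oriented at `theta` radians."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)    # rotate into filter frame
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

# The four orientations from the summary above:
for theta in (0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4):
    k = gabor_kernel(7, theta)
    print(round(k[3][3], 3))  # centre value is exp(0)*cos(0) = 1.0 for every theta
```

Convolving a fingerprint image with kernels at several orientations emphasizes ridges aligned with each filter, which is the enhancement effect the abstract describes.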
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Handwritten Character Recognition: A Comprehensive Review on Geometrical Anal... (iosrjce)
Offline signatures matching using Haar wavelet subbands (TELKOMNIKA JOURNAL)
The complexity of multimedia contents is significantly increasing in the current world. This leads to an exigent demand for developing highly effective systems to satisfy human needs. To this day, the handwritten signature is considered an important means used in banks and businesses to evidence identity, so many works have tried to develop a method for recognition purposes. This paper introduces an efficient technique for offline signature recognition that depends on extracting local features by utilizing the Haar wavelet subbands and energy. Three different sets of features are utilized by partitioning the signature image into non-overlapping blocks, where different block sizes are used. The CEDAR signature database is used as the dataset for testing purposes. The results achieved by this technique indicate a high performance in signature recognition.
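A one-level 2-D Haar decomposition of the kind this block-based feature extraction relies on can be sketched as follows (a simplified, unnormalised version; the block partitioning and the exact energy features used in the paper are assumptions here):

```python
def haar_subbands(img):
    """One-level 2-D Haar decomposition of an even-sized grayscale image.
    Returns the LL, LH, HL, HH subbands (unnormalised averages/differences)."""
    def rows_pass(m):
        # 1-D Haar on each row: pairwise average (low) and difference (high)
        lo, hi = [], []
        for row in m:
            lo.append([(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)])
            hi.append([(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)])
        return lo, hi

    def cols_pass(m):
        t = [list(c) for c in zip(*m)]            # work on columns via transpose
        lo, hi = rows_pass(t)
        return [list(c) for c in zip(*lo)], [list(c) for c in zip(*hi)]

    L, H = rows_pass(img)
    LL, LH = cols_pass(L)
    HL, HH = cols_pass(H)
    return LL, LH, HL, HH

def energy(band):
    """Subband energy (sum of squared coefficients), one candidate block feature."""
    return sum(v * v for row in band for v in row)

img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
LL, LH, HL, HH = haar_subbands(img)
print(LL)          # -> [[1.0, 2.0], [3.0, 4.0]]
print(energy(HH))  # -> 0.0 (no diagonal detail in this image)
```

Per-block subband energies form a compact local descriptor of stroke detail, which is the kind of feature the abstract describes.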
A Hybrid Approach to Face Detection And Feature Extraction (iosrjce)
A literature review of various techniques available on Image Denoising (AI Publications)
This paper provides a literature review of the different approaches used for image denoising. Various approaches are studied and their results are compared to provide a better understanding of the filters used to de-noise images. It is shown how a single image is subjected to various denoising techniques and how it reacts to those filters. Statistical and mean deviation techniques used by Halder et al. (2019) and CNN techniques used by Zing et al. (2018) are reviewed in detail to show how salt-and-pepper noise can be removed from images. Each paper discussed here has explored an individual approach based on its research, and the aim of this paper is to discuss all those approaches in a systematic manner.
An efficient method for recognizing the low quality fingerprint verification ... (IJCI JOURNAL)
In this paper, we propose an efficient method that provides personal identification using fingerprints with good accuracy even in noisy conditions. Fingerprint matching based on the number of corresponding minutia pairings has been in use for a long time, but it is not very efficient for recognizing low quality fingerprints. To overcome this problem, a correlation technique is used. The correlation-based fingerprint verification system is capable of dealing with low quality images from which no minutiae can be extracted reliably, with fingerprints that suffer from non-uniform shape distortions, and with damaged and partial images. Orientation Field Methodology (OFM) is used as a preprocessing module; it converts the images into a field pattern based on the direction of the ridges, loops and bifurcations in the fingerprint image. The input image is then Cross Correlated (CC) with all the images in the cluster, and the most highly correlated image is taken as the output. The result gives a good recognition rate, as the proposed scheme uses Cross Correlation of Field Orientation (CCFO = OFM + CC) for fingerprint identification.
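The correlation matching step can be illustrated on small feature vectors (a hypothetical sketch; the paper correlates full orientation-field images, not four-element vectors):

```python
import math

def ncc(a, b):
    """Normalised cross-correlation of two equal-length feature vectors;
    1.0 means a perfect match, values near or below 0 mean no match."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# Pick the gallery entry whose orientation-field pattern correlates best:
probe = [0.1, 0.9, 0.4, 0.7]
gallery = {
    "finger_A": [0.1, 0.9, 0.4, 0.7],   # identical pattern
    "finger_B": [0.9, 0.1, 0.7, 0.4],   # different pattern
}
best = max(gallery, key=lambda k: ncc(probe, gallery[k]))
print(best)  # -> finger_A
```

Because correlation compares whole patterns rather than isolated minutiae, it still yields a usable score when minutiae cannot be extracted reliably.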
AUTOMATIC LOGO EXTRACTION FROM DOCUMENT IMAGES (IJCI JOURNAL)
Logo extraction plays an important role in logo based document image retrieval. Here we present a method for automatic logo extraction from document images that works for scanned documents containing a logo. The proposed method uses morphological operations for logo extraction. It supports extraction of a logo with its gray level and color information.
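Morphological processing of the kind described can be sketched with binary erosion and dilation (an illustrative opening operation; the actual operator sequence used in the paper is not specified here):

```python
def erode(img, k=1):
    """Binary erosion with a (2k+1)x(2k+1) square structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in range(-k, k + 1) for dx in range(-k, k + 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(img, k=1):
    """Binary dilation with the same structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in range(-k, k + 1) for dx in range(-k, k + 1)) else 0
             for x in range(w)] for y in range(h)]

# Opening (erode then dilate) removes specks smaller than the element,
# which helps isolate a compact logo-like blob from scanner noise:
img = [[0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0, 0],
       [0, 1, 1, 1, 0, 1],   # lone pixel at (2, 5) is noise
       [0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0]]
opened = dilate(erode(img))
print(opened[2][5], opened[2][2])  # -> 0 1
```

The same opening/closing vocabulary, applied at a coarser scale, is what lets a connected logo region be separated from surrounding text.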
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M... (sipij)
Face images, a physiological biometric trait, are used to identify a person effectively. In this paper, we propose compression based face recognition using transform domain features fused at the matching level. The 2-D images are converted into 1-D vectors using the mean to compress the number of pixels. The Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) are used to extract features. The low and high frequency coefficients of the DWT are concatenated to obtain the final DWT features. The performance parameters are computed by comparing database and test image features of FFT and DWT using Euclidean Distance (ED). The performance parameters of FFT and DWT are fused at the matching level to obtain better results. It is observed that the performance of the proposed method is better than that of existing methods.
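The matching-level fusion step can be sketched as a weighted combination of the FFT-feature and DWT-feature Euclidean distances (the equal weighting and the toy two-element feature vectors are assumptions, not the paper's settings):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fused_match(fft_db, dwt_db, fft_probe, dwt_probe, w=0.5):
    """Score each enrolled identity by a weighted sum of its FFT-feature and
    DWT-feature Euclidean distances; the smallest fused score wins."""
    scores = {pid: w * euclidean(fft_db[pid], fft_probe)
                   + (1 - w) * euclidean(dwt_db[pid], dwt_probe)
              for pid in fft_db}
    return min(scores, key=scores.get)

# Hypothetical enrolled features for two identities:
fft_db = {"alice": [1.0, 2.0], "bob": [5.0, 5.0]}
dwt_db = {"alice": [0.2, 0.1], "bob": [0.9, 0.8]}
print(fused_match(fft_db, dwt_db, [1.1, 2.1], [0.2, 0.2]))  # -> alice
```

Fusing at the matching level means each transform contributes its own distance score, so a weak match in one domain can be compensated by a strong match in the other.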
MULTI WAVELET BASED IMAGE COMPRESSION FOR TELE MEDICAL APPLICATION (prj_publication)
Analysis and compression of medical images is an important area of biomedical engineering. Medical image analysis and data compression are rapidly evolving fields with growing applications in teleradiology, telemedicine and medical data analysis. Wavelet based techniques are the most recent development in the field of medical image compression. The region of interest (ROI) must be compressed by a lossless or near-lossless compression algorithm.
Wavelet multi-resolution decomposition of images has shown its efficiency in many image processing areas, and specifically in compression. Transformed coefficients are obtained by expanding a signal on a wavelet basis. The transformed signal is a different representation of the same underlying data. Such a representation is efficient if a relevant part of the original information is found in a relatively small number of coefficients. In this sense, wavelets are near-optimal bases for a wide class of signals with some smoothness, which is the reason they compress well.
Keywords: Image compression, Integer Multiwavelet Transform.
1. INTRODUCTION
Image compression is used to reduce the number of bits required to represent an image or a video sequence. A compression algorithm takes an input X and generates compressed information that requires fewer bits. The decompression algorithm reconstructs the compressed information and gives back the original.
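The idea that compression follows from concentrating information in a few coefficients can be illustrated with a one-level 1-D Haar transform plus simple coefficient thresholding (a sketch only; the paper uses integer multiwavelets, not this plain Haar example):

```python
def haar_1d(signal):
    """One-level 1-D Haar transform: pairwise averages, then differences."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + diff

def inverse_haar_1d(coeffs):
    """Reconstruct the signal from averages and differences."""
    n = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:n], coeffs[n:]):
        out += [a + d, a - d]
    return out

# Lossy compression: zero out detail coefficients below a threshold.
signal = [10, 10, 10, 10, 50, 52, 10, 10]
coeffs = haar_1d(signal)
kept = [c if abs(c) >= 2 else 0 for c in coeffs]   # small details discarded
approx = inverse_haar_1d(kept)
print(approx)  # -> [10.0, 10.0, 10.0, 10.0, 51.0, 51.0, 10.0, 10.0]
```

Most detail coefficients of a smooth signal are near zero, so discarding them costs little fidelity while the remaining coefficients can be stored compactly.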
An Effective Tea Leaf Recognition Algorithm for Plant Classification Using Ra... (IJMER)
A leaf is an organ of a vascular plant, as identified in botanical terms, and in particular in plant morphology. Naturally, a leaf is a thin, flattened organ borne above ground, and it is mainly used for photosynthesis. Recognition of plants has become an active area of research as most plant species are at risk of extinction. Most leaves cannot be recognized easily, since some are not flat (e.g. succulent leaves and conifers), some do not grow above ground (e.g. bulb scales), and some do not perform a photosynthetic function (e.g. cataphylls, spines, and cotyledons). In this paper, we mainly focus on tea leaves to identify the leaf type for improving tea leaf classification. Tea leaf images are loaded into the system from digital cameras or scanners. The proposed approach consists of three phases, namely preprocessing, feature extraction and classification, to process the loaded image. In the preprocessing phase, the tea leaf images are denoised by fuzzy denoising using the Dual Tree Discrete Wavelet Transform (DT-DWT) to remove noisy features, and boundary enhancement is applied to obtain the shape of the leaf accurately. In the feature extraction phase, Digital Morphological Features (DMFs) are derived to improve classification accuracy. A Radial Basis Function (RBF) network is used for efficient classification. The RBF is trained on 60 tea leaves to classify them into 6 types. Experimental results show that the proposed method classifies the tea leaves with higher accuracy in less time. Thus, the proposed method achieves higher accuracy in retrieving the leaf type.
Design of dual band dissimilar patch size (ijistjournal)
This paper deals with the design of a dual band array antenna for wireless applications such as LTE (Long Term Evolution) and WiMAX that resonates at 3.5 GHz and 5 GHz respectively. The substrate used for the design is FR4 (εr = 4.6), and the software used for simulation is Agilent ADS Momentum. The concept of a dissimilar patch size array antenna has been introduced: patches of different dimensions have been used in the array, and their corresponding results are validated based on various antenna parameters such as VSWR, gain, directivity and power radiated.
Video surveillance systems are very important in our everyday life. Video surveillance applications are used in airports, banks, offices and even our homes to keep us secure. Night vision is the ability to see in low light conditions. Surveillance video may not be seen clearly, especially under weak illumination conditions; the details of the image are very poor at night. Most previous work has focused on daytime surveillance. This paper proposes a method for human detection in hours of darkness (night time) using a Gaussian Mixture Model (GMM) in a real night environment, employing an infrared camera. Infrared video is taken as input to perform human detection at night. The video was then processed using the Gaussian mixture model algorithm, and it was confirmed that accurate detection of humans at night is possible using this method.
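The per-pixel background modelling behind GMM detection can be sketched in simplified form with a single adaptive Gaussian per pixel (a full GMM keeps several weighted Gaussians per pixel; the threshold and learning rate below are illustrative assumptions):

```python
class PixelBackground:
    """Single-Gaussian per-pixel background model: a stripped-down version of
    the GMM approach, which keeps several such Gaussians per pixel."""

    def __init__(self, init, alpha=0.05):
        self.mean, self.var, self.alpha = float(init), 36.0, alpha

    def update(self, value):
        """Return True if `value` is foreground, then adapt the model."""
        d = value - self.mean
        foreground = d * d > 6.25 * self.var      # > 2.5 std deviations away
        if not foreground:                        # adapt only to background
            self.mean += self.alpha * d
            self.var += self.alpha * (d * d - self.var)
        return foreground

bg = PixelBackground(init=20)
for v in [21, 19, 20, 22]:        # stable dark background in IR
    bg.update(v)
print(bg.update(200))             # bright warm body appears -> True
print(bg.update(20))              # background again -> False
```

In infrared footage a warm human body is much brighter than the background, so it falls far outside the learned per-pixel distribution and is flagged as foreground.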
Elderly people and kindergarten children face health problems mainly due to deficiencies in sanitation. So a complete wireless health monitoring system can be designed that continuously monitors some of the vital body parameters, such as body temperature and heart pulse rate, along with a motion sensor and an odour sensor. Our paper deals with the detection of ammonia (NH3) and hydrogen sulphide (H2S) using an odour sensor, or gas sensor.
Performance analysis of neural network models for oxazolines and oxazoles der... (ijistjournal)
Neural networks have been applied successfully to a broad range of areas such as business, data mining, drug discovery and biology. In medicine, neural networks have been applied widely in medical diagnosis, detection and evaluation of new drugs, and treatment cost estimation. In addition, neural networks have begun to be used in data mining strategies for the purposes of prediction and knowledge discovery. This paper presents the application of neural networks for the prediction and analysis of the antitubercular activity of Oxazolines and Oxazoles derivatives. The study presents techniques based on the development of single hidden layer feed forward neural network (SHLFFNN), gradient descent back propagation neural network (GDBPNN), gradient descent back propagation with momentum neural network (GDBPMNN), back propagation with weight decay neural network (BPWDNN) and quantile regression neural network (QRNN) artificial neural network (ANN) models. Here, we comparatively evaluate the performance of the five neural network techniques. Evaluating the efficiency of each model by way of benchmark experiments is an accepted practice. Cross-validation and resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify methods with good properties. Predictive accuracy was evaluated using the root mean squared error (RMSE), coefficient of determination (R²), mean absolute error (MAE), mean percentage error (MPE) and relative square error (RSE). We found that all five neural network models were able to produce feasible models. The QRNN model outperforms the other four models on all statistical tests.
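The error measures used to compare such models can be computed as follows (a sketch with illustrative data; RMSE, MAE and the coefficient of determination R² are as named above, while MPE and RSE are omitted for brevity):

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE and coefficient of determination (R^2) for model comparison."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)                  # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)      # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return rmse, mae, r2

# Hypothetical activity values vs. one model's predictions:
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
rmse, mae, r2 = regression_metrics(y_true, y_pred)
print(round(rmse, 3), round(mae, 3), round(r2, 3))  # -> 0.158 0.15 0.98
```

Computing the same metrics for each candidate network on cross-validated folds is what makes the five models directly comparable.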
Strategy For Assessment Of Land And Complex Fields Type Analysis Through GIS ... (ijistjournal)
Bangladesh is an overpopulated developing country where food crises are a major issue, and it faces infrastructure problems in every sector. For poverty alleviation, the country has to secure cultivable land to increase crop production to feed its large population. This paper focuses on the measurement of cultivable land for cultivation. The main purpose of this paper is to briefly describe how GIS, digital mapping and Internet concepts and tools can effectively contribute to the modeling, analysis and visualization phases within an engineering or research project, according to the crops, by using object detection, object tracking and field mapping in Bangladesh. Through GIS mapping of the agricultural lands, statistics can be compiled on how much land is cultivable and how much land is lost each year. Mapping the cultivated land will tell us how much crop has to be imported from other countries. Enabling real-time GIS analysis anytime, anywhere extends the implementation of GIS information to a wider audience. Automation is an indicator of modern civilization. The system will benefit the food stock of the country according to the harvest. For this research we developed a new interactive system. The system integrates with GIS project data in Google Earth: it first finds highly accurate cluster images and partial images, obtains user feedback to merge or correct these digests, and then supplementary visual analysis completes the partitioning of the data. This study was conducted at the software laboratory, Computer Science and Engineering department, Jahangirnagar University, Dhaka, Bangladesh in 201
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ... (ijistjournal)
This paper is concerned with the development of a back-propagation neural network for Bangla speech recognition. In this paper, ten Bangla digits were recorded from ten speakers and have been recognized. The features of these spoken digits were extracted by the method of Mel Frequency Cepstral Coefficient (MFCC) analysis. The MFCC features of five speakers were used to train the network with the back propagation algorithm. The MFCC features of the ten Bangla digit speeches, from 0 to 9, of another five speakers were used to test the system. All the methods and algorithms used in this research were implemented using the features of the Turbo C and C++ languages. From our investigation it is seen that the developed system can successfully encode and analyze the MFCC features of the speech signal for recognition. The developed system achieved a recognition rate of about 96.332% for known speakers (i.e., speaker dependent) and 92% for unknown speakers (i.e., speaker independent).
Handwritten character recognition is one of the most challenging and ongoing areas of research in the field of pattern recognition. HCR research is mature for foreign languages like Chinese and Japanese, but the problem is much more complex for Indian languages. The problem becomes even more complicated for South Indian languages due to their large character sets and the presence of vowel modifiers and compound characters. This paper provides an overview of important contributions and advances in offline as well as online handwritten character recognition of Malayalam scripts.
A Comprehensive Study On Handwritten Character Recognition System (iosrjce)
Feature Extraction and Feature Selection using Textual Analysis (vivatechijri)
After preprocessing the images in character recognition systems, the images are segmented based on certain characteristics known as “features”. The feature space identified for character recognition, however, ranges across a huge dimensionality. To solve this problem of dimensionality, feature selection and feature extraction methods are used. In this paper, we discuss the different techniques for feature extraction and feature selection, and how these techniques are used to reduce the dimensionality of the feature space to improve the performance of text categorization.
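A filter-style feature selection of the kind discussed can be sketched by scoring every feature and keeping the top k (variance scoring here is an illustrative choice; text categorization more often uses scores such as chi-square or information gain):

```python
def select_top_k(X, k):
    """Rank features by variance across samples and keep the k highest:
    a simple filter-style feature selection to shrink the feature space."""
    n, d = len(X), len(X[0])
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        mean = sum(col) / n
        scores.append(sum((v - mean) ** 2 for v in col) / n)  # variance
    keep = sorted(range(d), key=lambda j: scores[j], reverse=True)[:k]
    keep.sort()                      # preserve original feature order
    return keep, [[row[j] for j in keep] for row in X]

# Feature 1 is constant (variance 0), so it carries no information and is dropped:
X = [[3, 7, 1],
     [5, 7, 9],
     [1, 7, 4]]
keep, reduced = select_top_k(X, 2)
print(keep)  # -> [0, 2]
```

Dropping uninformative dimensions this way shrinks the feature space before a classifier ever sees it, which is the dimensionality reduction the paper surveys.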
Numeral recognition is an important research direction in the field of pattern recognition, and it has broad application prospects. Aiming at the four arithmetic operations in common printed formats, this article adopts a multiple hybrid recognition method and applies it to automatic calculation. This method mainly uses a BP neural network and a template matching method to distinguish the numerals and operators, in order to increase the operation speed and recognition accuracy. Sample images of the four arithmetic operations were extracted from printed books and used for testing the performance of the proposed recognition method. The experiments show that the method provides a correct recognition rate of 96% and a correct calculation rate of 89%.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSINGcscpconf
Recently the progress in technology and flourishing applications open up new forecast and defy
for the image and video processing community. Compared to still images, video sequences
afford more information about how objects and scenarios change over time. Quality of video is
very significant before applying it to any kind of processing techniques. This paper deals with
two major problems in video processing they are noise reduction and object segmentation on
video frames. The segmentation of objects is performed using foreground segmentation based
and fuzzy c-means clustering segmentation is compared with the proposed method Improvised
fuzzy c – means segmentation based on color. This was applied in the video frame to segment
various objects in the current frame. The proposed technique is a powerful method for image
segmentation and it works for both single and multiple feature data with spatial information.
The experimental result was conducted using various noises and filtering methods to show which is best suited among others and the proposed segmentation approach generates good quality segmented frames.
A Review of Optical Character Recognition System for Recognition of Printed Textiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Similar to K-ALGORITHM: A MODIFIED TECHNIQUE FOR NOISE REMOVAL IN HANDWRITTEN DOCUMENTS (20)
Submit Your Research Articles - International Journal of Information Sciences...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
INFORMATION THEORY BASED ANALYSIS FOR UNDERSTANDING THE REGULATION OF HLA GEN...ijistjournal
Considering information entropy (IE), HLA surface expression (SE) regulation phenomenon is considered as information propagation channel with an amount of distortion. HLA gene SE is considered as sink regulated by the inducible transcription factors (TFs) (source). Previous work with a certain number of bin size, IEs for source and receiver is computed and computation of mutual information characterizes the dependencies of HLA gene SE on some certain TFs in different cells types of hematopoietic system under the condition of leukemia. Though in recent time information theory is utilized for different biological knowledge generation and different rules are available in those specific domains of biomedical areas; however, no such attempt is made regarding gene expression regulation, hence no such rule is available. In this work, IE calculation with varying bin size considering the number of bins is approximately half of the sample size of an attribute also confirms the previous inferences.
Call for Research Articles - 5th International Conference on Artificial Intel...ijistjournal
5th International Conference on Artificial Intelligence and Machine Learning (CAIML 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Artificial Intelligence and Machine Learning. The Conference looks for significant contributions to all major fields of the Artificial Intelligence, Machine Learning in theoretical and practical aspects. The aim of the Conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Computer Science, Engineering and Applications.
Online Paper Submission - International Journal of Information Sciences and T...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...ijistjournal
System identification from the experimental data plays a vital role for model based controller design. Derivation of process model from first principles is often difficult due to its complexity. The first stage in the development of any control and monitoring system is the identification and modeling of the system. Each model is developed within the context of a specific control problem. Thus, the need for a general system identification framework is warranted. The proposed framework should be able to adapt and emphasize different properties based on the control objective and the nature of the behavior of the system. Therefore, system identification has been a valuable tool in identifying the model of the system based on the input and output data for the design of the controller. The present work is concerned with the identification of transfer function models using statistical model identification, process reaction curve method, ARX model, genetic algorithm and modeling using neural network and fuzzy logic for interacting and non interacting tank process. The identification technique and modeling used is prone to parameter change & disturbance. The proposed methods are used for identifying the mathematical model and intelligent model of interacting and non interacting process from the real time experimental data.
Call for Research Articles - 4th International Conference on NLP & Data Minin...ijistjournal
4th International Conference on NLP & Data Mining (NLDM 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Natural Language Computing and Data Mining.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
Research Article Submission - International Journal of Information Sciences a...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
Call for Papers - International Journal of Information Sciences and Technique...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
Implementation of Radon Transformation for Electrical Impedance Tomography (EIT)ijistjournal
Radon Transformation is generally used to construct optical image (like CT image) from the projection data in biomedical imaging. In this paper, the concept of Radon Transformation is implemented to reconstruct Electrical Impedance Topographic Image (conductivity or resistivity distribution) of a circular subject. A parallel resistance model of a subject is proposed for Electrical Impedance Topography(EIT) or Magnetic Induction Tomography(MIT). A circular subject with embedded circular objects is segmented into equal width slices from different angles. For each angle, Conductance and Conductivity of each slice is calculated and stored in an array. A back projection method is used to generate a two-dimensional image from one-dimensional projections. As a back projection method, Inverse Radon Transformation is applied on the calculated conductance and conductivity to reconstruct two dimensional images. These images are compared to the target image. In the time of image reconstruction, different filters are used and these images are compared with each other and target image.
Online Paper Submission - 6th International Conference on Machine Learning & ...ijistjournal
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
Submit Your Research Articles - International Journal of Information Sciences...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
BER Performance of MPSK and MQAM in 2x2 Almouti MIMO Systemsijistjournal
Almouti published the error performance of the 2x2 space-time transmit diversity scheme using BPSK. One of the key techniques employed for correcting such errors is the Quadrature amplitude modulation (QAM) because of its efficiency in power and bandwidth.. In this paper we explore the error performance of the 2x2 MIMO system using the Almouti space-time codes for higher order PSK and M-ary QAM. MATLAB was used to simulate the system; assuming slow fading Rayleigh channel and additive white Gaussian noise. The simulated performance curves were compared and evaluated with theoretical curves obtained using BER tool on the MATLAB by setting parameters for random generators. The results shows that the technique used do find a place in correcting error rates of QAM system of higher modulation schemes. The model can equally be used not only for the criteria of adaptive modulation but for a platform to design other modulation systems as well.
Online Paper Submission - International Journal of Information Sciences and T...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
Call for Papers - International Journal of Information Sciences and Technique...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
International Journal of Information Sciences and Techniques (IJIST)ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
BRAIN TUMOR MRIIMAGE CLASSIFICATION WITH FEATURE SELECTION AND EXTRACTION USI...ijistjournal
Feature extraction is a method of capturing visual content of an image. The feature extraction is the process to represent raw image in its reduced form to facilitate decision making such as pattern classification. We have tried to address the problem of classification MRI brain images by creating a robust and more accurate classifier which can act as an expert assistant to medical practitioners. The objective of this paper is to present a novel method of feature selection and extraction. This approach combines the Intensity, Texture, shape based features and classifies the tumor as white matter, Gray matter, CSF, abnormal and normal area. The experiment is performed on 140 tumor contained brain MR images from the Internet Brain Segmentation Repository. The proposed technique has been carried out over a larger database as compare to any previous work and is more robust and effective. PCA and Linear Discriminant Analysis (LDA) were applied on the training sets. The Support Vector Machine (SVM) classifier served as a comparison of nonlinear techniques Vs linear ones. PCA and LDA methods are used to reduce the number of features used. The feature selection using the proposed technique is more beneficial as it analyses the data according to grouping class variable and gives reduced feature set with high classification accuracy.
Research Article Submission - International Journal of Information Sciences a...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
A MEDIAN BASED DIRECTIONAL CASCADED WITH MASK FILTER FOR REMOVAL OF RVINijistjournal
In this paper A Median Based Directional Cascaded with Mask (MBDCM) filter has been proposed, which is based on three different sized cascaded filtering windows. The differences between the current pixel and its neighbors aligned with four main directions are considered for impulse detection. A direction index is used for each edge aligned with a given direction. Minimum of these four direction indexes is used for impulse detection under each masking window. Depending on the minimum direction indexes among these three windows new value to substitute the noisy pixel is calculated. Extensive simulations showed that the MBDCM filter provides good performances of suppressing impulses from both gray level and colored benchmarked images corrupted with low noise level as well as for highly dense impulses. MBDCM filter gives better results than MDWCMM filter in suppressing impulses from highly corrupted digital images.
DECEPTION AND RACISM IN THE TUSKEGEE SYPHILIS STUDYijistjournal
During the twentieth century (1932-1972), white physicians representing the United States government
conducted a human experiment known as the Tuskegee Syphilis Study on black syphilis patients in Macon
County, Alabama. The creators of the study, who supported the idea of black inferiority and the concept
that black people’s bodies functioned differently from white people’s, observed the effects of a disease
called syphilis on untreated black patients in order to collect data for further research on syphilis. Black
individuals involved with the study believed that they were receiving treatment, although in truth,
treatments for syphilis were purposely held back from them. Not only this, but fluids from their bodies, such
as blood and spinal fluid, were extracted to serve as research material without their awareness of the
purpose of the collection. The physicians justified their approach by positioning it as mere observation,
asserting that they were not actively intervening with the patients participating in the experiment. However,
despite their claims of passivity, these white physicians engaged in various morally improper actions,
including deceit, which ultimately resulted in the deaths of numerous black patients who might have had a
chance at survival.
Online Paper Submission - International Journal of Information Sciences and T...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Essentials of Automations: The Art of Triggers and Actions in FME
K-ALGORITHM: A MODIFIED TECHNIQUE FOR NOISE REMOVAL IN HANDWRITTEN DOCUMENTS
International Journal of Information Sciences and Techniques (IJIST) Vol.3, No.3, May 2013
DOI: 10.5121/ijist.2013.3301
Kanika Bansal¹, Rajiv Kumar²
¹ Student, SMCA, Thapar University, Patiala, Punjab, India. knbs_ind@yahoo.com
² Assistant Professor, SMCA, Thapar University, Patiala, Punjab, India. rajiv.patiala@gmail.com
ABSTRACT
OCR has been an active research area for the last few decades. OCR performs recognition of the text in a scanned document image and converts it into editable form. The OCR process can have several stages, such as preprocessing, segmentation, recognition and post processing. The preprocessing stage, which mainly deals with noise removal, is crucial for the success of OCR. In the present paper, a modified technique for noise removal, named the "K-Algorithm", has been proposed, which has two stages: filtering and binarization. The proposed technique shows improved results in comparison to the median filtering technique.
KEYWORDS
OCR, Handwritten, Filtering, Binarization, Preprocessing, Offline.
1. INTRODUCTION
Offline handwritten character recognition refers to the processing of text images captured with the help of a scanner, which gives meaningful symbols of the script as output. Its counterpart, online handwritten character recognition, deals with the automatic conversion of characters written on a special digitizer, tablet PC or personal digital assistant (PDA), where a sensor picks up the pen-tip movements as well as pen-up/pen-down switching. Online character recognition thus captures the dynamic motion during handwriting. The offline handwritten character recognition process is more complex than the online process, attributed to the fact that more information may be captured in the online case, such as the direction, speed and order of the strokes of the handwriting. Since this paper considers the offline handwritten character recognition process, it is discussed in detail.
Offline images are often obtained by scanning text documents. The scanned image thus obtained can only be read, not edited, but there are applications where the text in the scanned image needs to be edited. Here, the character recognition process offers a solution: it translates the image into an editable form that can be processed by the computer. This process can be used to translate articles, books and documents into electronic format, to publish text on websites, to process cheques in banks, to sort letters, and in many more applications.
1.1. Stages of Character Recognition Process
The image acquired through the scanner needs to pass through various stages, namely preprocessing, segmentation, recognition and post-processing, as given in [1,2]. The various stages of the offline handwritten character recognition process are shown in Figure 1.
Figure 1. Stages of Character Recognition Process
Preprocessing Stage: The acquired image is required to undergo a number of steps to convert it into a usable form for the further stages of the recognition process. This constitutes the preprocessing stage, which includes all the functions required to produce a cleaned-up version of the image acquired through the scanner, such as filtering, binarization, thinning and smoothing.
Segmentation Stage: The cleaned-up image obtained as output of the preprocessing stage serves as input to the segmentation stage, which decomposes the document into subcomponents. It deals with the separation of lines, words and characters, according to the application, through the use of various segmentation strategies.
Recognition Stage: The segmented image now acts as input for the recognition stage, which uses various pattern recognition strategies to assign an unknown sample to a predefined class, identifying the character according to the script used.
Postprocessing Stage: The recognized characters are fed into the post-processing stage, which includes the use of a dictionary, according to the script, for spell checking and grammar checking, in order to enhance the recognition rate and minimize recognition errors.
The above stages constitute the character recognition process and since this paper deals with
preprocessing, it is further discussed in length in the following section.
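The four stages above can be sketched as a pipeline of functions. The sketch below is purely illustrative: each stage body is a stand-in string operation (an assumption, not the paper's implementation), but the data flow from preprocessing through post-processing matches the description.

```python
# Illustrative sketch of the four-stage recognition pipeline described above.
# Each stage body is a placeholder; real implementations operate on images.

def preprocess(image):
    # Filtering, binarization, thinning and smoothing would happen here.
    return image.strip()                    # stand-in for "cleaning" the input

def segment(document):
    # Decompose the document into lines, words and characters.
    return document.split()                 # stand-in: split into segments

def recognize(segments):
    # Assign each unknown sample to a predefined class.
    return [s.upper() for s in segments]    # stand-in classifier

def postprocess(symbols):
    # Dictionary-based spell/grammar checking would reduce errors here.
    return " ".join(symbols)

def character_recognition(image):
    return postprocess(recognize(segment(preprocess(image))))

print(character_recognition("  scanned text  "))  # SCANNED TEXT
```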
2. PREPROCESSING AND ITS SIGNIFICANCE
The image acquired with the help of a scanner may contain many impurities, which may be introduced for different reasons such as varying paper quality, varying ink quality and the quality of the scanner used. Therefore, the acquired image cannot be sent directly to the subsequent stages of the recognition process; the impurities need to be removed, which is done in the preprocessing stage.
Preprocessing consists of a series of steps used to produce a cleaned-up version of the acquired image, converting it into a form more usable by the next stages. The preprocessing stage plays an important role in the handwritten character recognition process: the extent to which the acquired image is appropriately cleaned influences the accuracy of segmentation and hence of recognition. Inappropriate preprocessing increases the complexity of the techniques used in further stages and may lead to incorrect segmentation and recognition. The major objectives of the preprocessing stage are to reduce the amount of noise present in the document and to reduce the amount of data to be retained. The preprocessing stage includes a number of techniques to achieve these objectives, which are now explained.
2.1. Reduction of Noise
Noise refers to an error in a pixel value or an unwanted bit pattern that has no significance in the output. Noise may be introduced during reproduction and transmission of the image in the acquisition process, and may introduce gaps in lines, disconnected segments and filled loops in the document. Salt-and-pepper noise is generally encountered in poor-quality documents; it appears as isolated pixels, either 'on' pixels in 'off' regions or 'off' pixels in 'on' regions. This noise can be removed through filtering, as given in [3].
Filtering refers to the use of a predefined function over the image in order to assign a value to the pixel under consideration as a function of the values of its neighbouring pixels. It diminishes unwanted bit patterns introduced by an uneven writing surface or a poor-quality data acquisition device; it also removes a slightly textured or coloured background and sharpens the image. The objective in designing a noise-reduction filter is to remove as much of the noise as possible while retaining all of the relevant data. Filters can be categorised into linear and non-linear filters. Widely used linear filters such as the mean filter and the Wiener filter are often used to remove noise, but they have several disadvantages, such as blurring of edges and fine details and destruction of lines. Thus, non-linear filters such as the median filter are used, which overcome the limitations of linear filters to an extent. As stated in [4], the median filter too causes the removal of corners and threads and may cause blurring of the text; to overcome these limitations, a modified technique is developed, which is discussed in the following section.
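The contrast between linear and non-linear filters can be seen on a tiny 1-D signal. The sketch below is illustrative only (the function names and the 3-sample window are assumptions): the mean filter smears a single salt-and-pepper impulse across its neighbours, while the median filter removes it while leaving the flat region intact.

```python
# Sketch comparing a linear (mean) filter with a non-linear (median) filter
# on a 1-D signal carrying one salt-and-pepper impulse. A real image filter
# would use a 2-D neighbourhood; boundary samples are left unchanged here.

def mean_filter(signal):
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = sum(signal[i - 1:i + 2]) / 3        # average of the window
    return out

def median_filter(signal):
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = sorted(signal[i - 1:i + 2])[1]      # middle-ranked value
    return out

signal = [10, 10, 255, 10, 10]       # one "salt" impulse at index 2
print(mean_filter(signal))            # impulse smeared across neighbours
print(median_filter(signal))          # impulse removed, flat region intact
```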
2.2. Data Reduction
It basically refers to the reduction in the amount of data, which is to be retained in the
preprocessed image. It encodes the information using the fewer bits than the original
representation would use. It reduces the consumption of expensive resources like memory and
time. It is often achieved through binarization. Binarization refers to the conversion of a gray
scale image into binary image where a pixel can have any of the two values (0 or 1). This
process works on the basis of threshold value. This threshold value can be decided locally and
globally as stated in [5]. The global threshold picks one value for entire image on the basis of
estimation of the background intensity from the intensity histogram of the image whereas the
local threshold picks different values for each pixel on the basis of the local information as
stated in [6]. The filtering technique described above is modified and combined with the
binarization technique to form a modified technique, which is described in the following
section.
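The global/local distinction can be sketched as follows. Both functions and their plain-mean threshold choices are illustrative assumptions, not the specific methods of [5] or [6]: the global variant computes one threshold for the whole image, while the local variant recomputes a threshold per neighbourhood.

```python
# Sketch contrasting global and local thresholding on a grid of intensities.
# Output pixels use 1 for white (above threshold) and 0 for black.

def binarize_global(image):
    pixels = [p for row in image for p in row]
    t = sum(pixels) / len(pixels)          # one threshold for the whole image
    return [[1 if p >= t else 0 for p in row] for row in image]

def binarize_local(image, radius=1):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for x in range(h):
        for y in range(w):
            nbhd = [image[i][j]
                    for i in range(max(0, x - radius), min(h, x + radius + 1))
                    for j in range(max(0, y - radius), min(w, y + radius + 1))]
            t = sum(nbhd) / len(nbhd)      # a different threshold per pixel
            out[x][y] = 1 if image[x][y] >= t else 0
    return out

gray = [[200, 30], [220, 10]]             # mean intensity = 115
print(binarize_global(gray))              # [[1, 0], [1, 0]]
```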
3. K-ALGORITHM: A MODIFIED TECHNIQUE FOR NOISE REMOVAL
As discussed in section 2, the median filter overcomes the limitations of the linear filters, but it has its own disadvantages: it may cause the removal of corners and threads and blurring of the text in the document. In order to overcome these limitations, the K-Algorithm, a modified technique for the removal of noise in handwritten documents, is proposed in this paper.
The proposed solution consists of two steps: (i) filtering and (ii) binarization.
STEP I
In this step, a filtering technique is used for the removal of noise, with the median filter as its basis. The median filter ranks the neighbouring pixels according to their intensity, and the median value becomes the new value for the central pixel. The median filter operation can be seen in Figure 2.
Figure 2. Median Filter Example
The median filter serves as the basis for the modified technique, which applies the median filter conditionally: whether the filter is applied to a pixel depends on the parameter 'K', and hence the algorithm derives its name, "K-Algorithm". The parameter 'K' denotes the count of the lowest-intensity pixels in the current neighbourhood matrix of the pixel under consideration. Only if this count is less than a defined value, which depends on the matrix size used, is median filtering applied to the current pixel. The output of the filtering may still contain some textured or slightly coloured background, which is removed during the binarization process in Step II, discussed in the following section.
STEP II
The output of the filtering technique may still have some textured or slightly coloured background, which may interfere with the functioning of the subsequent stages. To deal with this, a binarization technique is used to convert the filtered image into a binary image. In this technique, a threshold value is calculated; pixels with an intensity value above it are set to white (0) and pixels with an intensity below it are set to black (1). The threshold value is calculated by taking the average of all pixel intensities in the document, as given in [7].
The formal algorithm for the above solution is given below:
Parameters used:
Image: The bitmap image given as input.
Matrix_Size: Defines the neighbourhood matrix size for a pixel. For a 3x3 matrix, it has the value two.
Min_X: Defines the minimum x coordinate value for the input image.
Max_X: Defines the maximum x coordinate value for the input image.
Min_Y: Defines the minimum y coordinate value for the input image.
Max_Y: Defines the maximum y coordinate value for the input image.
Pixel_Intensity(X, Y): Returns or sets the pixel intensity at coordinates X and Y.
Pixel_Values: A list storing the intensity values of the pixels in the current neighbourhood.
K: Parameter defined according to the matrix size. For a 3x3 matrix, it has the value one.
Pixel_Values_Count: Defines the count of values in the list Pixel_Values.
-- Step I: Filtering
Modified_Median_Filter (Image, Matrix_Size)
    Set A_Min = -(Matrix_Size)/2
    Set A_Max = (Matrix_Size)/2
    For X = Min_X to Max_X                                //1
        For Y = Min_Y to Max_Y                            //2
            Clear list Pixel_Values    // start each pixel with an empty neighbourhood list
            For X1 = A_Min to A_Max                       //3
                Set Temp_X = X + X1
                If (Temp_X >= Min_X and Temp_X <= Max_X)
                    For Y1 = A_Min to A_Max               //4
                        Set Temp_Y = Y + Y1
                        If (Temp_Y >= Min_Y and Temp_Y <= Max_Y)
                            Add Pixel_Intensity(Temp_X, Temp_Y) to list Pixel_Values
                        End If
                    End For                               //4
                End If
            End For                                       //3
            Sort the list Pixel_Values
            Set No_Occurences = number of occurrences of the lowest pixel
                intensity value in list Pixel_Values
            If (No_Occurences == K)
                Median_Value = value at position Pixel_Values_Count/2 in list Pixel_Values
                Set Pixel_Intensity(X, Y) = Median_Value
            End If
        End For                                           //2
    End For                                               //1
    Return Modified Image
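As a rough Python sketch of Step I (an illustrative translation of the pseudocode above, not the authors' C# implementation): following the pseudocode, the median is applied to a pixel only when the lowest intensity in its neighbourhood occurs exactly K times (K = 1 for a 3x3 neighbourhood). For simplicity the sketch writes results to a copy of the image, whereas the pseudocode updates pixels in place.

```python
# Sketch of the K-Algorithm filtering step (Step I). The image is a list of
# rows of gray-level intensities; K = 1 for a 3x3 neighbourhood, as in the
# parameter list above.

def k_filter(image, k=1, radius=1):
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]          # copy; pseudocode works in place
    for x in range(height):
        for y in range(width):
            values = []                      # neighbourhood intensity list
            for dx in range(-radius, radius + 1):
                for dy in range(-radius, radius + 1):
                    if 0 <= x + dx < height and 0 <= y + dy < width:
                        values.append(image[x + dx][y + dy])
            values.sort()
            # Apply the median only if the lowest intensity occurs exactly k times.
            if values.count(values[0]) == k:
                out[x][y] = values[len(values) // 2]
    return out
```

On an image containing a single isolated dark pixel, that speck's neighbourhood contains the lowest intensity exactly once, so the median replaces it; inside a dark stroke the lowest intensity occurs several times, so the stroke is left untouched rather than blurred.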
-- Step II: Binarization
Binarization (Image)
    For X = Min_X to Max_X                                //1
        For Y = Min_Y to Max_Y                            //2
            Pixel_Intensity_Sum = Pixel_Intensity_Sum + Pixel_Intensity(X, Y)
            Pixel_Count = Pixel_Count + 1
        End For                                           //2
    End For                                               //1
    Average_Intensity = Pixel_Intensity_Sum / Pixel_Count
    For X = Min_X to Max_X                                //3
        For Y = Min_Y to Max_Y                            //4
            If (Pixel_Intensity(X, Y) >= Average_Intensity)
                Set Pixel_Intensity(X, Y) = WHITE
            Else
                Set Pixel_Intensity(X, Y) = BLACK
            End If
        End For                                           //4
    End For                                               //3
    Return Modified Image
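Step II translates similarly (again an illustrative sketch, not the authors' C# code): one pass accumulates the average intensity of the document and a second pass thresholds every pixel against it. The WHITE/BLACK gray-level encodings are assumed values.

```python
# Sketch of Step II: binarize against the average intensity of the document,
# following the two-pass structure of the pseudocode above.
WHITE, BLACK = 255, 0   # assumed gray-level encodings for the two classes

def binarize(image):
    total = count = 0
    for row in image:                        # pass 1: accumulate the average
        for p in row:
            total += p
            count += 1
    average = total / count
    for x, row in enumerate(image):          # pass 2: threshold each pixel
        for y, p in enumerate(row):
            image[x][y] = WHITE if p >= average else BLACK
    return image

print(binarize([[200, 30], [220, 10]]))      # [[255, 0], [255, 0]]
```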
The proposed algorithm has been implemented and the results are discussed in the following section.
4. EXPERIMENTAL RESULTS AND DISCUSSION
The proposed algorithm has been implemented by the authors in the .NET framework using the C# language. The Drawing namespace and Bitmap class present in the inbuilt libraries of C# are utilised to process the bitmap image. The image acquired through the scanner is shown in Figure 3. It can be seen that noise is present in the scanned document and thus it requires preprocessing.
Figure 3. Original Noisy Image
Initially, the median filter is used to remove the noise from the scanned image. Figure 4 shows the results after application of the median filter. Although the noise has been removed from the scanned image, the text is blurred in the resultant image, making it unsuitable for processing by the subsequent stages of the handwritten character recognition process. Hence, another approach for preprocessing is required.
Figure 4. Resultant Image after application of Median Filter
The modified technique, the "K-Algorithm", is then applied to the scanned image. Step I of the algorithm, filtering, is executed and the resultant image is shown in Figure 5. It can be observed that the text has not been blurred as in Figure 4 and the resultant image shows an improvement in the clarity of relevant pixels. A slightly textured background is present in the image, as seen in Figure 5. Since this can interfere with the accurate working of subsequent stages, it needs to be removed, which is done through Step II of the modified technique, the "K-Algorithm".
Figure 5. Resultant Image after application of Modified Technique
Step II of the modified technique, the "K-Algorithm", which is binarization, is applied to the resultant image of Step I, and the resultant image is shown in Figure 6. It can be observed that the textured and coloured background present in Figure 5 is now removed and the image is suitable to serve as input to the further stages of the handwritten character recognition process.
Figure 6. Resultant Image after application of Binarization
5. CONCLUSION
The "K-Algorithm" has been implemented and tested on several documents. It can be seen from the results that the modified technique provides much better preprocessing than existing techniques such as median filtering, and it is able to overcome their disadvantages. The results obtained are good and encouraging. The above work highlights the significance of the preprocessing stage in the handwritten character recognition process.
REFERENCES
[1] N. Arica and F. Yarman-Vural, (2001) "An Overview of Character Recognition Focused on Off-line Handwriting". IEEE Transactions on Systems, Man, and Cybernetics, vol. 31, no. 2, pp. 216-233.
[2] A. Senior and A. Robinson, (1998) "An Off-line Cursive Handwriting Recognition System," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 309-312.
[3] R. Kannan, R. Prabhakar and R. Suresh, (2008) "Off-Line Cursive Handwritten Tamil Character Recognition," in Proceedings of International Conference on Security Technology, pp. 160-164.
[4] T. Saba, G. Sulong and A. Rehman, (2010) "Document Image Analysis: Issues, Comparison Of Methods And Remaining Problems". Springer Science+Business Media B.V., pp. 101-110.
[5] S. Mo and J. Mathews, (1998) "Adaptive, Quadratic Preprocessing of Document Images for Binarization," IEEE Transactions on Image Processing, vol. 7, no. 7, pp. 992-999.
[6] R. Kasturi, L. O'Gorman and V. Govindaraju, (2002) "Document Image Analysis: A Primer". Sādhanā, vol. 27, part 1, pp. 3-22.
[7] R. Kumar and A. Singh, (2011) "Algorithm to Detect and Segment Gurmukhi Handwritten Text into Lines, Words and Characters". IACSIT International Journal of Engineering and Technology, vol. 3, no. 4.
AUTHORS
Rajiv Kumar obtained his Ph.D. from the University College of Engineering, Punjabi University, Patiala, Punjab, India. At present, he is an Assistant Professor in the School of Mathematics and Computer Applications, Thapar University, Patiala. He has published several articles in international journals and conferences.
Kanika Bansal obtained her B.Tech. (CSE) from Guru Gobind Singh Indraprastha University, Delhi, India, and is currently pursuing her M.Tech. at Thapar University, Patiala, Punjab, India. She is currently working in the document image processing area.