Detecting text in handwritten historical documents provides high-level features for the challenging problem of handwriting recognition. Such handwriting often contains noise, faint or incomplete strokes, strokes with gaps, and competing lines when embedded in a table or form, making it unsuitable for local line-following algorithms or associated binarization schemes. In this paper, a method based on an optimum threshold value, named the Optimum Mean method, is presented. The Wolf method fails to detect thin text in non-uniform input images; the proposed method overcomes this problem by deriving a maximum threshold value from the optimum mean. In the evaluation, the proposed method obtained a higher F-measure (74.53) and PSNR (14.77) and a lower NRM (0.11) than the Wolf method. In conclusion, the proposed method effectively solves the Wolf method's problem and produces a high-quality output image.
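The paper's Optimum Mean formulation is not given in the abstract; as an illustrative stand-in, a global mean-based threshold can be sketched as follows (the function name and toy image are assumptions for the example):

```python
# Hypothetical sketch of mean-based global thresholding; the paper's
# actual Optimum Mean method is not reproduced here.

def mean_threshold(pixels):
    """Binarize a grayscale image (list of rows) using the global mean
    intensity as the threshold: foreground (text) = 0, background = 255."""
    flat = [p for row in pixels for p in row]
    t = sum(flat) / len(flat)  # global mean as the threshold value
    return [[0 if p <= t else 255 for p in row] for row in pixels]

image = [
    [200, 210, 40, 205],
    [198, 35, 50, 201],
    [199, 202, 45, 207],
]
binary = mean_threshold(image)  # dark strokes map to 0, background to 255
```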
Enhancement and Segmentation of Historical Records (csandit)
Document Analysis and Recognition (DAR) aims to automatically extract the information in a document and present it for human comprehension. The automatic processing of degraded historical documents is an application of the document image analysis field that is confronted with many difficulties due to storage conditions and the complexity of the script. The main interest in enhancing historical documents is to remove undesirable artifacts that appear in the background and to highlight the foreground, so as to enable automatic recognition of documents with high accuracy. This paper addresses pre-processing and segmentation of ancient scripts as an initial step toward automating the task of an epigraphist in reading and deciphering inscriptions. Pre-processing involves enhancement of degraded ancient document images, achieved through four different spatial filtering methods for smoothing or sharpening, namely the Median, Gaussian blur, Mean, and Bilateral filters, with different mask sizes. This is followed by binarization of the enhanced image to highlight the foreground information, using the Otsu thresholding algorithm. In the second phase, segmentation is carried out using Drop Fall and Water Reservoir approaches to obtain sampled characters, which can be used in later stages of OCR. The system showed good results when tested on nearly 150 samples of varyingly degraded epigraphic images, giving the best enhanced output for a 4x4 mask size for the Median filter, a 2x2 mask size for Gaussian blur, and a 4x4 mask size for the Mean and Bilateral filters. The system can effectively sample characters from enhanced images, giving a segmentation rate of 85%-90% for both the Drop Fall and Water Reservoir techniques.
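The Otsu step above can be sketched as the standard histogram-based search for the threshold that maximizes between-class variance (a minimal single-function version; the preceding spatial filters are assumed already applied):

```python
# Minimal sketch of Otsu's global thresholding on an 8-bit pixel list.

def otsu_threshold(pixels):
    """Return the threshold maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(256):
        w_bg += hist[t]                 # background (dark) class weight
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground class weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy data: dark strokes around 30, background around 200.
pixels = [30, 32, 28, 35, 31] * 10 + [200, 198, 205, 202, 199] * 10
t = otsu_threshold(pixels)  # lands at the upper edge of the dark mode
```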
Volumetric Medical Images Lossy Compression using Stationary Wavelet Transfor... (Omar Ghazi)
Abstract: The aim of the study is to reduce the size required for storage along with decreasing the bitrate and the bandwidth for the process of sending and receiving the image. It also aims to decrease the time required for the process as much as possible. This study proposes a novel system for efficient lossy volumetric medical image compression using the Stationary Wavelet Transform and Linde-Buzo-Gray vector quantization. The system combines the Linde-Buzo-Gray vector quantization technique for lossy compression with Arithmetic coding and Huffman coding for lossless compression. The proposed system uses the Stationary Wavelet Transform and compares the results obtained against the Discrete Wavelet Transform, the Lifting Wavelet Transform, and the Discrete Cosine Transform at three decomposition levels. The system also compares these results with those obtained using the transforms with only Arithmetic coding and Huffman coding for lossless compression. The results show that the proposed system outperforms the others.
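As a minimal stand-in for the wavelet transforms compared in the study, a one-level 1-D Haar decomposition (averaging/differencing) can be sketched; the real system operates on volumetric images with vector quantization, which is not shown here:

```python
# One-level 1-D Haar wavelet decomposition and its inverse.

def haar_1d(signal):
    """Split an even-length signal into approximation and detail
    coefficients (unnormalized averaging/differencing Haar step)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_1d_inverse(approx, detail):
    """Recombine coefficient pairs; reconstruction is exact (lossless)."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

sig = [9, 7, 3, 5, 6, 10, 2, 6]
a, d = haar_1d(sig)          # a = [8.0, 4.0, 8.0, 4.0], d = [1.0, -1.0, -2.0, -2.0]
rec = haar_1d_inverse(a, d)  # perfect reconstruction of sig
```

Lossy compression arises when the detail coefficients are quantized (e.g. with Linde-Buzo-Gray codebooks) before the entropy-coding stage.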
K2 Algorithm-based Text Detection with an Adaptive Classifier Threshold (CSCJournals)
In natural scene images, text detection is a challenging study area for various content-based image analysis tasks. In this paper, Bayesian network scores are used to classify candidate character regions by computing posterior probabilities, and the posterior probabilities are used to define an adaptive threshold to detect text in scene images accurately. Candidate character regions are extracted through maximally stable extremal regions. K2 algorithm-based Bayesian network scores are learned by evaluating dependencies among the features of a given candidate character region. A Bayesian logistic regression classifier is trained to compute posterior probabilities that define an adaptive classifier threshold. Candidate character regions below the adaptive classifier threshold are discarded as non-character regions. Finally, text regions are detected using an effective text localization scheme based on geometric features. The entire system is evaluated on the ICDAR 2013 competition database. Experimental results show competitive performance (precision, recall, and harmonic mean) against recently published algorithms.
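The posterior-thresholding idea can be sketched in a heavily simplified form. Everything below is hypothetical: the logistic weights, the scores, and the choice of the mean posterior as the adaptive threshold are assumptions, not the paper's K2-learned model:

```python
# Hypothetical sketch: map region scores to posteriors via a logistic
# function, then keep regions at or above an adaptive threshold (here
# the mean posterior, an assumed stand-in for the learned threshold).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def filter_candidates(scores, weight=1.0, bias=0.0):
    """Return indices of candidate regions kept as character regions."""
    posteriors = [sigmoid(weight * s + bias) for s in scores]
    threshold = sum(posteriors) / len(posteriors)   # adaptive threshold
    return [i for i, p in enumerate(posteriors) if p >= threshold]

# Candidate character regions with Bayesian-network-style scores.
kept = filter_candidates([2.1, -1.5, 0.9, -2.8, 1.7])  # low-score regions discarded
```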
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Data reduction techniques for high dimensional biological data (eSAT Journals)
Abstract
High-dimensional biological datasets have been growing rapidly in recent years. Extracting knowledge from and analyzing high-dimensional biological data is one of the key challenges, in which variety and veracity are the two distinguishing characteristics. The questions that arise are how to perform dimensionality reduction for this heterogeneous data, how to develop a high-performance platform to efficiently analyze high-dimensional biological data, and how to extract useful insights from this data. To discuss this issue in depth, this paper begins with a brief introduction to the data analytics available for biological data, followed by a discussion of big data analytics and a survey of various data reduction methods for biological data. We propose a dense clustering algorithm for standard high-dimensional biological data.
Keywords: Big Data Analytics, Dimensionality Reduction
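One of the simplest data-reduction steps covered by such surveys is dropping near-constant features. The sketch below is an illustrative stand-in (the threshold and toy data are assumptions), not the paper's proposed dense clustering algorithm:

```python
# Variance-based feature filtering: remove columns that carry almost
# no information because their values barely vary across samples.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def drop_low_variance(rows, min_var=0.1):
    """Keep only the feature columns whose variance exceeds min_var."""
    cols = list(zip(*rows))
    keep = [j for j, col in enumerate(cols) if variance(col) > min_var]
    return keep, [[row[j] for j in keep] for row in rows]

data = [
    [1.0, 5.0, 0.0],
    [2.0, 5.0, 0.1],
    [3.0, 5.0, 0.0],
]
keep, reduced = drop_low_variance(data)  # constant/near-constant columns removed
```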
Engineering Research Publication
Best International Journals, High Impact Journals,
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
Image segmentation by modified MAP-ML estimations (ijesajournal)
Though numerous algorithms exist to perform image segmentation, several issues related to their execution time remain. Image segmentation can be viewed as a label-relabeling problem under a probabilistic framework. To estimate the label configuration, an iterative optimization scheme is implemented to alternately carry out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper, this technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with existing algorithms. The algorithm executes faster than the existing algorithm, giving automatic segmentation without any human intervention, and its results match image edges closely, in line with human perception.
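The alternation described above can be sketched on toy 1-D data: an ML step re-estimates each class mean from its current members, then a labeling step reassigns each pixel to the nearest mean. The spatial prior of a full MAP model is omitted, so this is only an illustrative skeleton:

```python
# Toy 1-D sketch of alternating parameter estimation and relabeling.

def segment(values, iters=10):
    # Initialize labels by thresholding at the global mean.
    labels = [0 if v < sum(values) / len(values) else 1 for v in values]
    for _ in range(iters):
        # ML step: re-estimate each class mean from its current pixels.
        means = []
        for k in (0, 1):
            member = [v for v, l in zip(values, labels) if l == k]
            means.append(sum(member) / len(member) if member else 0.0)
        # Labeling step: assign each pixel to the closest class mean
        # (a degenerate MAP step with a uniform spatial prior).
        labels = [0 if abs(v - means[0]) <= abs(v - means[1]) else 1
                  for v in values]
    return labels

labels = segment([10, 12, 11, 90, 88, 95])  # two well-separated classes
```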
Review paper on segmentation methods for multiobject feature extraction (eSAT Journals)
Abstract Feature extraction and representation play a vital role in multimedia processing. It is still a challenge for computer vision systems to extract ideal features that represent the intrinsic characteristics of an image. A multiobject feature extraction system is one that can extract the features and locations of multiple objects in an image. In this paper we discuss various methods to extract the locations and features of multiple objects, and describe a system that does so by implementing an algorithm as hardware logic on a field-programmable gate array-based platform. Many multiobject extraction methods can be used for image segmentation based on motion, color intensity, and texture. By calculating the zeroth- and first-order moments of objects, it is possible to obtain the locations and sizes of multiple objects in an image. Keywords: multiobject extraction, image segmentation
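The moment computation mentioned in the abstract is standard: for a binary image, the zeroth-order moment m00 gives an object's area, and the first-order moments give its centroid (m10/m00, m01/m00). A single-object sketch:

```python
# Zeroth- and first-order moments of a binary image.

def moments(binary):
    """Return (area, (cx, cy)) for the foreground pixels of `binary`."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                m00 += 1      # zeroth-order moment: area
                m10 += x      # first-order moment in x
                m01 += y      # first-order moment in y
    cx, cy = m10 / m00, m01 / m00
    return m00, (cx, cy)

img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
area, centroid = moments(img)  # 2x2 block centred at (1.5, 1.5)
```

For multiple objects, the same sums are accumulated per connected component rather than over the whole image.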
A NOVEL METRIC APPROACH EVALUATION FOR THE SPATIAL ENHANCEMENT OF PAN-SHARPEN... (cscpconf)
Various methods can be used to produce high-resolution multispectral images from a high-resolution panchromatic image (PAN) and low-resolution multispectral images (MS), mostly at the pixel level. The quality of image fusion is an essential determinant of the value of image fusion processing for many applications. Spatial and spectral quality are the two important indexes used to evaluate the quality of any fused image. However, the jury is still out on the benefits of a fused image compared with its original images. In addition, there is a lack of measures for objectively assessing the spatial-resolution quality of fusion methods, so an objective assessment of the spatial resolution of fused images is required. Therefore, this paper describes a new approach proposed to estimate the spatial resolution improvement using the High Pass Division Index (HPDI), computed from the spatial frequency of the edge regions of the image. It also compares various analytical techniques for evaluating spatial quality and for estimating the colour distortion added by image fusion, including MG, SG, FCC, SD, En, SNR, CC, and NRMSE. In addition, this paper concentrates on the comparison of various image fusion techniques based on pixel- and feature-level fusion.
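Of the listed metrics, the correlation coefficient (CC) between a fused band and a reference band is the simplest to sketch; the flattened pixel lists below are assumed toy data, not results from the paper:

```python
# Correlation coefficient between two images, flattened to pixel lists.
import math

def correlation_coefficient(a, b):
    """Pearson correlation of two equally sized pixel sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

fused = [10, 20, 30, 40]
reference = [12, 19, 33, 41]
cc = correlation_coefficient(fused, reference)  # close to 1 for a faithful fusion
```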
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Comparative analysis of C99 and TopicTiling text segmentation algorithms (eSAT Journals)
Abstract In this paper, the work done includes the extraction of information from image datasets that contain natural text. Segmenting natural text from an image is highly difficult, so precision is the most important factor to keep in mind. To minimize error rates, an error filtration technique is provided, with filtration adopted during image segmentation, specifically segmentation of text present in images. Furthermore, a comparative analysis of two different text segmentation algorithms, C99 and TopicTiling, on image documents is presented. To assess how well each algorithm works, each was applied to different datasets and the results were compared. The work done also demonstrates the efficiency of TopicTiling over C99. Index Terms: Text Segmentation, text extraction, image documents, C99 and TopicTiling.
Text detection and recognition in scene or natural images has applications in computer vision systems such as registration number plate detection, automatic traffic sign detection, image retrieval, and assistance for visually impaired people. Scene text, however, involves complicated backgrounds, blurred images, partially occluded text, variations in font styles, image noise, and varying illumination. Hence scene text recognition is a difficult computer vision problem. In this paper, a connected component method is used to extract the text from the background. In this work, horizontal and vertical projection profiles, geometric properties of text, image binarization, and a gap-filling method are used to extract the text from scene images. A histogram-based threshold is then applied to separate the text from the background of the images. Finally, the text is extracted from the images.
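The projection profiles mentioned above are simply row and column sums of the binary image; gaps (low-valued runs) in these profiles suggest boundaries between text lines and characters. A minimal sketch:

```python
# Horizontal and vertical projection profiles of a binary image.

def projection_profiles(binary):
    """Return (row sums, column sums) of a binary image."""
    horizontal = [sum(row) for row in binary]        # one value per row
    vertical = [sum(col) for col in zip(*binary)]    # one value per column
    return horizontal, vertical

img = [
    [1, 1, 0, 1],
    [0, 0, 0, 0],   # empty row: a gap between text lines
    [1, 0, 0, 1],
]
h, v = projection_profiles(img)  # zeros in h/v mark candidate cut positions
```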
Optimized Reversible Data Hiding Technique for Secured Data Transmission (Editor IJMTER)
Reversible data hiding (RDH) is used to embed a secret message into a cover image by slightly modifying its pixel values. The embedded message and the cover image are completely recovered from the marked content. RDH supports information hiding with the lossless compressibility of natural images. Lossless compression, difference expansion, histogram modification, prediction-error expansion, and integer transform techniques are used for the RDH process. The histogram-based RDH method is divided into two steps: histogram generation and histogram modification. Histogram construction is performed with the pixel-pair sequences and their difference values. Histogram modification is carried out to embed data into the cover image. The un-hiding process recovers the message as well as the cover image.
A reversible data hiding (RDH) scheme is designed using a difference-pair-mapping (DPM) mechanism. A sequence consisting of pairs of difference values is computed by considering each pixel-pair and its context. DPM is an injective mapping defined on difference-pairs, and a specifically designed DPM is used for the reversible data embedding process. A two-dimensional difference histogram is generated by counting the frequency of the resulting difference-pairs. Current histogram-based RDH methods use natural extensions of expansion embedding and shifting techniques. A pixel-pair-selection strategy is adopted so that pixel-pairs located in smooth image regions are used to embed data. The capacity-distortion property is used to evaluate the embedding capacity (EC).
The two-dimensional difference histogram modification model is enhanced to increase the embedding capacity. The Difference Pair Mapping (DPM) model is optimized to identify pixel redundancy. Multiple-value-based pixel-pair modification is allowed in the system. Histogram modification is carried out at different frequency levels.
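The histogram shifting-and-embedding idea behind these schemes can be sketched in its classic scalar (1-D) form; this simplified version stands in for the paper's two-dimensional difference-pair scheme, and the peak/zero bin values below are assumptions for the example:

```python
# Classic 1-D histogram-shift reversible embedding: pixels at the
# histogram peak carry one bit each; pixels strictly between the peak
# and the empty (zero) bin are shifted by one to make room.

def embed(pixels, bits, peak, zero):
    """Assumes peak < zero, no pixel equals `zero`, and enough peak
    pixels exist to carry all bits."""
    out, i = [], 0
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)            # shift to empty the bin peak+1
        elif p == peak and i < len(bits):
            out.append(p + bits[i])      # embed one bit at the peak
            i += 1
        else:
            out.append(p)
    return out

def extract(marked, peak, zero):
    bits, restored = [], []
    for p in marked:
        if p == peak or p == peak + 1:
            bits.append(p - peak)        # recover the embedded bit
            restored.append(peak)
        elif peak + 1 < p <= zero:
            restored.append(p - 1)       # undo the shift
        else:
            restored.append(p)
    return bits, restored

cover = [5, 7, 5, 6, 5, 9]
marked = embed(cover, [1, 0, 1], peak=5, zero=8)
bits, restored = extract(marked, peak=5, zero=8)  # exact recovery of both
```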
Document image binarization is performed in the preprocessing stage for document analysis, and it aims to segment the foreground text from the document background. A fast and accurate document image binarization technique is important for ensuing document image processing tasks such as optical character recognition (OCR). Though document image binarization has been studied for many years, the thresholding of degraded document images is still an unsolved problem due to the high inter/intra variation between the text stroke and the document background across different document images. The handwritten text within degraded documents often shows a certain amount of variation in terms of stroke width, stroke brightness, stroke connection, and document background. In addition, historical documents are often degraded by bleed-through and by different types of imaging artifacts. These different types of document degradation tend to induce thresholding errors and make degraded document image binarization a big challenge to most state-of-the-art techniques. The proposed method is simple, robust, and capable of handling different types of degraded document images with minimal parameter tuning. It makes use of an adaptive image contrast that combines the local image contrast and the local image gradient adaptively, and it is therefore tolerant to the text and background variation caused by different types of document degradation. In particular, the proposed technique addresses the over-normalization problem of the local maximum-minimum algorithm. At the same time, the parameters used in the algorithm can be adaptively estimated.
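The local maximum-minimum contrast that such methods build on can be sketched for a single pixel; the window radius and toy image are assumptions, and the method's adaptive combination with the local gradient is not shown:

```python
# Local max-min contrast C = (max - min) / (max + min + eps) over a
# small neighborhood; high values indicate stroke edges.

def local_contrast(pixels, x, y, r=1, eps=1e-6):
    """Contrast in the (2r+1)x(2r+1) window centred on (x, y)."""
    window = [pixels[j][i]
              for j in range(max(0, y - r), min(len(pixels), y + r + 1))
              for i in range(max(0, x - r), min(len(pixels[0]), x + r + 1))]
    lo, hi = min(window), max(window)
    return (hi - lo) / (hi + lo + eps)   # eps guards against division by zero

img = [
    [200, 205, 210],
    [40, 200, 198],   # one dark stroke pixel among bright background
    [195, 202, 207],
]
c = local_contrast(img, 1, 1)  # high contrast: a stroke edge is present
```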
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
K2 Algorithm-based Text Detection with An Adaptive Classifier ThresholdCSCJournals
In natural scene images, text detection is a challenging study area for dissimilar content-based image analysis tasks. In this paper, a Bayesian network scores are used to classify candidate character regions by computing posterior probabilities. The posterior probabilities are used to define an adaptive threshold to detect text in scene images with accuracy. Therefore, candidate character regions are extracted through maximally stable extremal region. K2 algorithm-based Bayesian network scores are learned by evaluating dependencies amongst features of a given candidate character region. Bayesian logistic regression classifier is trained to compute posterior probabilities to define an adaptive classifier threshold. The candidate character regions below from adaptive classifier threshold are discarded as non-character regions. Finally, text regions are detected with the use of effective text localization scheme based on geometric features. The entire system is evaluated on the ICDAR 2013 competition database. Experimental results produce competitive performance (precision, recall and harmonic mean) with the recently published algorithms.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Data reduction techniques for high dimensional biological dataeSAT Journals
Abstract
High dimensional biological datasets in recent years has been growing rapidly. Extracting the knowledge and analyzing highdimensional
biological data is one the key challenges in which variety and veracity are the two distinct characteristics. The
question that arises now is, how to perform dimensionality reduction for this heterogeneous data and how to develop a high
performance platform to efficiently analyze high dimensional biological data and how to find the useful things from this data. To
deeply discuss this issue, this paper begins with a brief introduction to data analytics available for biological data, followed by
the discussions of big data analytics and then a survey on various data reduction methods for biological data. We propose a dense
clustering algorithm for standard high dimensional biological data.
Keywords: Big Data Analytics, Dimensionality Reduction
Engineering Research Publication
Best International Journals, High Impact Journals,
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
Image segmentation by modified map ml estimationsijesajournal
Though numerous algorithms exist to perform image segmentation there are several issues
related to execution time of these algorithm. Image Segmentation is nothing but label relabeling
problem under probability framework. To estimate the label configuration, an iterative
optimization scheme is implemented to alternately carry out the maximum a posteriori (MAP)
estimation and the maximum likelihood (ML) estimations. In this paper this technique is
modified in such a way so that it performs segmentation within stipulated time period. The
extensive experiments shows that the results obtained are comparable with existing algorithms.
This algorithm performs faster execution than the existing algorithm to give automatic
segmentation without any human intervention. Its result match image edges very closer to
human perception.
Review paper on segmentation methods for multiobject feature extractioneSAT Journals
Abstract Feature extraction and representation plays a vital role in multimedia processing. It is still a challenge in computer vision system to extract ideal features that represents intrinsic characteristics of an image. Multiobject feature extraction system means a system that can extract features and locations of multiple objects in an image. In this paper we have discuss various methods to extract location and features of multiple objects and describe a system that can extract locations and features of multiple objects in an image by implementing an algorithm as hardware logic on a field-programmable gate array-based platform. There are many multiobject extraction methods which can be use for image segmentation based on motion, color intensity and texture. By calculating zeroth and first order moments of objects it is possible to obtain locations and sizes of multiple objects in an image. Keywords: multiobject extraction, image segmentation
A NOVEL METRIC APPROACH EVALUATION FOR THE SPATIAL ENHANCEMENT OF PAN-SHARPEN...cscpconf
Various and different methods can be used to produce high-resolution multispectral images
from high-resolution panchromatic image (PAN) and low-resolution multispectral images (MS),
mostly on the pixel level. The Quality of image fusion is an essential determinant of the value of
processing images fusion for many applications. Spatial and spectral qualities are the two
important indexes that used to evaluate the quality of any fused image. However, the jury is still
out of fused image’s benefits if it compared with its original images. In addition, there is a lack
of measures for assessing the objective quality of the spatial resolution for the fusion methods.
So, an objective quality of the spatial resolution assessment for fusion images is required.
Therefore, this paper describes a new approach proposed to estimate the spatial resolution
improve by High Past Division Index (HPDI) upon calculating the spatial-frequency of the edge
regions of the image and it deals with a comparison of various analytical techniques for evaluating the Spatial quality, and estimating the colour distortion added by image fusion including: MG, SG, FCC, SD, En, SNR, CC and NRMSE. In addition, this paper devotes to concentrate on the comparison of various image fusion techniques based on pixel and feature fusion technique.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Comparative analysis of c99 and topictiling text segmentation algorithmseSAT Journals
Abstract In this paper, the work done includes the extraction of information from image datasets which contain natural text. The difficulty level of segmenting natural text from an image is too high and so precision is the most important factor to be kept in mind. To minimize the error rates, error filtration technique is provided, as filtration is adopted while doing image segmentation basically text segmentation present in images. Furthermore, a comparative analysis of two different text segmentation algorithms namely C99 and TopicTiling on image documents is presented. To assess how well each algorithm works, each was applied on different datasets and results were compared. The work done also proves the efficiency of TopicTiling over C99. Index Terms: Text Segmentation, text extraction, image documents,C99 and TopicTiling.
Text detection and recognition in scene images or natural images has applications in computer
vision systems like registration number plate detection, automatic traffic sign detection, image retrieval
and help for visually impaired people. Scene text, however, has complicated background, blur image,
partly occluded text, variations in font-styles, image noise and ranging illumination. Hence scene text
recognition could be a difficult computer vision problem. In this paper connected component method
is used to extract the text from background. In this work, horizontal and vertical projection profiles,
geometric properties of text, image binirization and gap filling method are used to extract the text from
scene images. Then histogram based threshold is applied to separate text background of the images.
Finally text is extracted from images.
Optimized Reversible Data Hiding Technique for Secured Data TransmissionEditor IJMTER
Reversible data hiding (RDH) is used to embed secret message into a cover image by slightly
modifying its pixel values. Embedded message and the cover image are completely recovered from the
marked content. RDH supports information hiding with the lossless compressibility of natural images.
Lossless compression, difference expansion, histogram modification, prediction-error expansion and
integer transform techniques are used for RDH process. Histogram based RDH method is divided into
two steps histogram generation and histogram modification. Histogram construction is performed with
the pixel pairs sequences and their different values. Histogram modification is carried out to embed data
into the cover image. The un-hiding process recovers the message and also the cover image.
Reversible data hiding (RDH) scheme is designed by using difference-pair-mapping (DPM)
mechanism. A sequence consisting of pairs of difference values is computed by considering each pixelpair and its context. DPM is an injective mapping defined on difference-pairs. Specifically designed
DPM is used for Reversible data embedding process. A two-dimensional difference-histogram is
generated by counting the frequency of the resulting difference-pairs. Current histogram-based RDH
methods use natural extension of expansion embedding and shifting techniques. A pixel-pair-selection
strategy is adopted to use the pixel-pairs located in smooth image regions to embed data. Capacity
distortion property is used evaluate the embedding capacity (EC).
Two dimensional difference histogram modification model is enhanced to increase the
embedding capacity. Difference Pair Mapping (DPM) model is optimized to identify pixel redundancy.
Multiple in value based pixel pair modification is allowed in the system. Histogram modification is
carried out with different frequency levels.
Documents Image Binarization is performed in the preprocessing stage for document analysis and it aims to
segment the foreground text from the document background. A fast and accurate document image binarization
technique is important for the ensuing document image processing tasks such as optical character recognition
(OCR). Though document image binarization has been studied for many years, the thresholding of degraded
document images is still an unsolved problem due to the high inter/intra variation between the text stroke and
the document background across different document images. The handwritten text within the degraded
documents often shows a certain amount of variation in terms of the stroke width, stroke brightness, stroke
connection, and document background. In addition, historical documents are often degraded by bleed-through
and by different types of imaging artifacts. These different types of document
degradations tend to induce the document thresholding error and make degraded document image binarization a
big challenge to most state-of-the-art techniques. The proposed method is simple, robust and capable of
handling different types of degraded document images with minimum parameter tuning. It makes use of the
adaptive image contrast that combines the local image contrast and the local image gradient adaptively and
therefore is tolerant to the text and background variation caused by different types of document degradations. In
particular, the proposed technique addresses the over-normalization problem of the local maximum minimum
algorithm. At the same time, the parameters used in the algorithm can be adaptively estimated.
A Novel Document Image Binarization For Optical Character Recognition (Editor IJCATR)
This paper presents a technique for document image binarization that accurately segments the foreground text from poorly
degraded document images. Segmentation of text from poorly degraded document images is a very demanding job due to the
high variation between the background and the foreground of the document. This paper proposes a novel document image
binarization technique that segments the text by using adaptive image contrast, a combination of the local image contrast
and the local image gradient that is effective in overcoming variations in text and background caused by different types
of degradation effects. In the proposed technique, an adaptive contrast map is first constructed for a degraded input
document image. The contrast map is then binarized by global thresholding and combined with Canny’s edge map to identify
the text stroke edge pixels. The text is further segmented by a local thresholding method based on the detected stroke
edge pixels. The proposed method is simple, robust, and requires minimum parameter tuning.
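The global-thresholding step applied to the contrast map is commonly Otsu's method; a self-contained sketch over a 256-bin histogram (an illustrative stand-in, since the abstract does not fix the exact global algorithm) is:

```python
def otsu_threshold(hist):
    """Otsu's global threshold from a 256-bin histogram: pick the level that
    maximizes the between-class variance of background vs. foreground."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]                  # background weight up to level t
        if w_b == 0:
            continue
        w_f = total - w_b               # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b               # background mean
        m_f = (sum_all - sum_b) / w_f   # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On the contrast map this separates high-contrast (stroke) pixels from the background; intersecting that result with the Canny edge map then retains only genuine stroke edge pixels.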
American standard code for information interchange mapping technique for tex... (IJECEIAES)
One of the significant techniques for hiding important information (such as text, image, and audio) is steganography, which keeps sensitive information as secret as possible inside a conventional, non-secret file or message, a need that has grown with the massive expansion of data transmission through the Internet. This paper uses the American standard code for information interchange (ASCII) mapping technique (AMT) to hide data in color and grey images: the text is converted to binary form, the three levels of the red, green, and blue (RGB) image and the grey image are likewise converted to binary form, and every two bits of the text are then hidden in two bits of one of the levels of the RGB or grey image. The text is thus distributed throughout the image, which allows hiding large amounts of data and transmits the information in a secure way.
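The two-bits-per-byte hiding step can be illustrated with a minimal sketch on a single channel; the function names are hypothetical, and the real AMT additionally applies its ASCII mapping table before embedding.

```python
def text_to_bits(text):
    """ASCII text -> flat bit list (8 bits per character, MSB first)."""
    return [(ord(c) >> k) & 1 for c in text for k in range(7, -1, -1)]

def embed_2bits(channel, text):
    """Hide two message bits in the two least-significant bits of each byte
    of one colour channel (a sketch of the 2-bit hiding idea above)."""
    bits = text_to_bits(text)
    assert 2 * len(channel) >= len(bits), "cover too small"
    out = list(channel)
    for i in range(0, len(bits), 2):
        pair = (bits[i] << 1) | bits[i + 1]
        out[i // 2] = (out[i // 2] & 0b11111100) | pair
    return out

def extract_2bits(channel, n_chars):
    """Read the two low bits of each byte back into characters."""
    bits = []
    for byte in channel[: n_chars * 4]:          # 4 bytes carry 8 bits = 1 char
        bits += [(byte >> 1) & 1, byte & 1]
    chars = [sum(b << (7 - k) for k, b in enumerate(bits[i:i + 8]))
             for i in range(0, len(bits), 8)]
    return "".join(map(chr, chars))
```

Since only the two low bits of each carrier byte change, the distortion per pixel channel is at most 3 grey levels, which keeps the stego image visually close to the cover.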
EXTENDED WAVELET TRANSFORM BASED IMAGE INPAINTING ALGORITHM FOR NATURAL SCENE... (cscpconf)
This paper proposes an exemplar-based image inpainting using the extended wavelet transform.
Image inpainting modifies an image with the available information outside the region to be
inpainted in an undetectable way. The extended wavelet transform operates in two dimensions. The
Laplacian pyramid is first used to capture the point discontinuities, and then followed by a
directional filter bank to link point discontinuities into linear structures. The proposed model
effectively captures the edges and contours of natural scene images.
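The Laplacian pyramid stage can be sketched in one dimension as follows (a simplified illustration; the paper's transform is two-dimensional and is followed by a directional filter bank that links the captured point discontinuities into linear structures):

```python
def downsample(sig):
    """Blur with a small [1, 2, 1]/4 kernel, then keep every other sample."""
    padded = [sig[0]] + sig + [sig[-1]]
    blurred = [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4.0
               for i in range(1, len(sig) + 1)]
    return blurred[::2]

def upsample(sig, n):
    """Linear interpolation back up to length n (factor-2 expansion)."""
    out = []
    for i in range(n):
        pos = i / 2.0
        lo = min(int(pos), len(sig) - 1)
        hi = min(lo + 1, len(sig) - 1)
        out.append(sig[lo] + (sig[hi] - sig[lo]) * (pos - lo))
    return out

def laplacian_pyramid(sig, levels):
    """Each level stores the detail lost by one blur-and-downsample step;
    the last entry is the coarse residual. Reconstruction is exact."""
    pyr = []
    for _ in range(levels):
        small = downsample(sig)
        up = upsample(small, len(sig))
        pyr.append([a - b for a, b in zip(sig, up)])
        sig = small
    pyr.append(sig)
    return pyr

def reconstruct(pyr):
    sig = pyr[-1]
    for detail in reversed(pyr[:-1]):
        sig = [a + b for a, b in zip(upsample(sig, len(detail)), detail)]
    return sig
```

The detail levels concentrate energy at point discontinuities (edges), which is exactly what the inpainting model needs to propagate structure into the missing region.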
IMAGE SEGMENTATION BY MODIFIED MAP-ML ESTIMATIONS (cscpconf)
Though numerous algorithms exist to perform image segmentation, there are several issues
related to the execution time of these algorithms. Image segmentation is essentially a label
relabeling problem under a probability framework. To estimate the label configuration, an iterative
optimization scheme is implemented to alternately carry out the maximum a posteriori (MAP)
estimation and the maximum likelihood (ML) estimation. In this paper this technique is
modified so that it performs segmentation within a stipulated time period. Extensive
experiments show that the results obtained are comparable with existing algorithms.
The algorithm executes faster than the existing algorithm and gives automatic
segmentation without any human intervention. Its results match image edges closely,
in line with human perception.
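The alternating MAP/ML iteration described above can be sketched on a 1-D signal with two labels; the squared-error data cost and the fixed Potts smoothness weight `beta` are simplifying assumptions, not the paper's full model.

```python
def map_ml_segment(pixels, iters=10, beta=2.0):
    """Tiny 1-D two-label version of the alternating scheme: the ML step
    re-estimates class means from the current labels; the MAP step relabels
    each pixel to minimize squared error plus a Potts smoothness penalty."""
    mean = sum(pixels) / len(pixels)
    labels = [1 if p > mean else 0 for p in pixels]   # initial labeling
    for _ in range(iters):
        # ML step: class means given the current labels
        means = []
        for c in (0, 1):
            members = [p for p, l in zip(pixels, labels) if l == c]
            means.append(sum(members) / len(members) if members else mean)
        # MAP step: data cost + smoothness cost against the neighbours
        new = []
        for i, p in enumerate(pixels):
            costs = []
            for c in (0, 1):
                data = (p - means[c]) ** 2
                smooth = sum(beta for j in (i - 1, i + 1)
                             if 0 <= j < len(pixels) and labels[j] != c)
                costs.append(data + smooth)
            new.append(costs.index(min(costs)))
        labels = new
    return labels
```

Each sweep is linear in the number of pixels, which is why bounding the number of iterations (as the modified method does) directly bounds the segmentation time.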
Text-Image Separation in Document Images using Boundary/Perimeter Detection (IDES Editor)
Document analysis plays an important role in office
automation, especially in intelligent signal processing. The
proposed system consists of two modules: block segmentation
and block identification. In this approach, first a document is
segmented into several non-overlapping blocks by utilizing a
novel recursive segmentation technique, and then
the features embedded in each segmented block are extracted.
Two kinds of features, connected components and image
boundary/perimeter features, are extracted. Documents with
text inside images posed limitations in earlier reported literature.
This is taken care of by applying an additional pass of Run
Length Smearing on the extracted image that contains text.
The proposed scheme is independent of the type and language of the
document.
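The Run Length Smearing pass mentioned above can be sketched on a single binary row: background runs shorter than a threshold are filled so that nearby black pixels merge into candidate text blocks (the threshold value here is an assumption; in practice it is tuned to the expected character spacing).

```python
def rlsa_row(row, threshold):
    """1-D run-length smearing: interior background runs (0s) shorter than
    `threshold` are filled with 1s, merging nearby black pixels into blocks."""
    out = list(row)
    n = len(row)
    i = 0
    while i < n:
        if out[i] == 0:
            j = i
            while j < n and out[j] == 0:
                j += 1
            # fill only gaps that lie between black pixels, not the margins
            if 0 < i and j < n and (j - i) < threshold:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out
```

Applying this horizontally (and, in full RLSA, vertically as well) turns individual characters into solid runs whose connected components can then be classified as text or image blocks.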
Many applications such as robot navigation, defense, medical and remote sensing perform
various processing tasks, which can be performed more easily when all objects in different images of the
same scene are combined into a single fused image. In this paper, we propose a fast and effective
method for image fusion. The proposed method derives the intensity-based variations, that is, the large-scale and
small-scale variations, from the source images. In this approach, guided filtering is employed for this extraction.
Gaussian and Laplacian pyramidal approach is then used to fuse the different layers obtained.
Experimental results demonstrate that the proposed method can obtain better performance for fusion of
all sets of images. The results clearly indicate the feasibility of the proposed approach.
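The two-scale idea can be sketched on 1-D signals, with a box blur standing in for the guided filter and a max-absolute rule for the detail layer (the paper fuses the layers with a Gaussian and Laplacian pyramidal approach; this simplified rule is only illustrative):

```python
def box_blur(sig, radius=1):
    """Simple moving average, a stand-in for the guided filter's smoothing."""
    n = len(sig)
    return [sum(sig[max(0, i - radius): min(n, i + radius + 1)]) /
            len(sig[max(0, i - radius): min(n, i + radius + 1)])
            for i in range(n)]

def two_scale_fuse(a, b):
    """Split each source into a smooth base layer and a detail layer,
    average the bases, keep the stronger detail, and recombine."""
    base_a, base_b = box_blur(a), box_blur(b)
    det_a = [x - y for x, y in zip(a, base_a)]
    det_b = [x - y for x, y in zip(b, base_b)]
    base = [(x + y) / 2 for x, y in zip(base_a, base_b)]
    det = [x if abs(x) >= abs(y) else y for x, y in zip(det_a, det_b)]
    return [x + y for x, y in zip(base, det)]
```

Averaging the base layers preserves overall illumination from both sources, while the max-absolute detail rule carries over whichever source has the sharper local structure at each position.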
A method for semantic-based image retrieval using hierarchical clustering tre... (TELKOMNIKA JOURNAL)
Semantic extraction for images is an urgent problem and is applied in many different semantic retrieval systems. In this paper, a semantic-based image retrieval (SBIR) system is proposed based on the combination of the growth partitioning tree (GP-Tree), which was built in the authors’ previous work, with a self-organizing map (SOM) network and a neighbor graph (called SgGP-Tree) to improve accuracy. For each query image, a similar set of images is retrieved on the SgGP-Tree, and a set of visual words is extracted relying on the classes obtained from mask region-based convolutional neural networks (R-CNN), as the basis for querying the semantics of input images on an ontology with a simple protocol and resource description framework query language (SPARQL) query. The experiment was performed on image datasets such as ImageCLEF and MS-COCO, with precision values of 0.898453 and 0.875467, respectively. These results are compared with related works on the same image datasets, showing the effectiveness of the proposed methods.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
ISSN: 2302-9285
Bulletin of Electr Eng and Inf, Vol. 8, No. 2, June 2019 : 551 – 557
Image binarization (thresholding) algorithms can be divided into global thresholding, local thresholding, and mixed strategies. Global thresholding establishes a single threshold for all pixels of the image, whereas local thresholding applies a different threshold to each pixel, computed from its local neighborhood. Hybrid approaches combine both strategies, using global and local information together. Global thresholding techniques usually need less computation and work well in simple cases, but fail on complex images. The most popular global threshold is Otsu's method [15], which computes the gray-level histogram and takes the value that minimizes the within-class variance; several refinements of this method exist [16, 17]. Taking a closer look at document-specific contributions, Ntirogiannis et al. [18] proposed a combination of global and local thresholding for handwritten text recognition: first, a background estimation is applied to binarize the image globally; the method then works with Connected Components (CCs), using stroke characteristics to discard components that do not fit the common stroke criteria. This is consistent with the study by Moghaddam and Cheriet [19], which works in a similar manner, extracting features such as stroke width and line height; it adapts itself to the document structure by combining grid-based modeling with an estimated background map.
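As a concrete illustration of the global strategy, Otsu's threshold can be computed directly from the gray-level histogram. The sketch below (Python with NumPy; not part of the original paper) maximizes the between-class variance, which is equivalent to minimizing the within-class variance:

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method [15]: pick the gray level that maximizes the
    between-class variance (equivalently, minimizes the within-class one)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_score = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        score = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if score > best_score:
            best_score, best_t = score, t
    return best_t

# bimodal toy image: half gray level 50 (text), half gray level 200 (paper)
img = np.concatenate([np.full(500, 50), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)   # any t in (50, 200] separates the two modes
```

On such a cleanly bimodal histogram the global threshold works well; the failure cases discussed below arise when illumination varies across the page.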
On degraded documents such as historical records, traditional and general methods perform poorly. Processing degraded documents involves not only binarization but also the restoration of document parts (mainly lost strokes) [20, 21]. For this purpose, stroke characteristics are usually integrated into the binarization, and a vast set of contributions addresses these issues [22-24]. For example, Moghaddam and Cheriet [25] binarize degraded documents by combining different binarization scales; their work aims to restore lost strokes and eliminate ink bleed-through. In contrast, Nina et al. [26] showed that recursive Otsu thresholding can also be used to binarize historical documents. With the same objective, Su et al. [27] studied an adaptive approach combining image contrast, gradients, and text shapes estimated by Canny's edge extractor. Valizadeh and Kabir [28] suggest extracting useful features, such as a structural contrast that takes the stroke width into account, to classify pixels as background or foreground; the feature space is partitioned into several regions (clustering), and the pixels within each region are classified by combining the region information with the previously extracted features.
In this paper, a modification of the Wolf algorithm based on the optimum mean is presented. The aim of this study is to improve detection accuracy, remove blurring from the binarization result, and at the same time reduce noise and artefacts by determining the optimum threshold value through an adjustment of the mean. In the analysis stage, several image quality assessment (IQA) measures are computed: F-measure, Peak Signal-to-Noise Ratio (PSNR), and Negative Rate Metric (NRM). This paper is organized into four sections. Section 2 explains the proposed method, which focuses on the Optimum Mean technique; Section 3 presents the performance results in terms of the IQA measures; and Section 4 summarizes the work.
2. RESEARCH METHOD
The optimum mean is proposed as a modification of the Wolf method. The Wolf method localizes artificial text in images and videos using a measure of accumulated gradients and morphological processing; the quality of the localized text is improved by robust multiple-frame integration [29]. However, the Wolf method does not work properly on images that contain many structures, such as text images and retinal images. The Wolf threshold is given by:
$T = (1 - a)\,m + a\,M + a\,\dfrac{s}{R}\,(m - M)$   (1)
where m and s are the local mean and standard deviation, a is a gain parameter, M is the minimum gray level of the image, and R is the maximum of the standard deviations over all windows. The Wolf method successfully segments the text region but fails when dealing with non-uniform and low-contrast input images. In addition, the output of the Wolf method looks blurred, unclear, and of low quality, especially on thin text.
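A minimal sketch of the Wolf threshold (1), assuming a 3×3 window, gain a = 0.5, and SciPy's uniform filter for the local statistics (these implementation choices are illustrative, not prescribed by the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wolf_threshold(img, a=0.5, win=3):
    """Per-pixel Wolf threshold (1): T = (1-a)m + aM + a(s/R)(m - M)."""
    img = img.astype(float)
    m = uniform_filter(img, win)                                       # local mean
    s = np.sqrt(np.clip(uniform_filter(img**2, win) - m**2, 0, None))  # local std
    M = img.min()           # minimum gray level of the whole image
    R = max(s.max(), 1e-9)  # maximum local standard deviation (guard against /0)
    return (1 - a) * m + a * M + a * (s / R) * (m - M)

def wolf_binarize(img, a=0.5, win=3):
    # convention: text (dark) pixels -> 0, background -> 255
    return np.where(img.astype(float) > wolf_threshold(img, a, win), 255, 0).astype(np.uint8)

img = np.full((10, 10), 200, np.uint8)
img[4:6, 4:6] = 20            # a small dark "stroke" on bright paper
out = wolf_binarize(img)      # background -> 255, stroke -> 0
```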
The objective of this paper is to determine an optimum threshold value that is likely to work better for many types of low-contrast document images. A slight modification of the Wolf threshold, using an adjusted mean value, is proposed. The optimum mean method overcomes the problem above by finding the maximum threshold value. Its main advantage over Wolf is that it considerably improves image quality, reduces blurring, and eliminates noise.
Improved wolf algorithm on document images detection using optimum mean technique (Wan Azani Mustafa)
Referring to (1), the mean value m has a strong effect on the threshold value, as shown in Figure 1. For example, if the mean value is below the average (m − 15), the result is blurred and more detail is lost; if the mean value is above the average (m + 15), the result is better and improved. In this paper, a specific value (the maximum mean) is found to replace the normal mean value, the aim being a value larger than the normal average.
Figure 1 compares the resulting images for different mean values:
- Original standard mean: the output image is satisfactory, but a few thin texts are lost and the image looks blurred.
- Mean lower than the standard mean (mean − 15): the output images look more blurred and unclear.
- Mean higher than the standard mean (mean + 15): the output image is clearer and the thin text is detected.

Figure 1. Comparison of resulting images with different mean values
The Wolf method fails to binarize text images in low-contrast conditions because its threshold value is not accurate. The proposed method therefore improves the threshold value through an optimum mean calculation applied to the low-contrast image. A specific, maximum threshold value is suggested because a threshold that is too high introduces noise and artefacts in the resulting image. In this paper, the maximum mean is calculated to replace the original mean. The maximum mean equation is:
$O_{mean} = \max_{x,y}\big(mean(x, y)\big)$   (2)
where max(x, y) denotes the maximum intensity of the overall mean image and mean is the standard mean. The method normally operates on a 3×3 window, which yields low mean values throughout the image, so the maximum mean value is selected for the final algorithm. The final proposed algorithm is:
$T = (1 - a)\,O_{mean} + a\,M + a\,\dfrac{s}{R}\,(O_{mean} - M)$   (3)
With the modified algorithm, the blurring problem and the unsegmented thin text can be solved, improving the binarization result and producing high image quality. To evaluate the effectiveness of the proposed method against the Wolf method, 14 document images were tested; the results are discussed in the following section.
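Under the same illustrative assumptions (3×3 window, a = 0.5, SciPy local statistics), the modification in (2)-(3) amounts to replacing the local mean m in Wolf's formula with the scalar O_mean, the maximum of the local-mean image; a sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def optimum_mean_threshold(img, a=0.5, win=3):
    """Proposed threshold (3): Wolf's formula with the local mean m
    replaced by O_mean, the maximum of the local-mean image, as in (2)."""
    img = img.astype(float)
    m = uniform_filter(img, win)                                       # local mean image
    s = np.sqrt(np.clip(uniform_filter(img**2, win) - m**2, 0, None))  # local std
    o_mean = m.max()        # (2): maximum intensity of the overall mean image
    M = img.min()           # minimum gray level of the whole image
    R = max(s.max(), 1e-9)  # maximum local standard deviation
    return (1 - a) * o_mean + a * M + a * (s / R) * (o_mean - M)

img = np.full((10, 10), 200, np.uint8)
img[4:6, 4:6] = 20                          # thin dark text on bright paper
out = np.where(img > optimum_mean_threshold(img), 255, 0).astype(np.uint8)
```

Because O_mean is at least as large as every local mean, the resulting threshold is higher than Wolf's, which is what lets faint, thin strokes fall below it and be kept as text.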
3. RESULTS AND ANALYSIS
In this study, 14 document images with non-uniform background intensity due to uneven illumination were tested [30]. The dataset is publicly available at http://utopia.duth.gr/~ipratika/HDIBCO2012/benchmark. The input images have different pixel sizes, and their major problems are faded ink and degraded backgrounds. In this experiment, all images were processed in grayscale. The proposed method was run with the default window size of 3×3 [31, 32]; in the binarization stage, a small window gives a better result than a larger one [33]. A few statistics, such as the variance and the standard deviation, were computed before applying the final proposed equation. Figure 2 compares the resulting images of the Wolf method and the proposed method. The Wolf method fails to detect thin text, especially on input images with illumination and low-contrast problems, whereas the proposed modification increases the performance: the resulting images are clearer and of improved quality.
Figure 2. Comparison of the Wolf method and the Optimum Mean method on document images (columns: Original, Ground Truth, Wolf, Optimum Mean)
5. Bulletin of Electr Eng and Inf ISSN: 2302-9285
Improved wolf algorithm on document images detection using optimum mean technique (Wan Azani Mustafa)
555
To quantify the performance, selected Image Quality Assessment (IQA) measures were calculated: F-measure, PSNR, and NRM. The quality of the segmented image is determined by the pixel-wise similarity of the resulting segmented image to the manually segmented image. The formulas for F-measure, NRM, and PSNR are as follows:
$F\text{-}measure = \dfrac{2 \times Recall \times Precision}{Recall + Precision}$, where $Recall = \dfrac{TP}{TP + FN}$ and $Precision = \dfrac{TP}{TP + FP}$   (4)

$NRM = \dfrac{1}{2}\left(\dfrac{FN}{FN + TP} + \dfrac{FP}{FP + TN}\right)$   (5)

$PSNR = 10 \log_{10}\dfrac{S^2}{MSE}$   (6)
TP, FP, FN, and TN denote the True Positive, False Positive, False Negative, and True Negative counts, respectively [34, 35]. Higher F-measure and PSNR indicate a better binarization result [36, 37]; conversely, a lower NRM indicates better image quality.
Table 1 presents the comparison between the Wolf method and the Optimum Mean method based on these IQA measures.
Table 1. Results of Image Quality Assessment (IQA) in terms of F-measure, PSNR, and NRM

Image     F-measure                PSNR                     NRM
          Wolf    Optimum Mean     Wolf    Optimum Mean     Wolf    Optimum Mean
1         70.81   78.12            15.79   15.66            0.22    0.08
2         28.90   76.28            11.49   14.19            0.42    0.14
3         26.86   81.47            12.68   15.69            0.42    0.04
4         78.95   82.46            17.68   17.29            0.17    0.05
5         72.54   65.52            15.83   12.07            0.21    0.03
6         59.42   69.90            14.12   12.59            0.28    0.04
7         27.06   60.98            12.51   13.76            0.42    0.25
8         81.31   71.35            16.40   12.44            0.15    0.03
9         37.71   84.67            10.54   14.84            0.38    0.11
10        22.53   80.93            10.58   14.63            0.44    0.13
11        12.06   84.77            10.93   15.57            0.47    0.06
12        28.17   74.29            13.18   15.81            0.42    0.17
13        18.29   49.86            13.58   14.59            0.45    0.32
14        69.86   82.89            16.66   17.62            0.23    0.05
Average   45.32   74.53            13.71   14.77            0.33    0.11
According to Table 1, the F-measure and PSNR increase after applying the proposed method. On average, the proposed method achieved an F-measure of 74.53 and a PSNR of 14.77, compared with an F-measure of 45.32 and a PSNR of 13.71 for the Wolf method; in terms of percentage increase, the proposed method gained 64.45% (F-measure) and 7.73% (PSNR). The proposed method also gave the lowest NRM (0.11), compared with 0.33 for the Wolf method. From these data, we can see that the proposed method successfully improves the binarization performance of the Wolf method.
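The percentage increases quoted above follow directly from the table averages, as this small check shows:

```python
# averages taken from Table 1
wolf_f, opt_f = 45.32, 74.53
wolf_psnr, opt_psnr = 13.71, 14.77

f_gain = 100 * (opt_f - wolf_f) / wolf_f              # relative F-measure gain
psnr_gain = 100 * (opt_psnr - wolf_psnr) / wolf_psnr  # relative PSNR gain
```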
4. CONCLUSION
Nowadays, digital image processing is very important for analysing the information in images. Thousands of valuable historical documents stored on the shelves of national libraries throughout the world are waiting to be scanned in order to facilitate access to the information they contain. In this paper, a new approach based on a modification of the Wolf method was presented. The main goal of the study was to determine the optimum threshold value using the maximum mean; the second objective was to solve the blurring problem and improve detection performance. Based on the numerical results, the proposed method obtained a higher PSNR (14.773) and F-measure (74.539) and a lower NRM (0.111) than the Wolf method. In summary, the proposed method is effective and successful in solving the Wolf problem and also improves the binarization result.
ACKNOWLEDGEMENTS
The authors would like to acknowledge support from the Short Term Grant, number 9001-00576, of Universiti Malaysia Perlis (UniMAP).
REFERENCES
[1] B. M. Singh, R. Sharma, D. Ghosh, and A. Mittal, “Adaptive binarization of severely degraded and non-uniformly
illuminated documents,” Int. J. Doc. Anal. Recognit., pp. 393–412, 2014.
[2] E. T. Zemouri, Y. Chibani, and Y. Brik, “Restoration based Contourlet Transform for historical document image
binarization,” Int. Conf. Multimed. Comput. Syst. -Proceedings, vol. 0, pp. 309–313, 2014.
[3] H. Z. Nafchi, R. F. Moghaddam, and M. Cheriet, “Phase-based binarization of ancient document images: Model
and applications,” IEEE Trans. Image Process., vol. 23, no. 7, pp. 2916–2930, 2014.
[4] A. Sehad, Y. Chibani, and M. Cheriet, “Gabor Filters for Degraded Document Image Binarization,” Proc. Int. Conf.
Front. Handwrit. Recognition, ICFHR, vol. 2014-Decem, pp. 702–707, 2014.
[5] W. A. Mustafa and H. Yazid, “Image Enhancement Technique on Contrast Variation : A Comprehensive Review,”
J. Telecommun. Electron. Comput. Eng., vol. 9, no. 3, pp. 199–204, 2017.
[6] W. A. Mustafa, H. Yazid, and S. Yaacob, “A Review : Comparison Between Different Type of Filtering Methods
on the Contrast Variation Retinal Images,” in IEEE International Conference on Control System, Computing and
Engineering, 2014, pp. 542–546.
[7] W. A. Mustafa, H. Yazid, and M. Jaafar, “An Improved Sauvola Approach on Document Images Binarization,” J.
Telecommun. Electron. Comput. Eng., vol. 10, no. 2, pp. 43–50, 2018.
[8] E. Ahmadi, Z. Azimifar, M. Shams, M. Famouri, and M. J. Shafiee, “Document image binarization using a
discriminative structural classifier,” Pattern Recognit. Lett., vol. 63, pp. 36–42, 2015.
[9] N. Mitianoudis and N. Papamarkos, “Document image binarization using local features and Gaussian mixture
modeling,” Image Vis. Comput., vol. 38, pp. 33–51, 2015.
[10] W. A. Mustafa and H. Yazid, “Illumination and Contrast Correction Strategy using Bilateral Filtering and
Binarization Comparison,” J. Telecommun. Electron. Comput. Eng., vol. 8, no. 1, pp. 67–73, 2016.
[11] W. A. Mustafa and H. Yazid, “Background Correction using Average Filtering and Gradient Based Thresholding,”
J. Telecommun. Electron. Comput. Eng., vol. 8, no. 5, pp. 81–88, 2016.
[12] W. A. Mustafa and H. Yazid, “Contrast and Luminosity Correction Based On Statistical Region Information,” Adv.
Sci. Lett., vol. 23, no. 6, pp. 5383–5386, 2017.
[13] W. A. Mustafa and M. M. M. A. Kader, “Binarization of Document Images: A Comprehensive Review,” J. Phys.
Conf. Ser., vol. 1019, no. 012023, pp. 1–9, 2018.
[14] W. A. Mustafa and M. M. M. A. Kader, “Binarization of Document Image Using Optimum Threshold
Modification,” J. Phys. Conf. Ser., vol. 1019, no. 012022, pp. 1–8, 2018.
[15] N. Otsu, “A Threshold Selection Method from Gray-Level Histograms,” in IEEE Transactions On Systrems, Man,
And Cybernetics, 1979, vol. 20, no. 1, pp. 62–66.
[16] A. D. Brink, “Thresholding of digital images using two-dimensional entropies,” Pattern Recognit., vol. 25, no. 8,
pp. 803–808, 1992.
[17] J. Kittler and J. Illingworth, “On threshold selection using clustering criteria,” IEEE Trans. Syst. Man. Cybern., vol.
SMC-15, no. 5, pp. 652–655, 1985.
[18] K. Ntirogiannis, B. Gatos, and I. Pratikakis, “A combined approach for the binarization of handwritten document
images,” Pattern Recognit. Lett., vol. 35, pp. 3–15, Jan. 2014.
[19] R. F. Moghaddam and M. Cheriet, “AdOtsu: An adaptive and parameterless generalization of Otsu’s method for document image binarization,” Pattern Recognit., vol. 45, pp. 2419–2431, Jun. 2012.
[20] W. A. Mustafa, H. Aziz, W. Khairunizam, Z. Ibrahim, S. Ab, and Z. M.Razlan, “Review of Different Binarization
Approaches on Degraded Document Images,” in IEEE International Conference on Computational Approach in
Smart Systems Design and Applications (ICASSDA), 2018, pp. 1–8.
[21] W. A. Mustafa, W. Khairunizam, Z. Ibrahim, S. Ab, and Z. M.Razlan, “Improved Feng Binarization Based on
Max-Mean Technique on Document Image,” in IEEE International Conference on Computational Approach in
Smart Systems Design and Applications (ICASSDA), 2018, pp. 1–6.
[22] J. Amudha, N. Pradeepa, and R. Sudhakar, “A survey on digital image restoration,” in Procedia Engineering, 2012,
vol. 38, pp. 2378–2382.
[23] C. C. Fung and R. Chamchong, “A review of evaluation of optimal binarization technique for character
segmentation in historical manuscripts,” in 3rd International Conference on Knowledge Discovery and Data
Mining, WKDD 2010, 2010, pp. 236–240.
[24] J. Kaur and R. Mahajan, “A Review of Degraded Document Image Binarization Techniques,” Int. J. Adv. Res.
Comput. Commun. Eng., vol. 3, no. 5, 2014.
[25] R. Farrahi Moghaddam and M. Cheriet, “A multi-scale framework for adaptive binarization of degraded document
images,” Pattern Recognit., vol. 43, no. 6, pp. 2186–2198, Jun. 2010.
[26] O. Nina, B. Morse, and W. Barrett, “A Recursive Otsu Thresholding Method for Scanned Document Binarization,”
in IEEE Workshop on Applications of Computer Vision (WACV), 2010, pp. 307–314.
[27] B. Su, S. Lu, and C. L. Tan, “Robust document image binarization technique for degraded document images,”
IEEE Trans. Image Process., vol. 22, no. 4, pp. 1408–1417, 2013.
[28] M. Valizadeh and E. Kabir, “Binarization of degraded document image based on feature space partitioning and
classification,” Int. J. Doc. Anal. Recognit., vol. 15, no. 1, pp. 57–69, 2012.
[29] C. Wolf and J.-M. Jolion, “Extraction and Recognition of Artificial Text in Multimedia Documents,” Pattern Anal. Appl., vol. 6, no. 4, pp. 309–326, 2003.
[30] W. A. Mustafa and M. M. M. A. Kader, “Document Image Database (2009-2012): A Systematic Review,” J. Phys. Conf. Ser., vol. 1019, no. 012024, pp. 1–7, 2018.
[31] K. Saeed and M. Albakoor, “Region growing based segmentation algorithm for typewritten and handwritten text
recognition,” Appl. Soft Comput., vol. 9, no. 2, pp. 608–617, Mar. 2009.
[32] J. Ruiz-del-Solar and J. Quinteros, “Illumination compensation and normalization in eigenspace-based face
recognition: A comparative study of different pre-processing approaches,” Pattern Recognit. Lett., vol. 29, pp.
1966–1979, 2008.
[33] W. A. Mustafa, H. Yazid, and S. Yaacob, “Illumination Normalization of Non-Uniform Images Based on Double Mean Filtering,” in IEEE International Conference on Control System, Computing and Engineering, 2014, pp. 366–371.
[34] R. Hedjam, H. Z. Nafchi, R. F. Moghaddam, M. Kalacska, and M. Cheriet, “ICDAR 2015 contest on MultiSpectral
Text Extraction (MS-TEx 2015),” Proc. Int. Conf. Doc. Anal. Recognition, ICDAR, pp. 1181–1185, 2015.
[35] I. Pratikakis, B. Gatos, and K. Ntirogiannis, “ICDAR 2013 document image binarization contest (DIBCO 2013),”
in Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, 2013, pp. 1471–
1476.
[36] W. A. Mustafa and H. Yazid, “Conversion of the Retinal Image Using Gray World Technique,” J. Biomimetics,
Biomater. Biomed. Eng., vol. 36, pp. 70–77, 2018.
[37] W. A. Mustafa and M. M. M. A. Kader, “A Comparative Study of Automated Segmentation Methods for Cell
Nucleus Detection,” Malaysian Appl. Biol., vol. 47, no. 2, pp. 125–129, 2018.
BIOGRAPHIES OF AUTHORS
Wan Azani Mustafa received his degree in Biomedical Electronic Engineering (2013) and PhD in Mechatronic Engineering (2017) from Universiti Malaysia Perlis (UniMAP). He registered with the Board of Engineers Malaysia (BEM) in 2014 and has published more than 40 academic articles and 1 book. He is now a senior lecturer at Universiti Malaysia Perlis, Malaysia. His current interests include image processing, biomechanics, intelligent systems, and control systems.
Mohamed Mydin M. Abdul Kader received his Master of Science in Electrical Engineering from Universiti Sains Malaysia (USM). He was recognized as a Professional Technologist member by the Malaysia Board of Technologists (MBOT) in 2018. He is now a lecturer at Universiti Malaysia Perlis, Malaysia. His current interests include image processing, control systems, robotics, and automation.
Zahereel Ishwar Abdul Khalib received his PhD in Computer Engineering from UniMAP, his MSc in Real-Time Software Engineering from UTM, and his BSc in Computer Engineering from California State University, Sacramento, USA. He is now a senior lecturer at the School of Computer & Communication Engineering, Universiti Malaysia Perlis, Malaysia. His research interests include image and signal processing, real-time systems, multicore programming, and multimedia streaming.