The document proposes a hybrid image enhancement method for degraded document images that combines Frankle-McCann Retinex with morphological operations. Retinex is used to highlight image contours and suppress background noise and deformation. Morphological operations such as thickening, filling, and bridging are then applied to further suppress the background and connect foreground text. The method is tested on 300 degraded estampage images of 11th-century Kannada inscriptions and is shown to enhance the images effectively for optical character recognition, improving contrast and removing unwanted artifacts compared with other Retinex methods.
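As a sketch of the morphological "bridging" idea described above (not the paper's exact operator set), a plain binary dilation with a small square structuring element can reconnect broken character strokes; the 3x3 element and the toy image here are illustrative assumptions:

```python
import numpy as np

def dilate(binary, k=3):
    """Binary dilation with a k x k square structuring element.

    Dilation grows foreground regions, which can bridge small gaps
    between broken character strokes before OCR.
    """
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")
    out = np.zeros_like(binary)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

# Two strokes separated by a one-pixel gap at column 3 ...
img = np.zeros((5, 7), dtype=bool)
img[2, 1:3] = True   # left stroke
img[2, 4:6] = True   # right stroke

bridged = dilate(img)
# ... after dilation the gap is filled.
print(bool(bridged[2, 3]))  # True
```

A real pipeline would alternate dilation with erosion (closing) to avoid thickening strokes permanently.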
Hyperspectral Data Compression Using Spatial-Spectral Lossless Coding Technique (CSCJournals)
Hyperspectral imaging is widely used in many applications, especially in vegetation, climate change, and desert studies. Such imaging produces a huge amount of data, which demands transmission, processing, and storage resources, especially for spaceborne imaging. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually yields a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may achieve the desired ratio, but with a significant degradation of the data's object-identification performance. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we analyze the spectral cross-correlation between bands of Hyperion hyperspectral data: the spectral cross-correlation matrix is calculated and its strength assessed, and we propose a new technique to find highly correlated groups of bands in the hyperspectral data cube based on the "inter-band correlation square". From the resulting groups of bands we derive a new predictor that efficiently predicts all bands within the data cube using a weighted combination of spectral and spatial prediction. The results are evaluated against other state-of-the-art predictors for lossless compression.
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDY (csandit)
The majority of applications require high-resolution images to derive and analyze data accurately and easily, and image super-resolution plays an effective role in such applications. Image super-resolution is the process of producing a high-resolution image from a low-resolution image. In this paper, we study various image super-resolution techniques with respect to the quality of their results and their processing time. This comparative study compares four single-image super-resolution algorithms. For a fair comparison, the algorithms are tested on the same dataset and the same platform to show the major advantages of one over the others.
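The baseline that single-image super-resolution methods are usually compared against is simple interpolation. A minimal numpy sketch of bilinear upscaling (an assumption for illustration, not one of the four compared algorithms):

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image by `factor` with bilinear interpolation,
    the standard low-cost baseline for super-resolution comparisons."""
    h, w = img.shape
    H, W = h * factor, w * factor
    # Source coordinates for every target pixel (corner-aligned).
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

lr = np.array([[0.0, 1.0],
               [1.0, 0.0]])
hr = bilinear_upscale(lr, 2)
print(hr.shape)  # (4, 4)
```

Quality metrics such as PSNR are then computed between the upscaled result and a ground-truth high-resolution image.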
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne... (IOSR Journals)
Abstract: We investigated the classification of satellite images and multispectral remote sensing data, focusing on uncertainty analysis in the produced land-cover maps. We propose an efficient technique for classifying multispectral satellite images into road, building, and green areas using a Support Vector Machine (SVM). Classification is carried out in three modules: (a) preprocessing, using Gaussian filtering and conversion from RGB to Lab color space; (b) object segmentation, using the proposed cluster-repulsion-based kernel Fuzzy C-Means (FCM); and (c) classification, using a one-to-many SVM classifier. The goal of this research is to classify satellite images efficiently using object-based image analysis. The proposed work is evaluated on satellite images and its accuracy is compared with FCM-based classification. The results show that the proposed technique achieves better results, reaching accuracies of 79%, 84%, 81%, and 97.9% for road, tree, building, and vehicle classification, respectively.
Keywords: satellite image, FCM clustering, classification, SVM classifier.
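The segmentation module above builds on the standard Fuzzy C-Means update. A minimal sketch of one plain FCM iteration (the paper's cluster-repulsion kernel variant is not reproduced; the data and fuzzifier m=2 are illustrative assumptions):

```python
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-9):
    """One Fuzzy C-Means iteration: update memberships, then centroids.

    X: (n, d) pixel feature vectors; centers: (c, d) current cluster centers.
    """
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps  # (n, c)
    inv = d ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)          # memberships sum to 1 per pixel
    um = u ** m
    new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, new_centers

X = np.array([[0.0], [0.1], [5.0], [5.1]])            # two obvious 1-D clusters
u, centers = fcm_step(X, np.array([[0.0], [5.0]]))
print(np.round(centers.ravel(), 2))
```

Iterating `fcm_step` until the centers stop moving gives the full algorithm; the soft memberships `u` are what distinguish FCM from hard K-means.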
Analysis of Multi-focus Image Fusion Method Based on Laplacian Pyramid (Rajyalakshmi Reddy)
This paper presents a simple and efficient algorithm for multi-focus image fusion that uses a multiresolution signal decomposition scheme, the Laplacian pyramid. The principle of the Laplacian pyramid transform is introduced, and the fusion strategy based on it is described in detail. Analysis of the experimental results shows that the method performs well and that the quality of the fused image is better than the results of other methods.
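A minimal two-level sketch of Laplacian-pyramid fusion (the down/upsampling filters and the max-magnitude selection rule are simplifying assumptions, not necessarily the paper's exact choices):

```python
import numpy as np

def down(img):   # 2x2 average pooling as a crude reduce step
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):     # nearest-neighbour expansion as a crude expand step
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fuse(a, b):
    """Fuse two registered images: keep the Laplacian (detail) coefficient
    with the larger magnitude, i.e. the in-focus region, and average the base."""
    la, lb = a - up(down(a)), b - up(down(b))     # Laplacian detail layers
    base = (down(a) + down(b)) / 2.0              # fused coarse approximation
    detail = np.where(np.abs(la) >= np.abs(lb), la, lb)
    return up(base) + detail

a = np.zeros((4, 4)); a[1, 1] = 4.0   # sharp detail present only in image a
b = np.zeros((4, 4))                  # blurred / out-of-focus counterpart
f = fuse(a, b)
print(f.shape)  # (4, 4)
```

A practical implementation uses Gaussian filtering for reduce/expand and more pyramid levels, but the select-max-detail rule is the core of the method.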
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION (IJCI JOURNAL)
The availability of imaging sensors operating in multiple spectral bands has led to the need for image fusion algorithms that combine the images from these sensors efficiently into a single image that is more informative and more perceptible to the human eye. Multispectral image fusion is the process of combining optically acquired images from different spectral bands. In this paper, we use pixel-level image fusion based on principal component analysis (PCA) to combine satellite images of the same scene from seven different spectral bands. PCA is used because it performs well for grayscale image fusion: it reduces a large set of variables to a small set that still contains most of the information present in the original. The paper compares several parameters, namely entropy, standard deviation, and correlation coefficient, as the number of fused images varies from two to seven. Finally, the paper shows that the information content of the fused image saturates after fusing four images.
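PCA-based pixel-level fusion amounts to treating each band as a variable and each pixel as an observation, then projecting onto the first principal component. A small sketch under that reading (random toy bands stand in for co-registered satellite bands):

```python
import numpy as np

def pca_fuse(bands):
    """Pixel-level PCA fusion: stack co-registered bands as variables and
    project every pixel onto the first principal component."""
    X = np.stack([b.ravel() for b in bands], axis=1)   # (pixels, bands)
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)                   # eigenvalues ascending
    pc1 = vecs[:, -1]                                  # largest-variance direction
    fused = Xc @ pc1
    return fused.reshape(bands[0].shape)

rng = np.random.default_rng(0)
b1 = rng.random((8, 8))
b2 = b1 + 0.01 * rng.random((8, 8))   # a second, highly correlated band
fused = pca_fuse([b1, b2])
print(fused.shape)  # (8, 8)
```

The fused component is zero-mean by construction; for display it would be rescaled to the original intensity range.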
Learning Based Single Frame Image Super-resolution Using Fast Discrete Curvel... (CSCJournals)
High-resolution (HR) images play a vital role in all imaging applications as they offer more detail. The images captured by a camera system are of degraded quality due to the imaging system and are low-resolution (LR) images. Image super-resolution (SR) is the process of obtaining an HR image by combining one or multiple LR images of the same scene. In this paper, a learning-based single-frame image super-resolution technique is proposed using Fast Discrete Curvelet Transform (FDCT) coefficients. The FDCT is an extension of Cartesian wavelets with anisotropic scaling and many directions and positions, which forms tight wedges. Such wedges allow the FDCT to capture smooth curves and fine edges at multiple resolution levels. The finer-scale curvelet coefficients of the LR image are learnt locally from a set of high-resolution training images, and the super-resolved image is reconstructed by the inverse Fast Discrete Curvelet Transform (IFDCT). This technique preserves the fine edges of the reconstructed HR image by extrapolating FDCT coefficients from the high-resolution training images. Experimental results show clear improvements in MSE and PSNR.
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Robust High Resolution Image from the Low Resolution Satellite Image (idescitation)
In this paper, we propose a framework for detecting and locating land cover classes from a low-resolution image, which can play a very important role in satellite surveillance imagery from MODIS data. The land cover classes are located by constructing super-resolution images from the MODIS data, whose highest resolution is 250 meters per pixel. The low-resolution satellite image is magnified and de-blurred through kernel regression. SR reconstruction is an image interpolation technique used to increase the size of a single image; the SRKR algorithm takes a single low-resolution image and generates a de-blurred high-resolution image. We first perform bicubic interpolation on the input low-resolution (LR) image with the desired scaling factor, and the kernel regression (KR) model is then used to generate the de-blurred HR image. K-means, one of the simplest unsupervised learning algorithms for the well-known clustering problem, generates a specified number of disjoint, flat (non-hierarchical) clusters. K-means clustering is employed to compare MODIS data and recognize the land cover type, i.e., "Forest", "Land", "Sea", and "Ice".
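The K-means step used above for land-cover labelling can be sketched in a few lines of numpy (the deterministic initialization and the 2-D toy "pixels" are illustrative assumptions):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's K-means: assign points to the nearest center, then
    recompute each center as the mean of its assigned points."""
    # Simple deterministic init: k points spread evenly through the data.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy "pixels": two well-separated spectral clusters (e.g. land vs sea).
X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
labels, centers = kmeans(X, 2)
print(labels)
```

For MODIS data, X would be the per-pixel spectral vectors and k the number of land-cover types ("Forest", "Land", "Sea", "Ice").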
An Efficient K-Nearest Neighbors Based Approach for Classifying Land Cover Re... (IDES Editor)
In recent times, researchers in the remote sensing community have taken great interest in utilizing hyperspectral data for in-depth analysis of the Earth's surface. Hyperspectral imaging generally produces high-dimensional data, which creates a pressing need for approaches that can process such data efficiently. In this paper, we present an efficient approach for the analysis of hyperspectral data that incorporates non-linear manifold learning and the k-nearest neighbor (k-NN) method. Instead of dealing with the high-dimensional feature space directly, the proposed approach employs non-linear manifold learning, which determines a low-dimensional embedding of the original high-dimensional data by computing the geometric distances between samples. Initially, the dimensionality of the hyperspectral data is reduced to a pairwise distance matrix using Johnson's shortest path algorithm and multidimensional scaling (MDS). Subsequently, the land cover regions in the hyperspectral data are classified based on the k nearest neighbors. The proposed k-NN based approach is evaluated on hyperspectral data collected by NASA's (National Aeronautics and Space Administration) AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) over the Kennedy Space Center, Florida. The classification accuracies of the proposed k-NN based approach demonstrate its effectiveness in land cover classification of hyperspectral data.
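The final classification step is a standard majority-vote k-NN in the embedded space. A minimal sketch (the 1-D toy features and class names are illustrative assumptions):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify sample x by majority vote among its k nearest training
    samples, using Euclidean distance in the low-dimensional embedding."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Toy embedded coordinates for two land-cover classes.
X_train = np.array([[0.0], [0.2], [0.1], [5.0], [5.2], [5.1]])
y_train = ["water", "water", "water", "vegetation", "vegetation", "vegetation"]
print(knn_predict(X_train, y_train, np.array([0.15])))  # water
```

In the paper's pipeline, `X_train` would hold the MDS-embedded pixel vectors rather than raw spectra.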
A hybrid approach for analysis of dynamic changes in spatial data (ijdms)
Any geographic location undergoes changes over time. These changes can be observed by the naked eye only if they are numerous and spread over a small area; when the changes are small and spread over a large area, it is very difficult to observe or extract them. Presently, only a few methods, such as GRID and DBSCAN, are available for tackling this type of problem, and these existing mechanisms are not adequate for accurately detecting the most important geometric changes, such as deforestation and land grabbing. This paper proposes a new mechanism to solve this problem. In the proposed method, spatial changes in satellite images taken over a period of time are compared. Partitioning the satellite image into grids, as employed in the proposed hybrid method, provides finer details of the image, which improves the precision of clustering compared with manipulating the whole image at once, as DBSCAN does. The simplicity of DBSCAN is exploited while processing each partitioned grid cell.
SEGMENTATION OF LUNG GLANDULAR CELLS USING MULTIPLE COLOR SPACES (IJCSEA Journal)
Early detection of lung cancer is a challenging problem the world faces today. Before glandular cells can be classified as malignant or benign, a reliable segmentation technique is required. In this paper we present a novel lung glandular cell segmentation technique. The technique uses a combination of multiple color spaces and various clustering algorithms to automatically find the best possible segmentation result. The unsupervised clustering methods K-means and Fuzzy C-means were applied in multiple color spaces such as HSV, LAB, LUV, and xyY. Experimental segmentation results in the various color spaces are provided to show the performance of the proposed system.
Development and Comparison of Image Fusion Techniques for CT & MRI Images (IJERA Editor)
Image processing techniques primarily focus on enhancing the quality of an image, or a set of images, to derive the maximum information from them. Image fusion is a technique for producing a superior-quality image from a set of available images: it combines relevant information from two or more images into a single image that is more informative and complete than any of the inputs. A lot of research is being done in this field, encompassing computer vision, automatic object detection, image processing, parallel and distributed processing, robotics, and remote sensing. This paper explains the theoretical and implementation issues of seven image fusion algorithms and presents the corresponding experimental results. The fusion algorithms are assessed based on the study and development of several image quality metrics.
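Two of the quality metrics commonly used to assess fused images (and named in the multispectral fusion abstract above) are histogram entropy and the correlation coefficient against a reference. A small sketch of both, assuming 8-bit grey levels:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits per pixel.
    Higher entropy is usually read as more information content."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def correlation(a, b):
    """Correlation coefficient between a fused image and a reference."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

flat = np.full((8, 8), 128.0)               # constant image: zero entropy
print(entropy(flat))                        # 0.0
a = np.arange(64, dtype=float).reshape(8, 8)
print(round(correlation(a, 2 * a + 3), 6))  # 1.0 (perfect linear relation)
```

Standard deviation, the third metric mentioned, is just `img.std()` over the fused image.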
Unsupervised Building Extraction from High Resolution Satellite Images Irresp... (CSCJournals)
Extraction of geospatial data from photogrammetric sensing images becomes more and more important with advances in technology. Today, Geographic Information Systems (GIS) are used in a large variety of applications in engineering, city planning, and the social sciences. Geospatial data such as roads, buildings, and rivers are the most critical feeds of a GIS database. However, extracting buildings is one of the most complex and challenging tasks, as there is a lot of inhomogeneity due to the varying hierarchy: the types of buildings and the shapes of rooftops vary widely, and in some areas buildings are placed irregularly or too close to each other. For these reasons, even with high-resolution IKONOS and QuickBird satellite imagery, the quality of building extraction is quite low. This paper proposes a solution to the problem of automatic, unsupervised extraction of building features in multispectral satellite images, irrespective of rooftop structure. Instead of detecting the region of interest, the algorithm eliminates areas other than the region of interest, which extracts the rooftops completely irrespective of their shapes. Extensive tests indicate that the methodology performs well at extracting buildings in complex environments.
Review paper on segmentation methods for multiobject feature extraction (eSAT Journals)
Abstract: Feature extraction and representation play a vital role in multimedia processing. It is still a challenge for computer vision systems to extract ideal features that represent the intrinsic characteristics of an image. A multiobject feature extraction system is a system that can extract the features and locations of multiple objects in an image. In this paper we discuss various methods for extracting the locations and features of multiple objects, and describe a system that does so by implementing an algorithm as hardware logic on a field-programmable gate array based platform. There are many multiobject extraction methods that can be used for image segmentation based on motion, color intensity, and texture. By calculating the zeroth- and first-order moments of objects, it is possible to obtain the locations and sizes of multiple objects in an image. Keywords: multiobject extraction, image segmentation
Content-based image retrieval (CBIR) is the retrieval of images with respect to visual appearance, such as texture, shape, and color. The methods, components, and algorithms adopted in content-based image retrieval are commonly derived from areas such as pattern recognition, signal processing, and computer vision. The shape and color features are extracted by means of the wavelet transform and the color histogram, and a new content-based retrieval method is proposed in this paper. Algorithms are proposed for shape, shade, and texture feature extraction, and the discrete wavelet transform is used in computing the Euclidean distance. Clusters are computed with a modified K-Means clustering technique, and the analysis compares the query image with the database images. MATLAB is used to execute the queries. The K-Means based extraction is implemented through fragmentation and grid-means modules, feature extraction, and a K-nearest-neighbor clustering algorithm to construct the content-based image retrieval system. The obtained results are computed and compared against all the other algorithms for the retrieval of quality image features.
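The core CBIR loop described above — extract a color feature per image, then rank the database by Euclidean distance to the query — can be sketched with a joint RGB histogram (the 4-bins-per-channel feature and toy images are illustrative assumptions, not the paper's exact descriptors):

```python
import numpy as np

def color_hist(img, bins=4):
    """Normalised joint RGB histogram used as the colour feature vector."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=[(0, 256)] * 3)
    return h.ravel() / h.sum()

def retrieve(query, database):
    """Index of the database image whose histogram is closest to the
    query's, measured by Euclidean distance."""
    q = color_hist(query)
    d = [np.linalg.norm(q - color_hist(img)) for img in database]
    return int(np.argmin(d))

red  = np.zeros((4, 4, 3)); red[..., 0] = 200   # solid red image
blue = np.zeros((4, 4, 3)); blue[..., 2] = 200  # solid blue image
print(retrieve(red, [blue, red]))  # 1
```

A full system would concatenate wavelet-based texture/shape features onto the histogram before computing distances.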
Performance analysis of contourlet based hyperspectral image fusion method (ijitjournal)
Recently, contourlet transform has been widely used in hyperspectral image fusion due to its advantages,
such as high directionality and anisotropy; and studies show that the contourlet-based fusion methods
perform better than the existing conventional methods including wavelet-based fusion methods. Few studies
have been done to comparatively analyze the performance of contourlet-based fusion methods;
furthermore, no research has been done to analyze the contourlet-based fusion methods by focusing on
their unique transform mechanisms, and no research has focused on comparing the original contourlet transform with its upgraded versions. In this paper, we investigate three different kinds of contourlet transform: i) the original contourlet transform, ii) the nonsubsampled contourlet transform, and iii) the contourlet transform with sharp frequency localization. The latter two transforms were developed to overcome the major drawbacks of the original contourlet transform, so it is necessary and beneficial to see how they perform in the context of hyperspectral image fusion. The results of our comparative analysis show that the latter two transforms perform better than the original transform in terms of increasing spatial resolution and preserving spectral information.
DEEP LEARNING BASED TARGET TRACKING AND CLASSIFICATION DIRECTLY IN COMPRESSIV... (sipij)
Past research has found that compressive measurements save data storage and bandwidth. However, compressive measurements are difficult to use directly for target tracking and classification without pixel reconstruction, because a Gaussian random measurement matrix destroys the target location information in the original video frames. This paper summarizes our research on target tracking and classification directly in the compressive measurement domain. We focus on one type of compressive measurement based on pixel subsampling: the compressive measurements are obtained by randomly subsampling the original pixels in video frames. Even in this special setting, conventional trackers do not work well. We propose a deep learning approach that integrates YOLO (You Only Look Once) and ResNet (residual network) for target tracking and classification in low-quality videos: YOLO performs multiple-target detection and ResNet performs target classification. Extensive experiments using optical and mid-wave infrared (MWIR) videos from the SENSIAC database demonstrate the efficacy of the proposed approach.
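The pixel-subsampling measurement described above can be sketched in a few lines (the sampling ratio and fixed seed are illustrative assumptions). Unlike a dense Gaussian measurement matrix, each surviving pixel keeps its spatial location, which is what lets detection run without reconstruction:

```python
import numpy as np

def subsample(frame, ratio, seed=0):
    """Compressive measurement by random pixel subsampling: keep a fixed
    fraction of pixel locations and zero out the rest."""
    rng = np.random.default_rng(seed)
    mask = rng.random(frame.shape) < ratio   # which pixels are measured
    return frame * mask, mask

frame = np.ones((10, 10))
measured, mask = subsample(frame, 0.25)
print(measured.sum() == mask.sum())  # True: only sampled pixels survive
```

The masked frames (low-quality videos, in the paper's terms) are then fed directly to the YOLO detector and ResNet classifier.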
Interpretability Evaluation of Annual Mosaic Image of MTB Model for Land Cove... (TELKOMNIKA JOURNAL)
To verify whether the annual mosaic image of the MTB model is acceptable for further digital analysis, it is necessary to evaluate its visual interpretability. The MTB model is an effort to integrate multi-scene and multi-temporal data to obtain a minimum-cloud-cover mosaic image in locations that are often covered by clouds and haze. This study evaluates the interpretability of the annual mosaic image for the analysis of land cover changes. The data used are images from 2015, 2016, and 2017 covering part of central Sumatra. Visual interpretation follows a series of steps, starting with identification of objects using interpretation keys, followed by spectral band correlations and scattergram analysis, and ending with a consistency assessment. The consistency assessment step determines how clearly and easily objects can be recognized in the annual mosaic images. The results show that the optimal spectral bands for RGB combinations in visual interpretation are the SWIR-1, NIR, and Red bands. Based on the evaluation, the annual mosaic image of the MTB model gives consistent results for the clearness and ease of object recognition. Thus the annual mosaic image of the MTB model with 0.02x0.02 degree tiles is acceptable for further digital processing as well as digital land cover analysis.
This paper presents building identification from satellite images for monitoring illegal land usage. Rapid urbanization is increasing land usage, which makes monitoring illegal land usage very important. The implemented system identifies buildings in satellite images provided by Bing Maps, and an Adaptive Neuro-Fuzzy Inference System is used to check the database information. The proposed system identifies only building regions in the satellite images, improving the image details effectively.
A NOVEL METRIC APPROACH EVALUATION FOR THE SPATIAL ENHANCEMENT OF PAN-SHARPEN... (cscpconf)
Various methods, mostly at the pixel level, can be used to produce high-resolution multispectral images from a high-resolution panchromatic image (PAN) and low-resolution multispectral images (MS). The quality of image fusion is an essential determinant of the value of fused images for many applications, and spatial and spectral quality are the two important indexes used to evaluate any fused image. However, the jury is still out on the benefits of a fused image compared with its source images. In addition, there is a lack of measures for objectively assessing the spatial resolution achieved by fusion methods, so an objective assessment of spatial resolution for fused images is required. Therefore, this paper proposes a new approach to estimate the spatial resolution improvement, the High Pass Division Index (HPDI), computed from the spatial frequency of the edge regions of the image. The paper compares various analytical techniques for evaluating spatial quality and for estimating the color distortion added by image fusion, including MG, SG, FCC, SD, En, SNR, CC, and NRMSE. In addition, it compares various image fusion techniques based on pixel-level and feature-level fusion.
Volumetric Medical Images Lossy Compression using Stationary Wavelet Transfor... (Omar Ghazi)
Abstract: The aim of this study is to reduce the storage size, bitrate, and bandwidth required for sending and receiving images, and to decrease the processing time as much as possible. The study proposes a novel system for efficient lossy volumetric medical image compression using the Stationary Wavelet Transform and Linde-Buzo-Gray vector quantization. The system combines the Linde-Buzo-Gray vector quantization technique for lossy compression with arithmetic coding and Huffman coding for lossless compression. The proposed system uses the Stationary Wavelet Transform and compares the results with the Discrete Wavelet Transform, the Lifting Wavelet Transform, and the Discrete Cosine Transform at three decomposition levels. The system also compares these results with transforms using only arithmetic coding and Huffman coding for lossless compression. The results show that the proposed system outperforms the others.
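Wavelet-based compressors work by transforming the image into an approximation subband plus sparse detail subbands, which quantize well. A one-level 2-D Haar decomposition is a simple stand-in for the stationary wavelet transform used in the paper (the averaging normalization here is a simplifying choice, not the paper's exact filter bank):

```python
import numpy as np

def haar_1level(img):
    """One level of a 2-D Haar-style decomposition: approximation plus
    horizontal, vertical, and diagonal detail subbands (half resolution)."""
    tl, tr = img[0::2, 0::2], img[0::2, 1::2]
    bl, br = img[1::2, 0::2], img[1::2, 1::2]
    a = (tl + tr + bl + br) / 4      # approximation (local averages)
    h = (tl - tr + bl - br) / 4      # horizontal detail
    v = (tl + tr - bl - br) / 4      # vertical detail
    d = (tl - tr - bl + br) / 4      # diagonal detail
    return a, h, v, d

img = np.full((4, 4), 10.0)          # constant image
a, h, v, d = haar_1level(img)
print(a.mean(), np.abs(h).max())     # all information sits in `a`
```

For a constant region every detail coefficient is zero, which is exactly why detail subbands compress so well under vector quantization and entropy coding.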
Multispectral (MS) images are used in aerial and space applications, target detection, and remote sensing. MS images are rich in spectral resolution but at the cost of spatial resolution. We propose a new method to increase the spatial resolution of MS images. For spatial-resolution enhancement we employ a super-resolution technique that uses a Principal Component Analysis (PCA) based approach, learning edge details from a database. Experiments have been carried out on real multispectral (MS) data; assessing the usefulness of the method for hyperspectral (HS) data is left as future work.
Enhancement and Segmentation of Historical Records (csandit)
Document Analysis and Recognition (DAR) aims to extract information from documents automatically and also to aid human comprehension. The automatic processing of degraded historical documents is an application of the document image analysis field, which is confronted with many difficulties due to storage conditions and the complexity of the script. The main interest in enhancing historical documents is to remove undesirable artifacts that appear in the background and to highlight the foreground, so as to enable automatic recognition of documents with high accuracy. This paper addresses pre-processing and segmentation of ancient scripts as an initial step to automate the task of an epigraphist in reading and deciphering inscriptions. Pre-processing involves enhancement of degraded ancient document images, achieved through four different spatial filtering methods for smoothing or sharpening, namely the Median, Gaussian blur, Mean and Bilateral filters, with different mask sizes. This is followed by binarization of the enhanced image to highlight the foreground information, using the Otsu thresholding algorithm. In the second phase, segmentation is carried out using Drop Fall and Water Reservoir approaches to obtain sampled characters, which can be used in later stages of OCR. The system showed good results when tested on nearly 150 samples of varyingly degraded epigraphic images, giving the best enhanced output for a 4x4 mask size for the Median filter, a 2x2 mask size for Gaussian blur, and a 4x4 mask size for the Mean and Bilateral filters. The system can effectively sample characters from enhanced images, giving segmentation rates of 85%-90% for both the Drop Fall and Water Reservoir techniques.
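The Otsu binarization step named in the abstract above can be sketched in pure Python. The 8-bit histogram setup and the toy pixel values are assumptions of this illustration, not taken from the paper.

```python
# Otsu's method: pick the threshold that maximizes the between-class
# variance of the grayscale histogram, then binarize the image.

def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += hist[t]                   # background pixel count (<= t)
        if w_b == 0:
            continue
        w_f = total - w_b                # foreground pixel count (> t)
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                # background mean intensity
        m_f = (total_sum - sum_b) / w_f  # foreground mean intensity
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Dark text (values near 20) on a brighter degraded background (near 200).
pixels = [18, 22, 25, 19, 200, 205, 190, 210, 198, 202]
t = otsu_threshold(pixels)
binary = [0 if p <= t else 255 for p in pixels]
```

On this toy data the threshold lands between the two intensity clusters, so the text pixels map to 0 and the background to 255.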
Robust High Resolution Image from the Low Resolution Satellite Image - idescitation
In this paper, we propose a framework for detecting and locating land cover classes from a low-resolution image, which can play a very important role in satellite surveillance using MODIS data. Land cover classes are identified by constructing super-resolution images from the MODIS data, whose highest resolution is 250 meters per pixel, by magnifying and de-blurring the low-resolution satellite image through kernel regression. Super-resolution (SR) reconstruction is a form of image interpolation used to increase the size of a single image. The SRKR algorithm takes a single low-resolution (LR) image and generates a de-blurred high-resolution (HR) image: we first perform bi-cubic interpolation on the input LR image with the desired scaling factor, and the kernel regression (KR) model is then used to generate the de-blurred HR image. K-means, one of the simplest unsupervised learning algorithms for the well-known clustering problem, generates a specified number of disjoint, flat (non-hierarchical) clusters. K-means clustering is employed to compare MODIS data and recognize land cover types, i.e., "Forest", "Land", "Sea", and "Ice".
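The K-means step in the abstract above can be sketched in pure Python. The 1-D reflectance features, the initial centroids, and the two-class setup are illustrative assumptions, not the paper's actual configuration.

```python
# K-means: alternately assign samples to the nearest centroid and
# recompute each centroid as the mean of its assigned samples.

def kmeans(samples, centroids, iters=10):
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for s in samples:
            j = min(range(len(centroids)), key=lambda i: abs(s - centroids[i]))
            groups[j].append(s)
        # Keep a centroid unchanged if its group went empty.
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    labels = [min(range(len(centroids)), key=lambda i: abs(s - centroids[i]))
              for s in samples]
    return centroids, labels

# Toy per-pixel reflectance values: two land cover types, e.g. "Forest" vs "Ice".
samples = [0.11, 0.12, 0.10, 0.80, 0.82, 0.79]
centroids, labels = kmeans(samples, centroids=[0.0, 1.0])
```

Real usage would cluster multi-band pixel vectors rather than scalars, but the assign-then-update loop is identical.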
An Efficient K-Nearest Neighbors Based Approach for Classifying Land Cover Re...IDES Editor
In recent times, researchers in the remote sensing community have been greatly interested in utilizing hyperspectral data for in-depth analysis of the Earth's surface. In general, hyperspectral imaging produces high-dimensional data, which creates a pressing need for approaches that can process such data efficiently. In this paper, we present an efficient approach for the analysis of hyperspectral data by incorporating the concepts of non-linear manifold learning and the k-nearest neighbor (k-NN) method. Instead of dealing with the high-dimensional feature space directly, the proposed approach employs non-linear manifold learning, which determines a low-dimensional embedding of the original high-dimensional data by computing the geometric distances between the samples. Initially, the dimensionality of the hyperspectral data is reduced to a pairwise distance matrix by making use of Johnson's shortest path algorithm and Multidimensional Scaling (MDS). Subsequently, based on the k-nearest neighbors, the classification of the land cover regions in the hyperspectral data is achieved. The proposed k-NN based approach is evaluated using hyperspectral data collected by NASA's (National Aeronautics and Space Administration) AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) from Kennedy Space Center, Florida. The classification accuracies of the proposed k-NN based approach demonstrate its effectiveness in land cover classification of hyperspectral data.
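The final k-NN classification over a pairwise distance matrix can be sketched as follows. The tiny distance row, the class names, and k=3 are illustrative assumptions.

```python
# k-NN classification from a precomputed pairwise distance matrix, as
# produced by a manifold-learning embedding (e.g. after shortest-path + MDS).
from collections import Counter

def knn_predict(dist_row, train_labels, k=3):
    """Majority vote among the k nearest training samples."""
    order = sorted(range(len(dist_row)), key=lambda i: dist_row[i])
    votes = Counter(train_labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

train_labels = ["water", "water", "vegetation", "vegetation", "vegetation"]
# Geodesic distances from one test sample to the five training samples.
dist_row = [0.9, 0.8, 0.2, 0.3, 0.25]
label = knn_predict(dist_row, train_labels, k=3)  # three nearest are vegetation
```

Because the vote operates on the distance matrix directly, the classifier never needs the original high-dimensional spectra.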
A hybrid approach for analysis of dynamic changes in spatial data - ijdms
Any geographic location undergoes changes over a period of time. These changes can be observed by the naked eye only if they are large in number and spread over a small area. However, when the changes are small and spread over a large area, it is very difficult to observe or extract them. Presently, there are few methods available for tackling these types of problems, such as GRID and DBSCAN. However, these existing mechanisms are not adequate for finding accurate changes, which is essential with respect to the most important geometrical changes such as deforestation and land grabbing. This paper proposes a new mechanism to solve the above problem. In the proposed method, spatial image changes are compared over a period of time using satellite imagery. Partitioning the satellite image into grids, as employed in the proposed hybrid method, provides finer details of the image, which improves the precision of clustering compared to whole-image manipulation as used in DBSCAN. The simplicity of DBSCAN is exploited while processing each partitioned grid portion.
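The grid-partitioning idea above, comparing two co-registered images cell by cell rather than manipulating the whole image at once, can be sketched in pure Python. The cell size and change threshold are assumptions of this illustration.

```python
# Partition two co-registered images into grid cells and flag cells whose
# mean intensity changed beyond a threshold between the two acquisition dates.

def changed_cells(img_a, img_b, cell=2, threshold=20):
    rows, cols = len(img_a), len(img_a[0])
    flagged = []
    for r in range(0, rows, cell):
        for c in range(0, cols, cell):
            block_a = [img_a[i][j] for i in range(r, min(r + cell, rows))
                                   for j in range(c, min(c + cell, cols))]
            block_b = [img_b[i][j] for i in range(r, min(r + cell, rows))
                                   for j in range(c, min(c + cell, cols))]
            mean_a = sum(block_a) / len(block_a)
            mean_b = sum(block_b) / len(block_b)
            if abs(mean_a - mean_b) > threshold:
                flagged.append((r // cell, c // cell))
    return flagged

# 4x4 images from two dates: only the top-left area changed (e.g. deforested).
before = [[100, 100, 50, 50], [100, 100, 50, 50],
          [50, 50, 50, 50], [50, 50, 50, 50]]
after_ = [[10, 10, 50, 50], [10, 10, 50, 50],
          [50, 50, 50, 50], [50, 50, 50, 50]]
cells = changed_cells(before, after_)
```

A density-based pass such as DBSCAN could then be run only inside the flagged cells, which is the precision-versus-cost trade-off the abstract describes.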
SEGMENTATION OF LUNG GLANDULAR CELLS USING MULTIPLE COLOR SPACES - IJCSEA Journal
Early detection of lung cancer is a challenging problem the world faces today. Prior to classifying glandular cells as malignant or benign, a reliable segmentation technique is required. In this paper we present a novel lung glandular cell segmentation technique. The technique uses a combination of multiple color spaces and various clustering algorithms to automatically find the best possible segmentation result. The unsupervised clustering methods K-means and Fuzzy C-means were used on multiple color spaces such as HSV, LAB, LUV, and xyY. Experimental results of segmentation using various color spaces are provided to show the performance of the proposed system.
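Converting RGB pixels into an alternative color space such as HSV before clustering, as the abstract above describes, can be done with the stdlib `colorsys` module; the sample pixel values below are illustrative.

```python
# Convert RGB pixels to HSV before clustering; segmenting in HSV often
# separates stain hue from brightness better than raw RGB does.
import colorsys

def rgb_to_hsv_pixels(pixels):
    """pixels: list of (r, g, b) tuples with components in 0..255."""
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for r, g, b in pixels]

pixels = [(255, 0, 0), (0, 0, 255)]   # pure red, pure blue
hsv = rgb_to_hsv_pixels(pixels)       # the hue channel separates them cleanly
```

The per-space clustering results can then be scored (e.g. by cluster compactness) to pick the best segmentation, as the paper proposes.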
Development and Comparison of Image Fusion Techniques for CT&MRI Images - IJERA Editor
Image processing techniques primarily focus upon enhancing the quality of an image or a set of images to derive the maximum information from them. Image fusion is a technique for producing a superior-quality image from a set of available images. It is the process of combining relevant information from two or more images into a single image, wherein the resulting image is more informative and complete than any of the input images. A lot of research is being done in this field, encompassing areas of computer vision, automatic object detection, image processing, parallel and distributed processing, robotics, and remote sensing. This paper explains the theoretical and implementation issues of seven image fusion algorithms and presents the experimental results of the same. The fusion algorithms are assessed based on the study and development of several image quality metrics.
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...CSCJournals
Extraction of geospatial data from photogrammetric sensing images becomes more and more important with advances in technology. Today, Geographic Information Systems are used in a large variety of applications in engineering, city planning, and the social sciences. Geospatial data like roads, buildings, and rivers are the most critical feeds of a GIS database. However, extracting buildings is one of the most complex and challenging tasks, as there exists a lot of inhomogeneity due to varying hierarchy. Building types and rooftop shapes vary greatly, and in some areas buildings are placed irregularly or too close to each other. For these reasons, even with high-resolution IKONOS and QuickBird satellite imagery, the quality percentage of building extraction is very low. This paper proposes a solution to the problem of automatic and unsupervised extraction of building features, irrespective of rooftop structures, in multispectral satellite images. Instead of detecting the region of interest, the algorithm eliminates areas other than the region of interest, which extracts the rooftops completely irrespective of their shapes. Extensive tests indicate that the methodology performs well in extracting buildings in complex environments.
Review paper on segmentation methods for multiobject feature extraction - eSAT Journals
Abstract: Feature extraction and representation plays a vital role in multimedia processing. It is still a challenge in computer vision systems to extract ideal features that represent the intrinsic characteristics of an image. A multiobject feature extraction system is a system that can extract the features and locations of multiple objects in an image. In this paper we discuss various methods to extract the locations and features of multiple objects, and describe a system that does so by implementing an algorithm as hardware logic on a field-programmable gate array based platform. There are many multiobject extraction methods that can be used for image segmentation based on motion, color intensity, and texture. By calculating the zeroth- and first-order moments of objects it is possible to obtain the locations and sizes of multiple objects in an image. Keywords: multiobject extraction, image segmentation
Content-based image retrieval is the retrieval of images with respect to visual appearance, such as texture, shape, and color. The methods, components, and algorithms adopted in content-based image retrieval are commonly derived from areas such as pattern recognition, signal processing, and computer vision. In this work, the shape and color features are extracted by means of the wavelet transform and the color histogram, and a new content-based retrieval method is proposed. Algorithms are proposed for shape, shade, and texture feature extraction, and the discrete wavelet transform is implemented in order to compute the Euclidean distance. The clusters are calculated with the help of a modified K-means clustering technique, and the analysis is made between the query image and the database images. The MATLAB software is used to execute the queries. The K-means based extraction is performed using fragmentation and grid-means modules, feature extraction, and K-nearest neighbor clustering algorithms to construct the content-based image retrieval system. The obtained results are computed and compared with other algorithms for the retrieval of quality image features.
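The Euclidean-distance comparison between a query image and database images can be sketched over simple intensity histograms. The 4-bin histograms and toy pixel lists are assumptions; the paper computes the distance over wavelet- and color-histogram-derived features, which are swapped for plain histograms here for brevity.

```python
# Rank database images by Euclidean distance between normalized histograms.
import math

def histogram(pixels, bins=4):
    """Normalized intensity histogram of 0..255 pixel values."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // 256] += 1
    return [c / len(pixels) for c in h]

def euclidean(h1, h2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

query = histogram([10, 20, 30, 240, 250, 245])
database = {
    "dark_and_bright": histogram([5, 15, 25, 235, 251, 249]),
    "midtones": histogram([120, 130, 125, 140, 135, 128]),
}
best = min(database, key=lambda name: euclidean(query, database[name]))
```

The image whose feature histogram minimizes the distance is returned as the top retrieval result.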
Performance analysis of contourlet based hyperspectral image fusion method - ijitjournal
Recently, contourlet transform has been widely used in hyperspectral image fusion due to its advantages,
such as high directionality and anisotropy; and studies show that the contourlet-based fusion methods
perform better than the existing conventional methods including wavelet-based fusion methods. Few studies
have been done to comparatively analyze the performance of contourlet-based fusion methods;
furthermore, no research has been done to analyze the contourlet-based fusion methods by focusing on
their unique transform mechanisms. In addition, no research has focused on the original contourlet
transform and its upgraded versions. In this paper, we investigate three different kinds of contourlet
transform: i) original contourlet transform, ii) nonsubsampled contourlet transform, iii) contourlet
transform with sharp frequency localization. The latter two transforms were developed to overcome the
major drawbacks of the original contourlet transform; so it is necessary and beneficial to see how they
perform in the context of hyperspectral image fusion. The results of our comparative analysis show that the
latter two transforms perform better than the original contourlet transform in terms of increasing spatial
resolution and preserving spectral information.
DEEP LEARNING BASED TARGET TRACKING AND CLASSIFICATION DIRECTLY IN COMPRESSIV...sipij
Past research has found that compressive measurements save data storage and bandwidth. However, it is also observed that compressive measurements are difficult to use directly for target tracking and classification without pixel reconstruction, because the Gaussian random matrix destroys the target location information in the original video frames. This paper summarizes our research effort on target tracking and classification directly in the compressive measurement domain. We focus on one type of compressive measurement using pixel subsampling: the compressive measurements are obtained by randomly subsampling the original pixels in video frames. Even in this special setting, conventional trackers still do not work well. We propose a deep learning approach that integrates YOLO (You Only Look Once) and ResNet (residual network) for target tracking and classification in low-quality videos: YOLO is used for multiple-target detection and ResNet for target classification. Extensive experiments using optical and mid-wave infrared (MWIR) videos in the SENSIAC database demonstrate the efficacy of the proposed approach.
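The pixel-subsampling measurement described above can be sketched in pure Python. The frame size, the subsampling ratio, and the fixed seed are illustrative assumptions, not the paper's actual sensing setup.

```python
# Compressive measurement by random pixel subsampling: keep a random subset
# of pixel locations per frame and record (location, value) pairs.
import random

def subsample_frame(frame, ratio, rng):
    rows, cols = len(frame), len(frame[0])
    coords = [(r, c) for r in range(rows) for c in range(cols)]
    kept = rng.sample(coords, int(len(coords) * ratio))
    return {rc: frame[rc[0]][rc[1]] for rc in kept}

rng = random.Random(0)                     # fixed seed for reproducibility
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
measurements = subsample_frame(frame, ratio=0.25, rng=rng)
# Unlike a Gaussian random projection, each measurement keeps its pixel
# location, so spatial structure survives for detectors like YOLO.
```

This locality preservation is what makes direct detection in the measurement domain plausible, per the abstract's argument.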
Interpretability Evaluation of Annual Mosaic Image of MTB Model for Land Cove...TELKOMNIKA JOURNAL
To verify whether the annual mosaic image of the MTB model is acceptable for further digital analysis, it is necessary to evaluate its visual interpretability. The MTB model is an effort to integrate multi-scene and multi-temporal data to obtain a minimum-cloud-cover mosaic image in locations that are often covered by clouds and haze. This study evaluates the interpretability of the annual mosaic image for analysis of land cover changes. The data used are images from 2015, 2016, and 2017 covering a part of central Sumatra. Visual interpretation with a series of steps is used, starting with identification of the objects using interpretation keys, followed by spectral band correlations and scattergram analysis, and ending with a consistency assessment. The consistency assessment step is performed to determine the level of clearness and easiness of object recognition in the annual mosaic images. The results showed that the most optimal spectral bands for RGB combinations for visual interpretation were Band SWIR-1, Band NIR, and Band Red. Based on the evaluation results, the annual mosaic image of the MTB model gave consistent results for the clearness and easiness of object recognition. Thus the annual mosaic image of the MTB model with 0.02x0.02 degree tiles is acceptable for further digital processing as well as digital land cover analysis.
This paper presents building identification from satellite images, motivated by the monitoring of illegal land usage. Nowadays rapid urbanization leads to increased land usage, so monitoring illegal land usage is very important. The project identifies buildings from satellite images provided by Bing Maps, and an Adaptive Neuro-Fuzzy Inference System is used to check the database information. The proposed system identifies only building regions in the satellite images while effectively improving image detail.
Content Based Image Retrieval Using Dominant Color and Texture Features - IJMTST Journal
The purpose of this paper is to describe our research on different feature extraction and matching techniques in designing a Content-Based Image Retrieval (CBIR) system. Due to the enormous increase in image database sizes, as well as their vast deployment in various applications, the need for CBIR development arose. CBIR is the retrieval of images based on features such as color and texture. Image retrieval using the color feature alone cannot provide a good solution for accuracy and efficiency; the most important features are color and texture. This paper uses techniques for retrieving images based on their content, namely dominant color, texture, and a combination of both. The technique verifies the superiority of image retrieval using multiple features over a single feature.
Image compression and reconstruction using improved Stockwell transform for q...IJECEIAES
Image compression is an important stage in image processing since it reduces the data volume and speeds up image transmission and storage, whereas image reconstruction helps to recover the original information that was communicated. Wavelets are commonly cited as a novel technique for image compression, although their behavior on smooth areas of the image remains unsatisfactory. Stockwell transforms have recently entered the arena of image compression and reconstruction. As a result, a new technique for image compression based on an improved Stockwell transform is proposed. The discrete cosine transform, which involves bandwidth partitioning, is also investigated in this work to verify its experimental results. Wavelet-based techniques such as the multilevel Haar wavelet, generic multiwavelet transform, Shearlet transform, and Stockwell transform are examined in this paper. The MATLAB technical computing language is utilized to implement the existing approaches as well as the suggested improved Stockwell transform. Standard images commonly used in digital image processing applications, such as Lena, Cameraman, and Barbara, are investigated in this work. To evaluate the approaches, quality metrics such as mean square error (MSE), normalized cross-correlation (NCC), structural content (SC), peak signal-to-noise ratio (PSNR), average difference (AD), normalized absolute error (NAE), and maximum difference are computed and provided in tabular and graphical representations.
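Two of the quality metrics listed above, MSE and PSNR, can be computed directly; the toy 2x2 image pair below is an assumption of this illustration.

```python
# Mean squared error and peak signal-to-noise ratio between an original
# image and its reconstruction after lossy compression.
import math

def mse(a, b):
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

def psnr(a, b, peak=255.0):
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)

original = [[52, 55], [61, 59]]
reconstructed = [[54, 55], [60, 59]]
error = mse(original, reconstructed)       # (4 + 0 + 1 + 0) / 4 = 1.25
quality = psnr(original, reconstructed)    # higher dB means better fidelity
```

The remaining metrics (NCC, SC, AD, NAE, maximum difference) are similar per-pixel reductions over the same two images.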
Facial image retrieval on semantic features using adaptive mean genetic algor...TELKOMNIKA JOURNAL
The emergence of larger databases has made image retrieval techniques an essential component and has led to the development of more efficient image retrieval systems. Retrieval can be either content- or text-based. In this paper, the focus is on content-based image retrieval from the FGNET database. Input query images are subjected to several processing techniques before computing the squared Euclidean distance (SED) between them and the database images. The images with the shortest Euclidean distance are considered a match and are retrieved. The processing techniques involve the application of the median modified Wiener filter (MMWF), and the extraction of low-level features using histograms of oriented gradients (HOG), the discrete wavelet transform (DWT), GIST, and the local tetra pattern (LTrP). Finally, the features are selected using an Adaptive Mean Genetic Algorithm (AMGA). In this study, the average PSNR value obtained after applying the Wiener filter was 45.29. The performance of the AMGA was evaluated based on its precision, F-measure, and recall, and the obtained average values were respectively 0.75, 0.692, and 0.66. The performance of the AMGA was compared to those of the particle swarm optimization algorithm (PSO) and the genetic algorithm (GA) and found to be better, thus proving its efficiency.
The complexity of landscape pattern mining is well known, owing to the non-linear spatial image formation and inhomogeneity of satellite images. The LandEx tool from the literature needs several seconds to answer an input image pattern query, and the duration of content-based image retrieval depends on the input query complexity. This paper focuses on designing and implementing a training dataset to train an NML (Neural network based Machine Learning) algorithm to reduce the search time and improve result accuracy. The performance evaluation of the proposed NML CBIR (Content-Based Image Retrieval) method is carried out on satellite and natural images in terms of increased speed and accuracy.
Keywords: Spatial Image, Satellite image, NML, CBIR
A deep locality-sensitive hashing approach for achieving optimal image retri...IJECEIAES
Efficient methods that enable high-quality and rapid image retrieval are continuously needed, especially with the large mass of images generated from different sectors and domains like business, communication media, and entertainment. Recently, deep neural networks have been extensively proven to be higher-performing models compared to traditional ones. Besides, combining hashing methods with a deep learning architecture improves image retrieval time and accuracy. In this paper, we propose a novel image retrieval method that employs locality-sensitive hashing with convolutional neural networks (CNN) to extract different types of features from different model layers. The aim of this hybrid framework is to focus on both the high-level information that provides semantic content and the low-level information that provides visual content of the images. Hash tables are constructed from the extracted features and trained to achieve fast image retrieval. To verify the effectiveness of the proposed framework, a variety of experiments and computational performance analyses are carried out on the CIFAR-10 and NUS-WIDE datasets. The experimental results show that the proposed method surpasses most existing hash-based image retrieval methods.
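The locality-sensitive hashing step can be sketched with random hyperplanes. The 2-D vectors standing in for CNN features, the bit count, and the seed are assumptions of this illustration, not the paper's configuration.

```python
# Random-hyperplane LSH: each hash bit is the sign of a dot product with a
# random vector, so nearby feature vectors tend to share hash codes and
# land in the same hash-table bucket.
import random

def make_planes(dim, bits, rng):
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

def lsh_code(vec, planes):
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in planes)

rng = random.Random(42)
planes = make_planes(dim=2, bits=8, rng=rng)
a = lsh_code([1.0, 1.0], planes)
b = lsh_code([1.01, 0.99], planes)   # nearly identical feature vector
c = lsh_code([-1.0, -1.0], planes)   # opposite direction
matches_ab = sum(x == y for x, y in zip(a, b))
matches_ac = sum(x == y for x, y in zip(a, c))
```

In a retrieval system, the code serves as a bucket key, so a query only needs to be compared against the few images sharing its bucket.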
EXTENDED WAVELET TRANSFORM BASED IMAGE INPAINTING ALGORITHM FOR NATURAL SCENE...cscpconf
This paper proposes exemplar-based image inpainting using an extended wavelet transform. Image inpainting modifies an image, using the information available outside the region to be inpainted, in an undetectable way. The extended wavelet transform operates in two dimensions: the Laplacian pyramid is first used to capture point discontinuities, followed by a directional filter bank that links the point discontinuities into linear structures. The proposed model effectively captures the edges and contours of natural scene images.
Query Image Searching With Integrated Textual and Visual Relevance Feedback f...IJERA Editor
Many researchers have studied relevance feedback in the content-based image retrieval (CBIR) literature, but no CBIR search engines support it because of scalability, effectiveness, and efficiency issues. In this work, we implement integrated relevance feedback for retrieving web images. We concentrate on integrating relevance feedback (RF) based on both textual features (TF) and visual features (VF), and we also test them individually. The TF-based RF employs an effective search result clustering (SRC) algorithm to obtain salient phrases. A new user interface (UI) is then proposed to support RF. Experimental results show that the proposed algorithm is scalable, effective, and accurate.
A novel Image Retrieval System using an effective region based shape represen...CSCJournals
With recent improvements in methods for the acquisition and rendering of shapes, the need for retrieval of shapes from large repositories of shapes has gained prominence. A variety of methods have been proposed that enable the efficient querying of shape repositories for a desired shape or image. Many of these methods use a sample shape as a query and attempt to retrieve shapes from the database that have a similar shape. This paper introduces a novel and efficient shape matching approach for the automatic identification of real world objects. The identification process is applied on isolated objects and requires the segmentation of the image into separate objects, followed by the extraction of representative shape signatures and the similarity estimation of pairs of objects considering the information extracted from the segmentation process and shape signature. We compute a 1D shape signature function from a region shape and use it for region shape representation and retrieval through similarity estimation. The proposed region shape feature is much more efficient to compute than other region shape techniques invariant to image transformation.
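A 1-D region shape signature of the kind described above can be sketched as the distance from the region centroid to sampled boundary points; the uniform 8-point sampling and the circle/ellipse shapes are assumptions of this illustration.

```python
# 1-D shape signature: distance from the region centroid to each boundary
# point. A circle gives a flat signature; elongated shapes oscillate, which
# is what makes the signature discriminative for retrieval.
import math

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def shape_signature(boundary):
    cx, cy = centroid(boundary)
    return [math.hypot(x - cx, y - cy) for x, y in boundary]

# Boundary of a unit circle sampled at 8 points, versus a 2:1 ellipse.
circle = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
          for k in range(8)]
ellipse = [(2 * x, y) for x, y in circle]
sig_circle = shape_signature(circle)      # constant, radius 1
sig_ellipse = shape_signature(ellipse)    # varies between 1 and 2
```

Similarity between two shapes can then be estimated by comparing their signatures, e.g. with a Euclidean distance after normalization.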
There has been considerable research in the field of image saliency, but not as much in video saliency. Our proposed solution aims to increase precision and accuracy during compression, and to reduce coding complexity, time consumption, and memory allocation problems. It is a modified High Efficiency Video Coding (HEVC) pixel-based consistent spatiotemporal diffusion with temporal uniformity. It involves splitting the video into groups of frames, computing colour saliency, integrating temporal fusion, and performing pixel saliency fusion; colour information then guides the diffusion process for the spatiotemporal mapping with the help of a permutation matrix. The proposed solution is tested on a publicly available extensive dataset with five global saliency evaluation metrics and is compared with several other state-of-the-art saliency detection methods. The results show an overall best performance among all the candidates.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Enhancement of Degraded Document Images using Retinex and Morphological Operations
Chandrakala H T, Research Scholar, Dept. of CSE, VTU Regional Research Center, Bengaluru, India (chandrakl80@gmail.com)
Thippeswamy G, Professor and Head, Dept. of CSE, BMS Institute of Technology, Bengaluru, India (swamy.gangappa@gmail.com)
Sahana D Gowda, Professor and Head, Dept. of CSE, BNM Institute of Technology, Bengaluru, India (sahanagowda@rediffmail.com)
Abstract— Ancient historical inscriptions collected from various sources are captured as document images and stored in digital libraries. Due to various factors such as aging, degradation, erosion and the deposition of foreign bodies on the inscriptions, the quality of the captured images is poor. These images are not ready for further processing such as reading, translation and indexing. Image enhancement is therefore an important phase for such document images before information extraction. A novel hybrid enhancement process is proposed in this paper to highlight the text in the inscription and make it more suitable for recognition by an OCR (Optical Character Recognition) system. The proposed method combines the Frankle McCann Retinex approach with morphological operations, which highlight the image contours by suppressing background deformation and noise. The method is tested on a dataset of 300 camera-captured estampage images of stone inscriptions written in ancient Kannada script. Experimental results show the efficacy of the proposed method.
Keywords- Frankle McCann Retinex; Thickening; Filling;
Inscription images
Introduction
Inscriptions carved on stone, palm leaves, metal and shells
are the historical documents which serve as the solitary and
authentic records for understanding ancient history. These
recorded experiences are useful in countless ways for the study
and reconstruction of the social, economic, cultural, dynastic
and political history of mankind. Preservation of these documents is indispensable if they are to continue serving as a reference for further discoveries about the world.
Unfortunately, these copies are at a serious risk of loss and
extinction as they are deteriorating due to aging, natural
disasters, risky handling, depositions and harsh weather
conditions. To preserve these valuable archaeological resources
for future the Archaeological departments throughout the world
excavate these inscriptions from their sources, create their
Estampages and maintain a corpus of the same. But
Estampages can also deteriorate in the long run due to
breakage, aging, risky handling, dust and insects.
Digitization of these images is a more reliable solution for
their preservation. Digitization creates faithful reproduction of
Estampages in the form of digital images by either image
capture or scanning. Digital images have longer shelf life and
are easy to access and disseminate. Moreover, they can further
take advantage of the power of digital image enhancement,
possibilities of structured indexes, machine recognition and
translation, mathematics of compression and communication.
These technological solutions are very much needed to
motivate the Archaeological Departments to convert their
repository of historical documents into a digital library and to
automate information extraction from these documents.
Moreover, these digital documents are more readily accessible
to historians and researchers compared to the originals that are
not easily available for public viewing.
However, these digital images would be inherently degraded
as they are captured from a source which is already
deteriorated. Therefore, to make them suitable for automatic
machine recognition and translation it is inevitable to pre-
process them using suitable image enhancement technique.
Image Enhancement improves the perception of information in
an image for human viewing and for further automated image
processing operations. The proposed work is the first effort to
enhance the camera captured estampages of the stone inscriptions belonging to the 11th century Kalyani Chalukyan dynasty. These handwritten inscriptions are in the Kannada script and are collected from the corpus of the public organization Archaeological Survey of India.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 4, April 2018, ISSN 1947-5500, https://sites.google.com/site/ijcsis/
The techniques for image enhancement can be broadly
classified as local and global methods. Global approach is an
overall enhancement approach where the entire image is
modified as per the statistics of the whole image. However, the smaller details are lost because the number of pixels in these small areas has no influence on the computation of the global transformation. In contrast, local enhancement can
enhance even the smaller details in the image as it uses a small
rectangular or square neighborhood with the centre moving
from pixel to pixel over the entire image. The centre pixel of
the window is modified with a value calculated based on the
statistics of the other pixels of the window. Local enhancement
is preferable for inscription images since the separation
between the foreground text and the background is not
prominent. This paper presents one such enhancement
approach based on Frankle McCann Retinex algorithm coupled
with morphological processing. The rest of the paper is
organized as follows: section I discusses related work, section
II gives a detailed explanation of the proposed enhancement
scheme, section III discusses the experimental results and
discussion and section IV concludes the paper.
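The local-window idea described above can be sketched as follows. This is a generic illustration of window-based local enhancement, not the paper's exact transform; the window size `win` and gain `k` are assumed values.

```python
import numpy as np

def local_enhance(img, win=15, k=0.8):
    """Sliding-window local enhancement: the centre pixel of each
    window is adjusted using the statistics of its neighbourhood."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    g_std = img.std()  # global statistic used to scale the gain
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            w = padded[y:y + win, x:x + win]
            l_std = w.std() or 1.0  # avoid division by zero in flat areas
            centre = padded[y + pad, x + pad]
            # amplify the centre pixel's deviation from the local mean;
            # low-contrast neighbourhoods receive a larger gain
            out[y, x] = w.mean() + k * (g_std / l_std) * (centre - w.mean())
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the gain depends on the local standard deviation, faint strokes in low-contrast neighbourhoods are boosted more than already-sharp regions, which is why local methods preserve small details that a global transform averages away.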
I. RELATED WORK
As available in the literature, the enhancement of historical handwritten documents has been performed using background light intensity normalization [29], [40], the directional wavelet transform [28] and hyperspectral imaging [39]. These works mainly address issues such as background noise and ink bleed-through. Specific to inscription image enhancement, the median filtering technique [32-35] has been used extensively. The curvelet transform [30] and shearlet transform [31] in combination with morphological operations have been used to denoise south Indian palm leaf manuscripts. Natural
Gradient based Fast Independent Component Analysis
technique has been employed to enhance stone inscriptions of
Hampi [27]. But these techniques are not suitable for
inscription estampage images as they might result in uneven
contrast stretching.
Inscription images do not have a clear visible difference between the foreground text and the background. Often the deformation in the background looks like part of the foreground text, giving these images a poor visual appearance. Retinex filtering [1, 3, 10, 11] is an enhancement method which compensates for non-uniform contrast by separating the illumination from the reflectance in a given image. It decreases the influence of the illumination component, thus restoring the original image to its true likeness. Hence it is well suited to the enhancement of inscription document images. Although the Retinex methodology has been used to enhance medical images [22], satellite images [21], natural scene images [5], nighttime images [20] and many more, it was used only for skew correction of document images [23] until [41] applied it to contrast enhancement of inscription document images. The proposed enhancement scheme aims at improving the contrast enhancement results achieved by [41].
The Retinex algorithms published in the literature can be
classified into four categories: Path based algorithms,
Recursive algorithms, Center Surround algorithms and
Variational algorithms. In path based algorithms the value of
the new pixel depends on the product of ratios along the
stochastic paths [11]-[14]. Recursive algorithms replace the
path computation by a recursive matrix comparison [7]-[9].
These algorithms are computationally more efficient than the
path based algorithms. In Center Surround method [3]-[5] a
given pixel value is compared with the surrounding average
pixel values to compute the new pixel. The variational Retinex
algorithms [15]-[17] convert the constraints of illumination and
reflectance into a mathematical problem and then obtain the
new pixel value by solving equations or optimization problems.
Morphological operations have been traditionally used as an
effective tool for noise removal and enhancement of digital
images [24-26].
The Frankle McCann Retinex algorithm [7], a recursive variant of traditional Retinex, was found to be more suitable for highlighting the text contours in the inscription image, as it stretches the image contrast while simultaneously compressing the dynamic range, rendering better visual clarity. Morphological operations can then be applied to suppress background noise and deformation. The following section provides the details of these techniques as employed in the proposed method.
II. METHODOLOGY
The proposed enhancement scheme integrates Frankle
McCann Retinex algorithm with Morphological Processing to
enhance the inscription document images. Frankle McCann
Retinex (FMR) algorithm [7] performs pixel level contrast
stretching rendering sharp contrast to the entire image. To
highlight the text contours and to suppress background noise
and deformation, Morphological operations are performed on
the Retinex enhanced images. The following subsections
explain the FMR enhancement and the Background noise
suppression in detail.
A. Frankle McCann Retinex
The principle of Frankle McCann algorithm [7] is shown in
figure 1 below:
Figure 1: Principle of Frankle McCann Retinex Algorithm
At each step, the comparison is implemented using the
Ratio-Product-Reset-Average operation. The process continues
until the spacing decreases to one pixel. The Ratio-Product-Reset-Average operation is given by the equation:

r_p^(k+1) = ( reset( i_p − i_q + r_q^k ) + r_p^k ) / 2    (1)
where p is the neighborhood center pixel index and q is the index of one of the neighboring points. Let p = (x, y); then q ∈ {(x ± d_k, y), (x, y ± d_k)}, where d_k is the shift distance corresponding to the k-th update operation. In the iterative procedure, for a given p, q is taken spirally and d_k is progressively reduced towards zero.
Since the operations are performed in the logarithmic domain, the difference i_p − i_q represents the ratio between the original intensities at p and q. The subsequent addition of r_q^k is the product operation. The ratio-product term is then reset to a constant whenever it exceeds that constant. Finally, the reflectance estimate is updated by averaging the previous estimate r_p^k with the reset term.
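Equation (1) for a single pixel pair can be sketched as below, assuming log intensities normalised so that the reset constant is 0 (i.e. log 1.0 for intensities in [0, 1]); the names `rpra_update` and `reset_max` are ours, not the paper's.

```python
def rpra_update(i_p, i_q, r_q, r_p, reset_max=0.0):
    """One Ratio-Product-Reset-Average step in the log domain.
    i_p, i_q : log intensities at centre pixel p and neighbour q
    r_q, r_p : current reflectance estimates at q and p
    reset_max: the reset constant (log(1.0) = 0 for normalised intensities)."""
    rp_term = i_p - i_q + r_q          # ratio (subtraction) then product (addition), both in log space
    rp_term = min(rp_term, reset_max)  # reset whenever the term exceeds the constant
    return 0.5 * (rp_term + r_p)       # average with the previous estimate
```

In the full algorithm this update is applied with the neighbour offset d_k halved at each stage, spiralling around p until the spacing reaches one pixel.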
B. Morphological Operations
The background noise pixels in the image produce artificial
edges which are also enhanced by Retinex processing. These
unwanted background edges interfere with the foreground text
which hampers their visual clarity. In order to suppress these
artifacts some morphological operations are performed on the
binarized FMR output. The foreground text is accumulated into
a connected component by applying Morphological
Thickening, Filling and Bridging.
Thickening is the morphological dual of thinning. It is defined as

A ⊙ B = A ∪ (A ⊛ B)    (2)

where A is the image matrix (set), B is a structuring element suitable for thickening, and ⊛ is the hit-or-miss transformation.
The morphological fill operation fills all the holes with ones for binary images. The hole-filling algorithm first generates an array X_0 containing all zeros except at the location corresponding to the given point in each hole, which is set to one. This is followed by the iterative procedure:

X_k = (X_{k−1} ⊕ B) ∩ A^c,    k = 1, 2, 3, …    (3)

where B is the symmetric structuring element and ⊕ is the dilation operation.
The algorithm terminates at iteration step k if X_k = X_{k−1}. The set X_k then contains all the filled holes; the union of X_k and A contains all the filled holes and their boundaries. The dilation would fill the entire area if left unchecked; however, the intersection at each step with the complement of A limits the result to the inside of the region of interest.
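The hole-filling iteration of equation (3) can be sketched with SciPy as follows; the seed points and the cross-shaped structuring element are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def fill_holes(a, seeds):
    """Iterative hole filling, equation (3): X_k = (X_{k-1} ⊕ B) ∩ A^c,
    repeated until X_k = X_{k-1}; the result returned is X_k ∪ A."""
    b = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)   # symmetric structuring element B
    x = np.zeros_like(a, dtype=bool)
    for p in seeds:
        x[p] = True                          # X_0: one seed point inside each hole
    while True:
        nxt = ndimage.binary_dilation(x, structure=b) & ~a
        if np.array_equal(nxt, x):           # termination: X_k == X_{k-1}
            return nxt | a                   # filled holes plus their boundaries
        x = nxt
```

Intersecting with ~a (the complement of A) at every step is what stops the dilation at the hole boundary, exactly as the text describes.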
Bridging operation ties the unconnected pixels in the binary
image by setting all zero valued pixels to one if they have two
nonzero neighbours that are not connected. The objects
signified by the connected components are assigned labels. For each object, a pixel summation is performed; if this sum is higher than the assumed threshold, the object is detected as a valid text area and put into a mask image M. Finally the Region of Interest (ROI), that is the foreground text, is extracted by computing the Hadamard product of X and M, given by:

(X ∘ M)_{i,j} = X_{i,j} · M_{i,j}    (4)
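The labelling, pixel-summation and masking steps above can be sketched as follows; the threshold `min_pixels` is an assumed value, since the paper does not specify its threshold.

```python
import numpy as np
from scipy import ndimage

def text_mask(binary_img, min_pixels=50):
    """Label connected components and keep those whose pixel count
    exceeds a threshold; the surviving components form the mask M."""
    labels, n = ndimage.label(binary_img)
    # pixel summation per labelled object
    sizes = ndimage.sum(binary_img, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s > min_pixels]
    return np.isin(labels, keep)

def extract_roi(img, mask):
    """ROI extraction as the elementwise (Hadamard) product X ∘ M, equation (4)."""
    return img * mask
```

Multiplying by the binary mask zeroes out everything outside the detected text areas while leaving the foreground intensities untouched.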
III. RESULTS AND DISCUSSION
The proposed Enhancement scheme is tested on the dataset of
300 camera captured images of ancient Kannada inscription
Estampages that belong to the Kalyani Chalukyan era of 11th
century. The images are captured using a camera of 13
Megapixel resolution. The visual quality of the original image
as shown in Fig 2(a) is poor as it is infected by background
noise and interference of the background pixels with the
foreground text. Three different types of Retinex techniques
namely Single Scale Retinex(SSR), MultiScale Retinex(MSR)
and Frankle McCann Retinex (FMR) and the proposed method
(FMR with Morphological operations) were tried on the
dataset. The SSR method enhanced the text, but its output showed a significant greying-out effect in some parts of the image, as shown in Fig 2(b), leading to the loss of some visual content. Quite a similar effect was observed with MSR, as shown in Fig 2(c). The Frankle McCann Retinex algorithm is applied to the logarithmic version of the original image; the corresponding output is shown in Fig 2(d). Though the foreground text looks enhanced compared to Fig 2(a), the overall image suffers from slight greying, a consequence of the Retinex algorithm, and the unwanted artifacts also get enhanced. This might reduce the accuracy of the Optical Character Recognition to be performed later. So, to further improve this result, Morphological Processing is performed on the FMR output; the result is shown in Fig 2(e). Morphological Processing has removed the greyish look of the image and improved the contrast by removing the unwanted background pixels, making it more suitable for further processing steps.
Figure 2 : The enhancement results achieved using the different Retinex
approaches with the corresponding histogram plots (X-axis represents pixel
intensity and Y-axis represents pixel count). (a) Original image (b) SSR (c)
MSR (d) FMR (e) FMR with Morphological processing
Experimentation was done on our estampage dataset of 11th century Kannada stone inscriptions and also on the standard handwritten text datasets HDIBCO 2010, HDIBCO 2014 and HDIBCO 2016 to give a comparative study. The quality of the enhancement results was evaluated by measuring their Standard Deviation and Root Mean Square (RMS) Contrast.
A. Standard Deviation

σ = sqrt( (1/N) Σ_{i=1..N} (x_i − μ)² )    (5)

where x is a one-dimensional array of N pixel intensities of the given image and μ is the corresponding mean, given by:

μ = (1/N) Σ_{i=1..N} x_i    (6)

B. RMS Contrast

C_RMS = sqrt( (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} (I_{i,j} − Ī)² )    (7)

where I is an image of size M × N whose pixel intensities are normalized to the range [0, 1], and Ī is the mean intensity of all pixel values in the image.
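The two quality measures follow directly from equations (5)-(7) and can be sketched as below; normalising by 255 for the RMS contrast assumes an 8-bit image.

```python
import numpy as np

def std_deviation(img):
    """Equations (5)-(6): standard deviation of the raw pixel intensities."""
    x = img.astype(float).ravel()
    mu = x.mean()                           # equation (6)
    return np.sqrt(np.mean((x - mu) ** 2))  # equation (5)

def rms_contrast(img, max_val=255.0):
    """Equation (7): RMS contrast of intensities normalised to [0, 1];
    dividing by 255 assumes an 8-bit image."""
    x = img.astype(float) / max_val
    return np.sqrt(np.mean((x - x.mean()) ** 2))
```

Both measures grow as the spread of intensities widens, which is why a stronger separation of text from background raises them.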
TABLE 1: RMS contrast and Standard Deviation values achieved using SSR, MSR, FMR and the proposed method on the Estampage dataset, and on the standard handwritten DIBCO datasets HDIBCO2010, HDIBCO2014 and HDIBCO2016.

Dataset     | Measure            | Original | SSR   | MSR   | FMR   | FMR + Morph.
Estampage   | RMS contrast       | 2.61     | 3.99  | 3.98  | 4.44  | 5.78
Estampage   | Standard Deviation | 45.82    | 72.80 | 73.77 | 77.89 | 101.41
HDIBCO2010  | RMS contrast       | 1.18     | 3.56  | 3.66  | 2.14  | 2.44
HDIBCO2010  | Standard Deviation | 23.36    | 80.27 | 83.58 | 43.08 | 46.76
HDIBCO2014  | RMS contrast       | 1.41     | 3.74  | 3.54  | 2.40  | 2.75
HDIBCO2014  | Standard Deviation | 31.79    | 91.47 | 93.42 | 54.11 | 61.12
HDIBCO2016  | RMS contrast       | 2.43     | 6.09  | 6.21  | 3.31  | 3.52
HDIBCO2016  | Standard Deviation | 38.68    | 72.30 | 75.86 | 52.46 | 56.34
Figure 3 : RMS contrast and Standard Deviation plots on different Datasets
(X-axis represents pixel intensity and Y-axis represents enhancement method).
(a) Estampage (b) HDIBCO 2010 (c) HDIBCO 2014 (d) HDIBCO 2016
IV. CONCLUSION AND FUTURE SCOPE
Based on the degradation characteristics of Inscription
estampage images an improved enhancement approach which
integrates Morphological Processing with Frankle McCann
Retinex algorithm has been implemented. This scheme
highlights the text by iterative contrast stretching and
suppresses the background artifacts through mathematical
morphology. The results thus achieved show superior visual clarity, with the best Standard Deviation and RMS contrast when compared to the traditional Retinex variants. However, it was observed that the proposed method takes considerable computational time. This is one issue that can be addressed in future work.
REFERENCES
[1] Edwin H Land, “The Retinex Theory of Color Vision”, Scientific American, Vol 237, No 6, pp. 108-128. 1977
[2] Ana Belen Petro, Catalina Sbert, Jean-Michel Morel ,” Multiscale
Retinex “, Image Processing Online (IPOL), ISSN 2105-1232. 2014
[3] Z. Rahman, D. J. Jobson, G. A. Woodell, “ Retinex processing for
automatic image enhancement”, Human Vision and Electronic Imaging
VII, SPIE Symposium on Electronic Imaging, Proc. SPIE 4662, 2002
[4] D J Jobson, Z Rahman, G A Woodell, “Properties and performance of a
center/surround Retinex”, IEEE Trans.Image Processing, Vol 6, no.3, p.
451-462.1997
[5] D J Jobson, Z Rahman, G A Woodell, “A multiscale Retinex for bridging the gap between color images and the human observation of scenes”, IEEE Trans. Image Processing, Vol 6, no.7, p. 965-976.1997
[6] R. C. Gonzalez, R. E. Woods, “Digital Image Processing”, 2nd ed., New
Jersey: Prentice-Hall. 2002
[7] Frankle J , McCann J, “Method and apparatus for lightness imaging”,
US,4384336[P]. 05-17. 1983
[8] Funt B,Ciurea F,McCann J, “Retinex in Matlab”, Journal of Electronic
Imaging ,13(1):48-57. 2004
[9] J McCann, “Lessons learned from Mondrians applied to real images and color gamuts”, in Proc. IS&T/SID 7th Color Imaging Conf., pp. 1-8.1999
[10] Lei Ling ,Zhou Yinqing,Li Jingwen, “An investigation of Retinex
Algorithms for Image Enhancement”, Journal of Electron(China)
,Vol.24 No.5. 2007
[11] E Land J McCann, “Lightness and Retinex theory”, J. Optical Society of
America ,vol.61,no.1 pp 1-11.1971
[12] E H Land, “Recent advances in Retinex theory”, Vis.Res., vol.26,
no.1,pp 7-21.1986
[13] E .Provenzi, L D Carli, A Rizzi, D Marini, “Mathematical definition and
analysis of the Retinex algorithm”, J Opt. Soc. Amer.vol 22,pp.2613-
2621.2005
[14] D Marini, “A computational approach to color adaptation effects”,
Image Vis. Comput., Vol. 18,no.13, pp. 1005-1014.2000
[15] A Blake, “Boundary conditions for lightness computation in Mondrian
world”, Comput. Vis. Graph. Image Process., Vol. 32, pp. 314-327.1985
[16] B Funt, M Drew , M Brockington, “Recovering shading from color
Images”, in Proc. 2nd Eur. Conf. Comput. Vis., pp.124-132.1992
[17] D Terzepoulos, ”Image analysis using multigrid relaxation methods”,
IEEE Trans. Pattern Anal. Mach. Intell., Vol. PAMI-8, No.2, pp. 129-
139.1986
[18] Shengdong Pan, Xiangjing An, Hongtao Xue, Hangen He, “Improving
Iterative Retinex Algorithm for Dynamic Range Compression”,
Proceedings of 2nd International Conference and Information
Application.2012
[19] Jia Li, “Application of image enhancement method for digital images
based on Retinex theory”, Journal Optik124,5986-5988, Elsevier.2013
[20] Haoning Lin, Zhenwei Shi, “Multi scale Retinex improvement for
nighttime image enhancement”, Journal Optik 125,7143-7148,
Elsevier.2014
[21] Akram hashemi Sejzei, Mansur Jamzad, “Evaluation of various digital
image processing techniques for detecting critical crescent moon and
introducing CMD- A tool for critical crescent moon detection”, Journal
Optik 127,1511-1525, Elsevier.2016
[22] Yifan Wang, Hongyu Wang, Chuanli Yin, Ming Dai, “Biologically inspired image enhancement based on Retinex”, Journal of Neurocomputing 177, 373-384, Elsevier.2016
[23] Marian Wagdy, Ibrahima Faye, Dayang Rohaya, "Degradation Enhancement for the Captured Document Image using Retinex Theory", International Conference on Information Technology and Multimedia (ICIMU), 2014.
[24] Hamid Hassanpour, Najmeh Samadiani, Mahdi Salehi, "Using morphological transforms to enhance the contrast of medical images", The Egyptian Journal of Radiology and Nuclear Medicine, Elsevier, 2015.
[25] Cao Yuan, Yaqin Li, "Switching median and morphological filter for impulse noise removal from digital images", Optik, vol. 126, pp. 1598-1601, Elsevier, 2015.
[26] Shijian Lu, Ben M. Chen, C. C. Ko, "Perspective rectification of document images using fuzzy set and morphological operations", Image and Vision Computing, vol. 23, pp. 541-553, Elsevier, 2005.
[27] Indu Sreedevi, Rishi Pandey, N. Jayanthi, Geetanjali Bhola, Santanu Chaudhary, "Enhancement of Inscription Images", 978-4673-5952-8/13, IEEE, 2013.
[28] Qian Wang, Tao Xia, Lida Li, Chew Lim Tan, "Document Image Enhancement using Directional Wavelet", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1063-6919/03, IEEE, 2003.
[29] Zhixin Shi, Venu Govindaraju, "Historical Document Image Enhancement using Background Light Intensity Normalization", Proceedings of 17th International Conference on Pattern Recognition, 1051-4651/04, IEEE, 2004.
[30] B. Gangamma, Srikanta Murthy K., "A combined approach for degraded Historical Documents denoising using Curvelet and Mathematical Morphology", 978-1-4244-5967-4/10, IEEE, 2010.
[31] Ranganatha D., Ganga Holi, "Historical Document Enhancement using Shearlet Transform and mathematical morphological operations", 978-1-4799-8792-4/15, IEEE, 2015.
International Journal of Computer Science and Information Security (IJCSIS),
Vol. 16, No. 4, April 2018
59 https://sites.google.com/site/ijcsis/
ISSN 1947-5500
[32] G. Janani, V. Vishalini, P. Mohan Kumar, "Recognition and Analysis of Tamil inscriptions and mapping using Image Processing Techniques", 978-1-5090-1706-5/16, IEEE, 2016.
[33] Saleem Pasha, M. C. Padma, "Handwritten Kannada Character Recognition using Wavelet Transform and structural features", International Conference on Emerging Research in Electronics, CST, 978-4673-9563-2/15, IEEE, 2015.
[34] G. Bhuvaneswari, V. Subbiah Bharathi, "An efficient algorithm for recognition of ancient stone inscription characters", 7th International Conference on Advanced Computing, 978-5090-1933-5/15, IEEE, 2015.
[35] N. Jayanthi, S. Indu, P. Gola, P. Thripathi, "Novel method for manuscript and inscription text extraction", 3rd International Conference on Signal Processing and Integrated Networks, 978-4673-9197-9/16, IEEE, 2016.
[36] Shafali Gupta, Yadwinder Kaur, "Review of different local and global contrast enhancement techniques for digital image", International Journal of Computer Applications, 0975-8875, vol. 100, no. 18, 2014.
[37] Wenye Ma, Jean-Michel Morel, Stanley Osher, Aichi Chien, "An L1-based variational model for Retinex theory and its application to medical images", Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 20-25, 2011.
[38] Anu Namdeo, Sandeep Singh Bhadoriya, "A Review on Image Enhancement Techniques with its Advantages and Disadvantages", International Journal for Science and Advance Research in Technology, ISSN: 2395-1052, vol. 2, issue 5, 2016.
[39] Seon Joo Kim, Fanbo Deng, Michael S. Brown, "Visual Enhancement of old documents with hyperspectral imaging", Pattern Recognition, vol. 44, pp. 1461-1469, Elsevier, 2011.
[40] Zhixin Shi, Venu Govindaraju, "Historical Document Image Enhancement using Background Light Intensity Normalization", Proceedings of 17th International Conference on Pattern Recognition, 1051-4651, IEEE, 2004.
[41] Chandrakala H. T., Thippeswamy G., "Epigraphic Document Image Enhancement using Retinex Method", Proceedings of 3rd International Symposium of Signal Processing and Intelligent Recognition Systems, book chapter in Advances in Signal Processing and Intelligent Recognition Systems, Springer, ISBN: 978-3319679334, 2017.
AUTHORS PROFILE
Mrs. Chandrakala H T is a university second-rank holder in her post-graduation in CSE from Visvesvaraya Technological University, 2012. She is currently pursuing her PhD in Digital Image Processing at Visvesvaraya Technological University. She has 8 years of teaching experience and 3 years of research experience. She is presently working as Assistant Professor in the Department of Computer Science, GFGCM, Tumkur University, India. Her research interests include image processing, pattern recognition, computer vision and data mining. She is a life member of IEI and ISTE. She has published 10 research papers in reputed journals and conferences, including Springer.
Dr. Thippeswamy G received his ME in CSE from Bangalore University in 1997 and his PhD in Digital Image Processing from Mangalore University in 2012. He has 24 years of teaching experience and 8 years of research experience. His research areas include image processing, pattern recognition, computer vision and data mining. He is presently working as Professor and HOD in the Department of CSE, BMS Institute of Technology, Bengaluru, India. He is a life member of CSI, IEI and ISTE. He has published more than 20 research papers in reputed journals and conferences, including Springer.
Dr. Sahana D Gowda received her ME in CSE from Bangalore University and her PhD in Digital Image Processing from the University of Mysore. She has 16 years of teaching experience and 10 years of research experience. Her research specializations include image processing, pattern recognition, computer vision and data mining. She is presently working as Professor and HOD in the Department of CSE, BNM Institute of Technology, Bengaluru, India. She is a life member of CSI and ISTE. She has published more than 20 research papers in reputed journals and conferences, including IEEE and Springer.