The document presents a new approach called Linear Curvature Empirical Coding (LCEC) for image retrieval. LCEC aims to improve upon existing curvature-based coding approaches by linearly representing the curvature scale space plot and then applying empirical coding to select descriptive shape features. The linear representation considers variations across all smoothing factors rather than discarding information below a threshold. Empirical coding is used to select features based on variation density rather than just magnitudes. The results show LCEC performs better than previous methods for image retrieval.
Content Based Image Retrieval Approach Based on Top-Hat Transform And Modifie... (cscpconf)
In this paper a robust approach is proposed for content-based image retrieval (CBIR) using texture analysis techniques. The proposed approach comprises three main steps. First, shape detection based on the Top-Hat transform detects and crops the object part of the image. Second, a texture feature representation algorithm uses color local binary patterns (CLBP) and local variance features. Finally, the log likelihood ratio is used to retrieve the images most closely matching the query. The performance of the proposed approach is evaluated on the Corel and Simplicity image sets and compared with several well-known approaches in terms of precision and recall, which shows its superiority. Low noise sensitivity, rotation invariance, shift invariance, gray-scale invariance and low computational complexity are among its other advantages.
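As a rough illustration of the pipeline this abstract describes, the sketch below crops a bright object region with a morphological top-hat transform and then builds a local-binary-pattern histogram as a texture descriptor. It is a minimal sketch using OpenCV and scikit-image; the kernel size, LBP parameters, and Otsu thresholding step are illustrative assumptions, not the paper's actual settings.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def tophat_lbp_descriptor(gray, kernel_size=15, lbp_points=8, lbp_radius=1):
    """Crop the bright object region via top-hat, then describe its
    texture with an LBP histogram. `gray` is an 8-bit grayscale image."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)  # bright details on dark background
    _, mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:                 # nothing detected: fall back to the whole image
        crop = gray
    else:
        crop = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    lbp = local_binary_pattern(crop, lbp_points, lbp_radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2), density=True)
    return hist
```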
This document summarizes and reviews several techniques for image mining, including feature extraction, image clustering, and object recognition algorithms. It discusses color, texture, and edge feature extraction techniques and evaluates their precision and recall. It also describes the block truncation algorithm for image recognition and the cascade feature extraction approach. The key techniques - color moments, block truncation coding, and cascade classifiers - are evaluated based on experimental recall and precision results. Overall, the document provides an overview of different image mining techniques and evaluates their effectiveness.
This document presents a method for tracking moving objects in video sequences using affine flow parameters combined with illumination insensitive template matching. The method extracts affine flow parameters from frames to model local object motion using affine transformations. It then applies template matching with illumination compensation to track objects across frames while being robust to illumination changes. The method is evaluated on various indoor and outdoor database videos and is shown to effectively track objects without false detections, handling issues like illumination variations, camera motion and dynamic backgrounds better than other methods.
This document describes an image preprocessing scheme for line detection using the Hough transform in a mobile robot vision system. The preprocessing includes resizing images to 128x96 pixels, converting to grayscale, performing edge detection using Sobel filters, and edge thinning. A newly developed edge thinning method is found to produce images better suited for the Hough transform than other thinning methods. The preprocessed images are then used as input for line detection and the robot's self-navigation system.
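A minimal sketch of the preprocessing chain this summary describes (resize to 128x96, grayscale, Sobel edges, then Hough line detection), using OpenCV. The edge threshold and Hough accumulator threshold are illustrative stand-ins, and the paper's custom edge-thinning step is only marked as a placeholder.

```python
import cv2
import numpy as np

def detect_lines(bgr):
    """Resize -> grayscale -> Sobel edge map -> Hough transform."""
    small = cv2.resize(bgr, (128, 96))
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    edges = (mag > 100).astype(np.uint8) * 255   # illustrative threshold, not the paper's
    # The paper's newly developed edge thinning would go here;
    # this sketch passes the raw edge map instead.
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=40)
    return lines
```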
Text-Image Separation in Document Images using Boundary/Perimeter Detection (IDES Editor)
Document analysis plays an important role in office automation, especially in intelligent signal processing. The proposed system consists of two modules: block segmentation and block identification. First, a document is segmented into several non-overlapping blocks using a novel recursive segmentation technique, and then the features embedded in each segmented block are extracted. Two kinds of features are extracted: connected components and image boundary/perimeter features. Documents with text inside images posed limitations in earlier reported literature; this is handled by applying an additional pass of Run Length Smearing on the extracted image that contains text. The proposed scheme is independent of the type and language of the document.
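Run Length Smearing (often called the run-length smoothing algorithm, RLSA) is the step this abstract relies on for text regions; the sketch below shows a horizontal pass that bridges short white runs between black pixels so characters merge into solid text blocks. It is a generic RLSA illustration; the smearing threshold is an assumption, not the paper's value.

```python
import numpy as np

def rlsa_horizontal(binary, threshold=20):
    """Horizontal run-length smearing: flip white runs shorter than
    `threshold` to black, merging nearby characters into text blocks.
    `binary` is a 2-D array with 1 = black (ink), 0 = white."""
    out = binary.copy()
    for row in out:
        run_start = None                      # index just after the last black pixel
        for x, v in enumerate(row):
            if v == 1:
                if run_start is not None and x - run_start <= threshold:
                    row[run_start:x] = 1      # smear the short white gap
                run_start = x + 1
    return out
```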
Content Based Image Retrieval Using Dominant Color and Texture Features (IJMTST Journal)
The purpose of this paper is to describe our research on different feature extraction and matching techniques in designing a Content Based Image Retrieval (CBIR) system. The need for CBIR development arose from the enormous increase in image database sizes, as well as its vast deployment in various applications. CBIR is the retrieval of images based on features such as color and texture. Image retrieval using the color feature alone cannot provide a good solution for accuracy and efficiency; color and texture are the most important features. This paper uses techniques for retrieving images based on their content, namely dominant color, texture, and a combination of both, and verifies the superiority of retrieval using multiple features over a single feature.
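One common way to compute a dominant-color descriptor of the kind this summary mentions is to cluster pixel colors with k-means and keep the largest cluster centers; the sketch below does this with scikit-learn. The cluster count is an illustrative assumption rather than the paper's choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(rgb_image, k=4):
    """Return the k cluster-center colors and the fraction of pixels in
    each cluster, ordered from most to least dominant."""
    pixels = rgb_image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]
    return km.cluster_centers_[order], counts[order] / counts.sum()
```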
MMFO: modified moth flame optimization algorithm for region based RGB color i... (IJECEIAES)
Region-based color image segmentation is an elementary step in image processing and computer vision, but it faces the problem of multidimensionality: a color image can be viewed as a five-dimensional problem, with three dimensions in color (RGB) and two in geometry (luminosity layer and chromaticity layer). In this paper, conversion to the L*a*b color space is used to remove one dimension, and a geometric conversion to array form removes a further dimension. The paper introduces a modified moth flame optimization (MMFO) algorithm, based on bio-inspired techniques, for RGB color image segmentation. Simulation results show that MMFO performs better than PSO and GA in terms of computation time for all images, and the method gives clear segments for the different colors and different numbers of clusters used during segmentation.
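The dimensionality-reduction step described here, segmenting on the a*/b* chromaticity channels after an L*a*b conversion, can be sketched as below with OpenCV and scikit-learn. Plain k-means stands in for the paper's MMFO optimizer, and the cluster count is an assumption.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment_ab(bgr, n_clusters=3):
    """Convert to L*a*b and cluster only the a/b chromaticity channels,
    so luminosity and geometry drop out of the feature vector."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(ab)
    return labels.reshape(bgr.shape[:2])      # per-pixel segment label map
```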
The project aims at developing an efficient segmentation method for a CBIR system. Mean-shift segmentation generates a list of potential, meaningful objects, which are then clustered according to a predefined similarity measure. The method was tested on benchmark data and an F-score of 0.30 was achieved.
Feature Extraction of an Image by Using Adaptive Filtering and Morpological S... (IOSR Journals)
Abstract: Various enhancement schemes are used for enhancing an image, including gray-scale manipulation, filtering and histogram equalization. Histogram equalization is one of the best-known image enhancement techniques; it became popular for contrast enhancement because it is simple and effective, the basic idea being to remap the gray levels of an image. Here, morphological segmentation is used to obtain the segmented image, with morphological reconstruction used to perform the segmentation. A comparative analysis of different enhancement and segmentation methods is carried out on the basis of subjective and objective parameters. The subjective parameter is visual quality; the objective parameters are area, perimeter, min and max intensity, average voxel intensity, standard deviation of intensity, eccentricity, coefficient of skewness, coefficient of kurtosis, median intensity and mode intensity. Keywords: Histogram Equalization, Segmentation, Morphological Reconstruction.
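Histogram equalization, the core technique named in this abstract, remaps gray levels through the normalized cumulative histogram; a minimal NumPy sketch follows (OpenCV's cv2.equalizeHist performs the same operation for 8-bit images).

```python
import numpy as np

def equalize_histogram(gray):
    """Remap 8-bit gray levels through the normalized cumulative histogram,
    spreading the intensity distribution across the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # gray-level remapping table
    return lut[gray]
```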
Finger Print Image Compression for Extracting Texture Features and Reconstru... (IOSR Journals)
The document summarizes a method for fingerprint image compression that involves decomposing the image into two components - ridges (primary component) and textures/features (secondary component). The ridges are extracted and encoded using arithmetic coding combined with vector quantization, achieving a higher compression ratio than FBI standards. The decoding process reconstructs a hybrid surface based on the encoded ridges. The method allows for extracting minutiae directly from the compressed image without needing decompression, and provides both compression and the ability to reconstruct the original image. Experimental results show the compression ratio is better than FBI specified methods.
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary... (Zahra Mansoori)
This document presents a new approach for content-based image retrieval that combines color, texture, and a binary tree structure to describe images and their features. Color histograms in HSV color space and wavelet texture features are extracted as low-level features. A binary tree partitions each image into regions based on color and represents higher-level spatial relationships. The performance of the proposed system is evaluated on a subset of the COREL image database and compared to the SIMPLIcity image retrieval system. Experimental results show the proposed system has better retrieval performance than SIMPLIcity in some categories and comparable performance in others.
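The low-level features named here, an HSV color histogram plus wavelet texture statistics, can be sketched as below with OpenCV and PyWavelets. The bin counts, wavelet family, and the choice of sub-band mean magnitudes as texture statistics are illustrative assumptions.

```python
import cv2
import numpy as np
import pywt

def color_texture_features(bgr, bins=(8, 4, 4)):
    """HSV color histogram concatenated with one-level wavelet texture stats."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-9
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    _, (lh, hl, hh) = pywt.dwt2(gray, "haar")    # one-level 2-D wavelet transform
    texture = np.array([np.mean(np.abs(band)) for band in (lh, hl, hh)])
    return np.concatenate([hist, texture])
```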
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE... (ijcsa)
The document summarizes an enhanced edge adaptive steganography approach using a threshold value for region selection. It aims to improve the quality and modification rate of a stego image compared to Sobel and Canny edge detection techniques. The proposed approach uses a threshold value to select high frequency pixels from the cover image for data embedding using LSBMR. Experimental results on 100 images show about a 0.2-0.6% improvement in image quality measured by PSNR and a 4-10% improvement in modification rate measured by MSE compared to Sobel and Canny edge detection.
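To make the region-selection idea concrete, here is a heavily simplified sketch: pixels whose local gradient magnitude exceeds a threshold are treated as high-frequency embedding sites, and one message bit is embedded per site by LSB matching (a random ±1 when the LSB disagrees). This is a generic illustration, not the paper's LSBMR scheme, and the threshold value is an assumption.

```python
import numpy as np

def embed_bits(gray, bits, threshold=30, rng=None):
    """Embed message bits into high-gradient pixels via simple LSB matching."""
    if rng is None:
        rng = np.random.default_rng(0)
    stego = gray.astype(np.int16).copy()
    gx, gy = np.gradient(stego.astype(np.float32))
    sites = np.argwhere(np.hypot(gx, gy) > threshold)   # edge-like, high-frequency pixels
    for (y, x), bit in zip(sites, bits):                # extra sites stay untouched
        if (stego[y, x] & 1) != bit:
            stego[y, x] += rng.choice([-1, 1])          # change LSB by +/-1 (LSB matching)
    return np.clip(stego, 0, 255).astype(np.uint8)
```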
Graph fusion of finger multimodal biometrics (Anu Antony)
A graph fusion technique, i.e. a weighted graph structure model, is used to characterize finger biometrics, and fusion frameworks are presented for the trimodal images of a finger.
A comparative study on content based image retrieval methods (IJLT EMAS)
Content-based image retrieval (CBIR) is a method of finding images in a huge image database according to a person's interests. Content-based here means that the search involves analyzing the actual content present in the image. As databases of images grow day by day, researchers are searching for better image retrieval techniques that maintain good efficiency. This paper presents the visual features and various ways of retrieving images from a huge image database.
This document provides a survey of various image segmentation techniques used in image processing. It begins with an introduction to image segmentation and its importance in fields like pattern recognition and medical imaging. It then categorizes and describes different segmentation approaches like edge-based, threshold-based, region-based, etc. The literature survey section summarizes several papers on specific segmentation algorithms or applications. It concludes with a table comparing the advantages and disadvantages of different segmentation techniques. The overall document aims to provide an overview of segmentation methods and their uses in computer vision.
An Unsupervised Cluster-based Image Retrieval Algorithm using Relevance Feedback (IJMIT JOURNAL)
Content-based image retrieval (CBIR) systems use low-level query image features to identify similarity between a query image and the images in a database. Image content plays a significant role in retrieval, and there are three fundamental bases for CBIR: visual feature extraction, multidimensional indexing, and retrieval system design. Each image has three kinds of content: color, texture and shape features. Color and texture are both important visual features used in CBIR to improve results; color histograms and texture features have the potential to retrieve similar images on the basis of their properties. Because the features extracted from a query are low level, it is extremely difficult for a user to provide an appropriate example in query-by-example retrieval. To overcome these problems and reach higher accuracy in a CBIR system, providing the user with relevance feedback is known to be a promising solution.
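Relevance feedback is often implemented by moving the query's feature vector toward images the user marks relevant and away from those marked irrelevant; the Rocchio update below is one classic formulation (not necessarily the one used in this paper), with illustrative weights.

```python
import numpy as np

def rocchio_update(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Shift the query feature vector toward the mean of relevant examples
    and away from the mean of irrelevant ones. `relevant`/`irrelevant`
    are lists of feature vectors (possibly empty)."""
    q = alpha * np.asarray(query, dtype=np.float64)
    if relevant:
        q += beta * np.mean(relevant, axis=0)
    if irrelevant:
        q -= gamma * np.mean(irrelevant, axis=0)
    return q
```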
Modified Skip Line Encoding for Binary Image Compression (idescitation)
This paper proposes a modified skip line encoding technique for lossless compression of binary images. Skip line encoding exploits correlation between successive scan lines by encoding only one line and skipping similar lines. The proposed technique improves upon existing skip line encoding by allowing a scan line to be skipped if a similar line exists anywhere in the image, rather than just successive lines. Experimental results on sample images show the modified technique achieves higher compression ratios than conventional skip line encoding.
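In conventional skip line encoding, a scan line identical to the previous one is replaced by a short skip flag; the sketch below illustrates the generalization this paper describes, where a line may instead reference any earlier identical line by index. The tuple-based encoding of flags and indices here is illustrative, not the paper's bitstream format.

```python
import numpy as np

def skip_line_encode(binary):
    """Encode each scan line either verbatim or as a reference to an
    earlier identical line (generalized skip line encoding)."""
    seen = {}                    # maps line bytes -> index of first occurrence
    encoded = []
    for i, row in enumerate(binary):
        key = row.tobytes()
        if key in seen:
            encoded.append(("skip", seen[key]))   # reference to an earlier line
        else:
            seen[key] = i
            encoded.append(("raw", row.copy()))
    return encoded
```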
The document discusses clustering images based on their properties. Images are converted into intensity, contrast, Weibull and fractal images. Eight properties are calculated for each image type, including brightness, standard deviation, entropy, skewness, kurtosis, separability, spatial frequency and visibility. The properties are normalized and clustered using k-means clustering. Tables show normalized property values for different image types. The clustering groups similar images based on their discriminative properties.
This document summarizes an evaluation of texture feature extraction methods for content-based image retrieval, including co-occurrence matrices, Tamura features, and Gabor filters. The evaluation tested these methods on a Corel image collection using Manhattan distance as the similarity measure. Co-occurrence matrices performed best with homogeneity as the feature, while Gabor wavelets showed better performance for homogeneous textures of fixed sizes. Tamura features performed poorly with directionality. Overall, co-occurrence matrices provided the best results for general texture retrieval.
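A minimal sketch of the best-performing combination reported here: gray-level co-occurrence homogeneity features compared with Manhattan distance, using scikit-image. The offsets and angles are illustrative choices rather than the evaluation's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def homogeneity_features(gray):
    """Co-occurrence homogeneity at four orientations and one pixel distance.
    `gray` is an 8-bit grayscale image."""
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "homogeneity").ravel()

def manhattan_distance(f1, f2):
    """Similarity measure used in the evaluation: sum of absolute differences."""
    return np.abs(f1 - f2).sum()
```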
This document discusses content-based image mining techniques for image retrieval. It provides an overview of image mining, describing how image mining goes beyond content-based image retrieval by aiming to discover significant patterns in large image collections according to user queries. The document reviews several existing image mining techniques, including those using color histograms, texture analysis, clustering algorithms like k-means, and association rule mining. It discusses challenges in developing universal image retrieval methods and proposes combining low-level visual features with high-level semantic features. Overall, the document surveys the state of the art in content-based image mining and retrieval.
Multiple Ant Colony Optimizations for Stereo Matching (CSCJournals)
The stereo matching problem, which obtains the correspondence between the right and left images, can be cast as a search problem. Matching all candidates in the same line forms a 2D optimization task, and two-dimensional (2D) optimization is an NP-hard problem. Stereo matching has two characteristics: first, the local optimization along each scan-line can be done concurrently; second, there are relationships among adjacent scan-lines that can be explored to promote matching correctness. Although many methods, such as GCPs and GGCPs, have been proposed, these so-called ground control points may not actually be reliable ground truth. The relationship among adjacent scan-lines is a posteriori, that is to say, it can only be discovered after each optimization is finished. Multiple Ant Colony Optimization (MACO) is efficient for solving large-scale problems and is well suited to the stereo matching task: in the constructed MACO, the master layer evaluates the sub-solutions and propagates their reliability after every local optimization finishes. In addition, whether the ordering and uniqueness constraints should be considered during the optimization is discussed, and the proposed algorithm is proved to converge to the optimal matched pairs.
This document presents a hybrid approach for color image segmentation that integrates color edge information and seeded region growing. It uses color edge detection in CIE L*a*b color space to select initial seed regions and guide region growth. Seeded region growing is performed based on color similarity between pixels. The edge map and region map are fused to produce homogeneous regions with closed boundaries. Small regions are then merged. The approach is tested on images from the Berkeley segmentation dataset and produces reasonably good segmentation results by combining color and edge information.
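Seeded region growing of the kind described here can be sketched as a breadth-first flood from a seed that absorbs neighboring pixels whose color is within a similarity threshold of the running region mean; the sketch below works on an L*a*b image with an illustrative Euclidean threshold, and omits the paper's edge-map fusion and region merging.

```python
from collections import deque
import numpy as np

def grow_region(lab, seed, threshold=12.0):
    """Grow one region from `seed` (row, col): absorb 4-connected neighbors
    whose L*a*b color is within `threshold` of the running region mean."""
    h, w, _ = lab.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    mean = lab[seed].astype(np.float64)
    count = 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if np.linalg.norm(lab[ny, nx] - mean) < threshold:
                    mask[ny, nx] = True
                    mean = (mean * count + lab[ny, nx]) / (count + 1)
                    count += 1
                    queue.append((ny, nx))
    return mask
```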
Strong Image Alignment for Meddling Recognision Purpose (IJMER)
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
A NOVEL IMAGE SEGMENTATION ENHANCEMENT TECHNIQUE BASED ON ACTIVE CONTOUR AND... (acijjournal)
This document summarizes a novel image segmentation technique based on active contours and topological alignments. The technique aims to improve boundary detection by incorporating the advantages of both active contours and topological alignments. It presents a two-step algorithm: 1) Initial segmentation is performed using topological alignments to improve cell tracking results. 2) The output is transformed into the input for an active contour model, which evolves toward cell boundaries for analysis of cell mobility. Tests on 70 grayscale cell images showed the technique achieved better segmentation and boundary detection compared to active contours alone, including for low contrast images and cases of under/over-segmentation.
Automatic dominant region segmentation for natural images (csandit)
Image segmentation divides an image into different homogeneous regions. An efficient semantic-based image retrieval system divides the image into different regions separated by color or texture, and sometimes both: features are extracted from the segmented regions and annotated automatically, and relevant images are retrieved from the database based on the keywords of the segmented region. In this paper, automatic image segmentation is proposed to obtain the dominant region of input natural images. Dominant regions are segmented, results are obtained, and the results are also recorded in comparison with the JSEG algorithm.
COLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATION (IAEME Publication)
Image processing arbitrarily manipulates an image to achieve an aesthetic standard or to support a preferred reality. The objective of segmentation is to partition an image into distinct regions, each containing pixels with similar attributes; it can be done using thresholding, color space segmentation, or k-means clustering.
Segmentation is the low-level operation concerned with partitioning images by determining disjoint and homogeneous regions or, equivalently, by finding edges or boundaries. The homogeneous regions, or the edges, are supposed to correspond to actual objects, or parts of them, within the images. Thus, in a large number of applications in image processing and computer vision, segmentation plays a fundamental role as the first step before applying higher-level operations such as recognition, semantic interpretation, and representation. Until very recently, attention was focused on segmentation of gray-level images, since these were the only kind of visual information that acquisition devices could capture and computer resources could handle. Nowadays, color images have definitely displaced monochromatic information, and computational power is no longer a limitation in processing large volumes of data. In this paper, a hybrid k-means with watershed segmentation algorithm is proposed to segment the images. Filtering techniques are used for noise removal to improve the results, and the PSNR and MSE performance parameters are calculated to show the level of accuracy.
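A minimal sketch of a k-means-plus-watershed hybrid of the kind this abstract names, using OpenCV and scikit-image: k-means gives a coarse color labeling, its connected components become markers, and a watershed on the gray-level gradient refines the boundaries. The cluster count and marker construction are illustrative assumptions.

```python
import cv2
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def kmeans_watershed(bgr, k=3):
    """Coarse k-means color labels -> connected-component markers ->
    watershed on the gray-level gradient for refined boundaries."""
    pixels = bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    coarse = labels.reshape(bgr.shape[:2])
    markers = np.zeros(coarse.shape, dtype=np.int32)
    next_label = 1
    for c in range(k):                       # label each cluster's components separately
        comp, n = ndi.label(coarse == c)
        markers[comp > 0] = comp[comp > 0] + next_label - 1
        next_label += n
    gradient = sobel(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY))
    return watershed(gradient, markers)
```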
This document examines using a Static Synchronous Compensator (STATCOM) for reactive power compensation in Nigeria's electricity grid. It develops power flow equations to model the grid's 28-bus, 330kV system with and without STATCOM. Simulations using MATLAB show that with STATCOM, voltage magnitudes at 5 buses rise to within limits of 0.95-1.05 p.u. and total system power loss reduces by 5.88% from 98.21MW to 92.44MW. The results indicate STATCOM can stabilize voltages and reduce losses in the Nigeria grid.
The document discusses the utilization of foundry waste sand in the preparation of concrete. It presents the results of experiments conducted to study the compressive strength, split tensile strength, and flexural strength of M20 and M25 grade concrete containing 0%, 10%, and 100% replacement of fine aggregate with foundry waste sand. The tests were conducted at curing periods of 7, 28, and 56 days. The results showed that 100% replacement with foundry waste sand can be used for M20 and M25 grade concrete, based on the compressive strengths achieved at the different curing periods being comparable to the control mixes. Flexural and split tensile strengths were also found to be comparable between control mixes and mixes with foundry waste sand.
Investigation of Thermal Insulation on Ice Coolers (IOSR Journals)
This document investigates different materials for thermal insulation in ice coolers. It tests coconut fibre, polystyrene, and polyurethane at various densities using the Lee's Disk method to determine their thermal conductivity. The study finds that polyurethane with a density of 95 kg/m³ has the lowest thermal conductivity of 0.0195 W/m·K. Numerical analysis confirms that polyurethane of this density and a thickness of 64 mm maintains the lowest inside temperature for an ice cooler. The experimental data and numerical analysis show that polyurethane of 95 kg/m³ density and 64 mm thickness provides the best thermal insulation to minimize heat transfer and increase ice melting time in coolers.
Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme... (IOSR Journals)
This document describes a design of a Gabor filter for noise reduction in images of betel vine leaves to aid in disease segmentation. A Gabor filter is designed using Verilog HDL and implemented on a CADENCE platform. The filter takes pixel inputs from images that have undergone preprocessing like Sobel edge detection and segmentation. It convolves the pixels with stored filter coefficients to reduce noise and segment the diseased areas. The proposed Gabor filter achieves noiseless segmentation with increased speed and reduced delay compared to existing methods. It utilizes fewer resources with minimal warnings. The system could be enhanced further with 2D and 3D image processing and neural network training.
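For reference, a Gabor filter of the general kind described, here in Python rather than the paper's Verilog implementation, can be built and applied with OpenCV as below; the kernel size, wavelength, and orientation are illustrative parameters.

```python
import cv2
import numpy as np

def gabor_response(gray, ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Convolve the image with one Gabor kernel; a filter bank would
    sweep `theta` (orientation) and `lambd` (wavelength)."""
    kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
    return cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
```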
Numerical Simulation and Design Optimization of Intake and Spiral Case for Lo... (IOSR Journals)
This document discusses the numerical simulation and design optimization of the intake and spiral case for a low head vertical turbine. It begins by introducing the research scope, which is to study the pressure and velocity fields in spiral cases through numerical simulation in order to reduce hydraulic losses. It then provides background on spiral case design, describing the shape, dimensions, and hydraulic parameters that must be considered. The document presents the numerical simulation methodology, which uses the Reynolds-averaged Navier-Stokes equations and non-structured grids. It analyzes the results of simulations run at different discharge rates, finding the highest pressures occur at the end of the spiral case. Total hydraulic losses in the spiral case are calculated to be 0.2273697 meters.
This document discusses the Self Organizing Queue (SOQ) clustering algorithm for grouping similar data points. It aims to improve upon spectral clustering and graph clustering algorithms. SOQ uses a bio-inspired approach where data points are placed in queues based on similarity, allowing similar points to self-organize into the same queue/cluster. The paper presents the SOQ algorithm and two variations, CESOQ and MSSOQ, which address limitations in the original SOQ. It also provides an example application of SOQ for vehicle clustering and experimental results comparing SOQ to other clustering methods using synthetic Gaussian data.
IOSR Journal of Pharmacy and Biological Sciences(IOSR-JPBS) is an open access international journal that provides rapid publication (within a month) of articles in all areas of Pharmacy and Biological Science. The journal welcomes publications of high quality papers on theoretical developments and practical applications in Pharmacy and Biological Science. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Determination of load transfer in reinforced concrete solid slabs by finite e... (IOSR Journals)
This document analyzes load transfer in reinforced concrete solid slabs using finite element analysis. It models two types of slabs in SAP2000: 1) slabs with pin supports on all four edges and 2) slabs with pin supports at corners and beams along edges. For type 1, stresses are higher in the short direction but still significant in the long direction, showing load is transferred two-way. For type 2, stresses in the short direction increase with stiffer beams while stresses in the long direction decrease. The analysis concludes all concrete solid slabs behave as two-way slabs, transferring load in both directions regardless of dimensions or support conditions.
This document discusses using an enhanced support vector machine (ESVM) to detect and classify distributed denial of service (DDoS) attacks. The ESVM is trained on normal user access behavior attributes and then tests samples of application layer attacks like HTTP flooding and network layer attacks like TCP flooding. It aims to classify these attacks with high accuracy, over 99%. An interactive detection and classification system architecture is proposed that takes DDoS attack samples as input for the ESVM and cross-validates them against normal traffic training samples to identify anomalies.
This document summarizes previous work on data preprocessing for web usage mining. It discusses how web server log files contain raw data that needs preprocessing before analysis. The preprocessing steps commonly used are data cleaning, user identification, session identification, and path completion. Several papers are reviewed that discuss different techniques for preprocessing web server log files, including custom preprocessing steps, algorithms for reading logs and transferring data to a database, and the outputs of preprocessed data. The goal of the literature review is to study and compare various techniques for the important preprocessing phase of web usage mining.
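Session identification, one of the preprocessing steps surveyed here, is commonly done by grouping a user's requests and starting a new session whenever the gap between consecutive requests exceeds a timeout; 30 minutes is a conventional default, used as an assumption below rather than a value from the surveyed papers.

```python
from datetime import timedelta

def sessionize(requests, timeout=timedelta(minutes=30)):
    """Split one user's time-ordered (timestamp, url) requests into sessions,
    starting a new session when the inter-request gap exceeds `timeout`."""
    sessions = []
    current = []
    last_time = None
    for ts, url in requests:
        if last_time is not None and ts - last_time > timeout:
            sessions.append(current)          # close the previous session
            current = []
        current.append((ts, url))
        last_time = ts
    if current:
        sessions.append(current)
    return sessions
```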
1. The document proposes an efficient algorithm to retrieve videos from a database using a video clip as a query.
2. Key features like color, texture, edges and motion are extracted from video shots and clusters are created using these features to reduce search time complexity.
3. When a query video is given, its features are used to search the closest cluster. Then sequential matching of additional features and shot lengths is done to find the most similar matching videos from the database.
The effect of rotational speed variation on the velocity vectors in the singl... (IOSR Journals)
This document summarizes a study that uses computational fluid dynamics (CFD) to simulate the internal flow in a centrifugal pump with varying rotational speeds. The study models a single blade passage of a five-bladed centrifugal pump impeller to accurately predict velocity vectors on the blade, hub, and shroud. Results show that at higher rotational speeds above the design point, velocity vectors increase more gradually until reaching a maximum value at the leading edge of the blade. The analysis concludes that velocity vectors in the suction side remain approximately constant, but increase to a higher maximum at the leading edge as rotational speed increases, especially above the design point.
This document summarizes a research paper that proposes a new vibration propulsion system for powering a small mobile robot. The system uses two counter-rotating eccentric masses, similar to the Dean drive, to excite an oscillating inner frame attached to an outer frame by springs. Wheels on the outer frame can be driven forward due to inertial and friction forces generated by the oscillating system. The document presents the dynamic model of the system and derives the governing differential equation. Experimental testing showed the system could successfully propel a robot vehicle and generate a maximum towing force of 8.5N while weighing 25N itself. Further improvements to increase propulsion are recommended.
Evaluation of long-term vegetation trends for northeastern of Iraq: Mosul, Ki... (IOSR Journals)
This document describes a study on the laying performance of 10,000 Babcock-380 brown commercial laying hens in Kelantan, Malaysia. The key findings are:
1) Hens reached peak egg production at 21 weeks of age, producing on average 6.0743 eggs per hen per day. Production was stable until 44 weeks.
2) Egg production gradually declined with age after 44 weeks, falling to 4.3094 eggs per hen per day by 69-80 weeks.
3) A regression model found that 87.5% of the variation in weekly hen-housed egg production could be explained by hen age alone, but including feed consumption increased the explanation to 90.5%.
The Production of Triploid Clariobranchus in Indoor Hatchery (IOSR Journals)
This study evaluated the interactive effects of rhizobium and virus inocula on three cowpea cultivars. The cultivars were inoculated with two rhizobium strains (R25B and IRj2180A) and two virus strains (CABMV and CYMV) at two different times. Viral inoculation significantly reduced nodulation, biomass production, and grain yields across all cultivars. Maximum reductions occurred without rhizobium inoculation. Early inoculation had a greater effect than late inoculation. The interaction of rhizobium and virus strains showed that viral severity was not reduced by rhizobium presence. Cultivar IT90K-277-2 performed best
Stress Analysis of Automotive Chassis with Various Thicknesses (IOSR Journals)
Abstract: This paper presents a stress analysis, performed using FEM, of a ladder-type low loader truck chassis structure consisting of C-beams designed for a 7.5 tonne application. The commercial finite element package CATIA version 5 was used for the solution of the problem. To reduce the cost of truck chassis, the chassis structure design should be changed or the thickness decreased; determining the stresses of a truck chassis before manufacturing is also important for design improvement. In order to reduce the magnitude of stress at the critical point of the chassis frame, the side member thickness, cross member thickness, and position of the cross member from the rear end were varied. Numerical results showed that if a thickness change is not possible, changing the position of the cross member may be a good alternative. Computed results were then compared to analytical calculation, where it was found that the maximum deflection agrees well with the theoretical approximation but varies in magnitude.
Keywords - Stress analysis, fatigue life prediction and finite element method.
This document describes a microcontroller-based touch switch system using an ATMega8 microcontroller chip. The system allows multiple touch switches to be added digitally and at low cost compared to analog switches. When a touch point is pressed, the microcontroller detects the input signal, turns on the load by controlling a relay, and when pressed again turns off the load. The system provides a safe and reliable switching method that can be used for household applications and to control loads from a distance.
This document summarizes a research paper that proposes a method for denoising remote sensing images using a combination of second order and fourth order partial differential equations (PDEs). It begins by explaining how noise is introduced in images and why denoising is important. It then discusses existing denoising methods using second order and fourth order PDEs individually and their limitations. The proposed method combines the two approaches to reduce both the blocky effect of second order PDEs and the speckle effect of fourth order PDEs. Simulation results show the combined method achieves better peak signal-to-noise and signal-to-noise ratios compared to the individual methods.
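The combination described, adding a fourth-order term to second-order diffusion, can be written as an explicit iteration u ← u + dt·(α∇²u − β∇²(∇²u)); the sketch below uses plain 5-point Laplacians and illustrative weights, standing in for the paper's specific PDE pair, with a small time step chosen for numerical stability.

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with replicated (Neumann-like) borders."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

def denoise_pde(noisy, steps=100, dt=0.05, alpha=1.0, beta=0.1):
    """Explicit time stepping of u_t = alpha*Lap(u) - beta*Lap(Lap(u)):
    the 2nd-order term smooths noise, the 4th-order term counteracts
    the blocky staircase effect of 2nd-order diffusion alone."""
    u = noisy.astype(np.float64).copy()
    for _ in range(steps):
        lap = laplacian(u)
        u += dt * (alpha * lap - beta * laplacian(lap))
    return u
```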
This document summarizes research on the superconducting behavior of carbon nanotubes (CNTs). It first discusses how the transition temperature (Tc) determines a material's superconductivity and how electron-phonon interactions can induce superconductivity in CNTs due to their curvature. The document then presents a theoretical model for CNT resistivity incorporating relaxation time, Fermi velocity, and coulomb interaction. Experimental results showing superconductivity in CNT ropes/bundles at temperatures up to 15K are discussed. The conclusion is that managing electron-electron interaction to increase Fermi velocity can decrease resistivity and lead to superconductivity at low temperatures.
Documentaries use for the design of learning activities (IOSR Journals)
This document discusses using documentaries to design learning activities. It proposes a method to segment documentary content into learning activities. As a case study, it analyzes a documentary about mechatronics. The documentary is segmented into 6 parts based on conveyed knowledge. An analysis identifies the type of knowledge conveyed. 8 learning activities are designed around problems addressed. An IMS LD compliant pedagogical scenario is proposed to implement the activities in an e-learning platform. The method allows extracting educational potential from documentaries and designing activities to promote knowledge acquisition and mobilization.
CBIR Processing Approach on Colored and Texture Images using KNN Classifier a...IRJET Journal
This document presents a content-based image retrieval system that uses color and texture features. It uses a K-nearest neighbor classifier to classify images based on color features and extract texture features using log-Gabor filters. Images are then ranked based on their similarity to the query image using Spearman's rank correlation coefficient. The system is tested on a dataset of flag images to retrieve the most similar flags to a given query image based on color and texture features. Experimental results show that the combined approach of using classification, similarity measures and log-Gabor filtering for color and texture features provides better retrieval performance than methods using only wavelets or Gabor filters.
Web Image Retrieval Using Visual Dictionaryijwscjournal
In this research, we have proposed semantic based image retrieval system to retrieve set of relevant images for the given query image from the Web. We have used global color space model and Dense SIFT feature extraction technique to generate visual dictionary using proposed quantization algorithm. The images are transformed into set of features. These features are used as inputs in our proposed Quantization algorithm for generating the code word to form visual dictionary. These codewords are used to represent images semantically to form visual labels using Bag-of-Features (BoF). The Histogram intersection method is used to measure the distance between input image and the set of images in the image database to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
A Review of Feature Extraction Techniques for CBIR based on SVMIJEEE
With the advancement of multimedia technologies, users are no longer satisfied with conventional retrieval techniques, so the Content Based Image Retrieval (CBIR) system was introduced. CBIR is an application for retrieving or searching digital images from a large database. The term "content" refers to the colour, shape, texture and all other information extracted from the image itself. This paper reviews CBIR systems that use SVM classifier based algorithms in the feature extraction phase.
Orientation Spectral Resolution Coding for Pattern RecognitionIOSRjournaljce
In pattern recognition, feature description is of great importance. Features are represented in the spatial domain or in a transformed domain; spatial domain features have lower representational power, while transformed domains are finer and more informative. In transformed domain representation, features are described by spectral coding using advanced transformation techniques such as wavelet transformation. However, this feature extraction approach considers only the band coefficients; the orientation variation is not considered. In this paper, the inherent orientation variation within each spectral band is derived, and orientation filtration is applied for effective feature representation. The obtained results show an improvement in recognition accuracy in comparison to a conventional retrieval system.
Query Image Searching With Integrated Textual and Visual Relevance Feedback f...IJERA Editor
Many researchers have studied relevance feedback in the content based image retrieval (CBIR) literature, but no CBIR search engine supports it because of scalability, effectiveness and efficiency issues. In this work, an integrated relevance feedback for retrieving web images was implemented. The focus is on integrating both textual-feature (TF) and visual-feature (VF) based relevance feedback (RF); the two were also tested individually. The TF-based RF employs an effective search result clustering (SRC) algorithm to obtain salient phrases. A new user interface (UI) is then proposed to support RF. Experimental results show that the proposed algorithm is scalable, effective and accurate.
IRJET- Shape based Image Classification using Geometric –PropertiesIRJET Journal
This document discusses shape-based image classification using geometric properties. It proposes classifying shapes based on extracting geometric properties like area, perimeter, circularity, and eccentricity. The Discrete Wavelet Transform is used to remove noise and compress images. Then a K-Nearest Neighbor classifier is used to classify objects like squares, circles, ellipses and rectangles. The method is evaluated on the MPEG-7 dataset and achieves a maximum accuracy. Geometric properties provide powerful representations for shape recognition in content-based image retrieval applications.
IMAGE SEARCH USING SIMILARITY MEASURES BASED ON CIRCULAR SECTORScscpconf
With growing number of stored image data, image search and image similarity problem become
more and more important. The answer can be solved by Content-Based Image Retrieval
systems. This paper deals with an image search using similarity measures based on circular
sectors method. The method is inspired by human eye functionality. The main contribution of the
paper is a modified method that increases accuracy for about 8% in comparison with original
approach. Here proposed method has used HSB colour model and median function for feature
extraction. The original approach uses RGB colour model with mean function. Implemented
method was validated on 10 image categories where overall average precision was 67%.
IRJET- Image based Information RetrievalIRJET Journal
This document discusses content-based image retrieval (CBIR) for retrieving images based on visual similarity. It focuses on using CBIR to match images of monuments for tourism applications. The paper describes extracting shape features using edge histogram descriptors to divide images into sub-images and compare edge distributions. An experiment matches images of Humayun's Tomb and the Statue of Liberty by comparing their edge magnitude values across sub-images. Similar edge distributions between two images' sub-images indicates similarity in shape and matches the images. The paper concludes CBIR using shape features can effectively match similar images of monuments to provide relevant information to users.
A COMPARATIVE ANALYSIS OF RETRIEVAL TECHNIQUES IN CONTENT BASED IMAGE RETRIEVALcscpconf
Basic groups of visual features such as color, shape and texture are used in Content Based Image Retrieval (CBIR) to match a query image, or a sub-region of an image, against similar images in an image database. To improve query results, relevance feedback is often used in CBIR to help users express their preference. In this paper, a new approach for image retrieval is proposed based on features such as color histogram, eigenvalues and match points. Images from various types of database are first identified using edge detection techniques. Once an image is identified, it is searched in the particular database and all related images are displayed, which saves retrieval time. Further, to retrieve the precise query image, any of the three techniques can be used, and a comparison is done with respect to average retrieval time. The eigenvalue technique was found to be the best of the three.
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING sipij
The aim of this paper is to present a comparative study of two linear dimension reduction methods, PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high dimensional input space onto the feature space where the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes with the best class separability. A neural network is trained on the reduced feature set (using PCA or LDA) of the images in the database for fast image searching using the back propagation algorithm. The proposed method is tested on a general image database using Matlab, and performance is evaluated by precision and recall measures. Experimental results show that PCA gives better performance in terms of higher precision and recall values with lower computational complexity than LDA.
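The PCA/LDA contrast described in this abstract is easy to reproduce. Below is a minimal sketch, assuming scikit-learn is available; X and y are randomly generated stand-ins for image feature vectors and class labels, not data from the paper.

```python
# A sketch under the stated assumptions; X and y are hypothetical stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))       # hypothetical image feature vectors
y = rng.integers(0, 4, size=200)     # hypothetical class labels (4 classes)

# PCA: directions of maximal variance; class labels are ignored.
X_pca = PCA(n_components=8).fit_transform(X)

# LDA: directions that best separate classes (at most n_classes - 1 of them).
X_lda = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)      # (200, 8) (200, 3)
```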
Low level features for image retrieval basedcaijjournal
In this paper, we present a novel approach for image retrieval based on the extraction of low level features using techniques such as Directional Binary Code (DBC), the Haar wavelet transform and Histogram of Oriented Gradients (HOG). The DBC texture descriptor captures the spatial relationship between any pair of neighbourhood pixels in a local region along a given direction, while the Local Binary Patterns (LBP) descriptor considers only the relationship between a given pixel and its surrounding neighbours. DBC therefore captures more spatial information than LBP and its variants, and can also extract more edge information. Hence, we employ DBC to extract grey level texture features (texture maps) from each RGB channel individually; the computed texture maps are then combined to represent the colour texture features (colour texture map) of an image. We then decompose the extracted colour texture map and the original image using the Haar wavelet transform. Finally, we encode the shape and local features of the wavelet transformed images using HOG for content based image retrieval. The performance of the proposed method is compared with existing methods on two databases, Wang's Corel and Caltech 256. The evaluation results show that our approach outperforms the existing methods for image retrieval.
Analysis of combined approaches of CBIR systems by clustering at varying prec...IJECEIAES
The image retrieving system is used to retrieve images from the image database. Two types of image retrieval techniques are commonly used: content-based and text-based. One well-known technique that extracts images in an unsupervised way is the cluster-based image retrieval technique, in which all visual features of an image are combined to find a better retrieval rate and precision. The objectives of the study were to develop a new model by combining three traits of an image, i.e., color, shape, and texture. The color-shape and color-texture models were compared to a threshold value at various precision levels. A union was formed of the newly developed model with the color-shape and color-texture models to find the retrieval rate, in terms of precision, of the image retrieval system. The experiments were conducted on the COREL standard database, and it was found that the union of the three models gives better results than retrieval from the individual models. The newly developed model and the union of the given models also give better results than the existing system named cluster-based retrieval of images by unsupervised learning (CLUE).
This document summarizes research on using image stitching and optical flow to generate panoramic views from video frames in real-time. Key aspects include:
1) Features are detected in frames using Shi-Tomasi corner detection and tracked between frames using optical flow.
2) A key frame is selected when less than half of features from the previous frame are successfully tracked, allowing sufficient rotation for homography calculation.
3) Homographies relating key frames are estimated and used to stitch and map frames to a cylindrical panorama for 3D visualization by a teleoperator.
4) Experimental results found the Shi-Tomasi/optical flow method was over 10x faster than SIFT/
A Survey on Image Retrieval By Different Features and TechniquesIRJET Journal
This document discusses various techniques for content-based image retrieval. It begins with an introduction to content-based image retrieval and describes how it uses visual features like color, texture, shape and regions to index and represent image content for retrieval. The document then reviews related work on image retrieval using different features. It discusses features used for image identification like color, edges, corners and texture. The document also outlines techniques for image retrieval including relevance feedback, support vector machines, block truncation coding, and image clustering. Finally, it evaluates parameters for comparing image retrieval algorithms.
IRJET- Content Based Image Retrieval (CBIR)IRJET Journal
This document describes a content-based image retrieval system that uses color features to retrieve similar images from a large database. It discusses using color descriptor features to extract feature vectors from images that can then be used to retrieve near matches based on similarity. Color features provide approximate matches more quickly than individual approaches. The system works by extracting visual features from both a query image and images in the database, then comparing the features to retrieve the most similar matches from the database. Color histograms and color moments are discussed as common color features used for this type of content-based image retrieval.
Information search using text and image queryeSAT Journals
Abstract: An image retrieval and re-ranking system utilizing a visual re-ranking framework is proposed in this paper. The system retrieves a dataset from the World Wide Web based on a textual query submitted by the user, and these results are kept as the dataset for information retrieval. This dataset is then re-ranked using a visual query (multiple images selected by the user from the dataset) which conveys the user's intention semantically. Visual descriptors (MPEG-7), which describe an image with respect to low-level features like color and texture, are used for calculating distances; these distances measure the similarity between the query images and members of the dataset. The proposed system has been assessed on different types of queries such as apples, Console and Paris, and shows significant improvement over initial text-based search results. The system is well suited for online shopping applications. Index Terms: MPEG-7, Color Layout Descriptor (CLD), Edge Histogram Descriptor (EHD), image retrieval and re-ranking system
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probes-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 17, Issue 3, Ver. VI (May – Jun. 2015), PP 05-16
www.iosrjournals.org
DOI: 10.9790/0661-17360516
Empirical Coding for Curvature Based Linear Representation in Image Retrieval System

Dayanand Jamkhandikar¹, Dr. Surendra Pal Singh², Dr. V. D. Mytri³
¹ (Research Scholar, CS&E Department, Nims Institute of Engineering and Technology, NIMS University, Jaipur, India)
² (Assoc. Professor, CS&E Department, Nims Institute of Engineering and Technology, NIMS University, Jaipur, India)
³ (Principal, Appa Institute of Engineering and Technology, Gulbarga, India)
Abstract: Image retrieval systems are finding application in all automation systems in which automated decisions must be taken based on image content. The prime requirement of such systems is a highly accurate coding scheme for maximum retrieval accuracy. Processing errors and environmental noise reduce the efficiency of such systems, so to optimize recognition efficiency a linear coding for curvature based image recognition is developed. This paper presents a linear representation of curvature scale coding for an image shape descriptor. In this approach, an average curvature scale representation is computed and an empirical model of signal decomposition is applied for feature description. Within this empirical feature description, a spectral density based coding approach is also proposed for selecting descriptive shape information from the extracted contour regions. The resulting scheme, named Linear Curvature Empirical Coding (LCEC), derives shape features from curvature information through simple linear signal representation and coding. The results show that the proposed approach performs better than previously known approaches.
Keywords: Empirical Signal Decomposition, Curvature Scale Coding, Linear Curvature Empirical Coding, CBIR.
I. Introduction
In the process of image recognition, various approaches to image representation and coding have been developed. Image recognition has expanded into new application areas such as e-learning, medical diagnosis, authentication and security, mining, and industrial automation. With the development of new imaging technologies, images are now captured at very high resolutions, and each detail of the image can be extracted at a very fine level to represent it. The representative codings, such as shape, color and textures, are extracted from the content by the feature descriptors used; the performance of an image retrieval system therefore depends mainly on the representing features. Among these, shape is observed to be a simple and distinctive representative feature for an image sample. To derive the shape feature, edge based feature descriptors were proposed. In [1], an optimal edge based shape detection using the derivative of the double exponential (DODE) function is proposed; it is an enhanced model of image shape representation in which a DODE filter is applied over the bounding contour to derive the exact shape of an image. In [2], a combination of invariant moments and an edge direction histogram is proposed to derive edge features. In various approaches, moments are used as a shape descriptor to define the shape feature. In [3], an angular radial transform (ART) descriptor for MPEG-7 region-based shape description is proposed; this ART descriptor is defined by an ICA Zernike moment shape descriptor and a whitening Zernike moment shape descriptor for image shape representation. While edge based approaches are the simplest mode of shape representation, in most image representations the edge operators also derive coefficients outside the bounding regions. This extra information results in computational overhead and hence a slower system. To derive a more precise shape description, contour based codings were developed. A contour based learning approach is defined in [4]: it is a bound region growing method in which the outer bounding region is extracted via region growing to derive the image representation. In [5], a binary image is decomposed into a set of closed contours, where each contour is represented as a chain code. To measure the similarity between two images, the distances between each contour in one image and all contours in the other are computed using a string matching technique; a weighted sum of the average and maximum of these distances then constitutes the final similarity. Contour-based methods are mainly polygonal approximation, Fourier descriptors [6], wavelet descriptors, or scale space [7, 8], whereas region-based methods are mainly geometric moment invariants and orthogonal moments [9]. In addition to edge and contour based coding, other approaches exist: in [10], a graph structure called a concavity graph, representing multi-object images using the individual objects and their spatial relationships, is proposed. In [11], a
very efficient shape descriptor called shape context (SC) was developed. This descriptor defines a histogram based model in which each boundary point carries a coefficient describing the relative distribution of the remaining points with respect to it. In [12], a polar transformation of the shape points about the geometric center of the object is used: the distinctive vertices of the shape are extracted and used as comparative parameters to minimize the difference of shape distance from the center. In [13], a retrieval method based on a local perceptual shape descriptor and similarity indexing is proposed. A local invariant feature called SIFT (scale-invariant feature transform) [14], which computes a histogram of local oriented gradients around each feature point, has also been proposed. However, contour based and related techniques, such as graph based or context based coding, describe the overall bounding contour, in which the variations in the feature coefficients are very large. Each projection in the contour region is taken as a feature, which leads to a large feature data set. To overcome the problem of large feature vectors, curvature coding has emerged in the recent past. A curvature based scale space representation is presented in [3]. A very effective representation method that makes use of both concavities and convexities of all curvature points was proposed in [6]. It is called multi scale convexity concavity (MCC) representation, where different scales are obtained by smoothing the boundary with Gaussian kernels of different widths; the curvature of each boundary point is measured from the relative displacement of a contour point with respect to its position in the preceding scale level. Curvature coding results in fewer feature descriptors for image retrieval. However, in such coding, features are extracted by thresholding the curvature plot, and only values with higher magnitude are selected. This feature selection process discards the lower variational information, treating it as noise. Yet in various image samples there exist curvature variations that persist only for a short smoothing period, so this feature selection assumption reduces the relevancy of the descriptive features with respect to image representation. To overcome this issue, this paper proposes a new coding approach based on the linearization of curvature coding and a normalization process. The linearization represents the curvature information in a 1-D plane, which is then processed for feature representation based on empirical coding. The proposed approach improves the selection of relevant features in that features are selected based on variational density rather than magnitude. The remainder of this paper is organized in six sections. Section 2 outlines the modeling of an image retrieval system. Section 3 presents the geometrical features for image representation and the conventional approach of curvature based coding for image retrieval. Section 4 presents the proposed LCEC approach. Section 5 evaluates the developed approach, and section 6 concludes.
II. Image Retrieval System
Various coding approaches for image retrieval have been developed in the past, based on the content information of the sample; such systems are termed content based image retrieval (CBIR) systems. A great deal of research has been carried out on CBIR in the past decade. The goal of a CBIR system is to return images, or their information, that are similar to a query image. Such systems characterize images using low-level perceptual features like color, shape and texture, and measure the overall similarity of a query image to the database images. Due to the rapid increase in digital image collections, various techniques for storing, browsing and retrieving images have been investigated in recent years. The traditional approach to image retrieval is to annotate images with text and then use a text based database management system to perform retrieval. Content based retrieval systems instead compute the content features of the image, namely color, texture or shape. To achieve the retrieval objective, the operation is performed in two stages, training and testing. A basic operational architecture for such a system is shown in figure 1.
Figure 1: A basic model of a content based image retrieval system
In such a system the samples are preprocessed for filtering and dimensional uniformity. The preprocessed samples are then processed for feature extraction. These features are the descriptive details of each test or training sample and are stored in a dataset for further processing. The accuracy of these feature descriptors defines the processing accuracy of the system.
III. Geometrical Shape Descriptor
In the feature extraction process, edge descriptors are used to retrieve geometrical feature information. Edge descriptors represent the image information at two logical levels, high or low, based on the pixel magnitude of the bounding regions. The edge information is then used to derive the region content on which feature extraction is carried out. An edge based coding approach for image retrieval is presented in [1]. However, in edge based coding, discontinuous edge regions either cause image regions to be discarded or, if included, increase the number of regions to be processed. This in turn increases the overhead of feature extraction and the number of feature coefficients, and reduces retrieval performance. To overcome this limitation of region selection in edge coding, curvature based coding was developed. This approach is more effective for region extraction because it operates on closed bounding regions, termed contours. Contour based curvature coding has also been applied to image retrieval: in [4, 6] a curvature coding approach is proposed that derives all the concavities and convexities of a contour region by successive smoothing of the contour with a Gaussian filter kernel, and a multi scale curvature coding termed MCC is defined for this purpose. The present approach performs curvature coding based on an 8-neighbourhood region growing method for contour extraction and region representation, with the bounding corner points of the contour used as the shape descriptor. To derive the curvature points, a contour coding is first defined, over which the curvature coding is developed. In contour coding, contours are extracted under defined constraints: all true corners should be detected and no false corners; all corner points should be located to preserve continuity; and the contour evaluator must estimate the true edges reliably under different noise levels for robust contour estimation. For the estimation of the contour region, an 8-region neighbourhood growing algorithm is used [20, 21]. The contour evolution process is depicted in figure 2.
Figure 2: Tracing operation in contour evolution.
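For readers who want to experiment with this step, the following is a minimal sketch using OpenCV's boundary follower as a stand-in for the 8-neighbourhood tracing described above; the paper does not specify an implementation, and "shape.png" is a hypothetical binary object image.

```python
# A sketch under the stated assumptions (OpenCV 4.x return signature).
# CHAIN_APPROX_NONE keeps every boundary point, since curvature coding
# needs the full (x(u), y(u)) sequence rather than a polygonal outline.
import cv2
import numpy as np

img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)  # binarize the object
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = max(contours, key=cv2.contourArea).squeeze()       # largest closed contour, (N, 2)
x = contour[:, 0].astype(float)                              # x(u)
y = contour[:, 1].astype(float)                              # y(u)
```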
To the derived contour coordinates (x(u), y(u)) a curvature coding is applied to extract the shape features defining the dominant curve regions. The curvature of a curve is defined as the derivative of the tangent angle to the curve. Consider a parametric vector equation for a curve [15, 22], given by eqn. (1):

$\Gamma(u) = (x(u),\, y(u))$ ... (1)

where u is an arbitrary parameter. The curvature K for the given contour coordinates is then computed by eqn. (2):

$K(u) = \dfrac{\dot{x}(u)\,\ddot{y}(u) - \ddot{x}(u)\,\dot{y}(u)}{\left(\dot{x}(u)^2 + \dot{y}(u)^2\right)^{3/2}}$ ... (2)

where $\dot{x}, \dot{y}$ are the first derivatives and $\ddot{x}, \ddot{y}$ the second derivatives of the x and y contour coordinates respectively.
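As an illustration of eqn. (2), the sketch below computes the curvature at every contour point, assuming x and y are 1-D NumPy arrays of the traced coordinates and using np.gradient as a stand-in for the derivatives.

```python
# A sketch of eqn (2) under the stated assumptions; the small epsilon only
# guards against division by zero on degenerate (repeated) contour points.
import numpy as np

def curvature(x, y):
    xu, yu = np.gradient(x), np.gradient(y)        # first derivatives
    xuu, yuu = np.gradient(xu), np.gradient(yu)    # second derivatives
    return (xu * yuu - xuu * yu) / ((xu**2 + yu**2) ** 1.5 + 1e-12)
```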
This curvature represents the curvature pattern of the extracted contour. Smoothing the contour reveals the dominant curvature patterns, which illustrate the representative shape of the region. Hence, to extract the dominant curvature patterns, the obtained contour is recursively smoothed using a Gaussian smoothing parameter σ. The Gaussian smoothing operation is defined by eqn. (3),

$X(u,\sigma) = x(u) * g(u,\sigma), \quad Y(u,\sigma) = y(u) * g(u,\sigma)$ ... (3)

where * denotes convolution and g(u, σ) denotes a Gaussian of width σ defined by eqn. (4),

$g(u,\sigma) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-u^2/2\sigma^2}$ ... (4)
The curvature K at scale σ is then defined by eqn. (5) as

$k(u,\sigma) = \dfrac{X_u(u,\sigma)\, Y_{uu}(u,\sigma) - X_{uu}(u,\sigma)\, Y_u(u,\sigma)}{\left(X_u(u,\sigma)^2 + Y_u(u,\sigma)^2\right)^{3/2}}$ ... (5)
Figure 3 shows the smoothing process and the application of the smoothing factor σ.
Figure 3: Smoothing process for a curvature at different smoothing factors.
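A minimal sketch of eqns. (3)-(5) follows: the contour is smoothed with Gaussians of increasing width and the curvature is recomputed at each scale, producing a multi-scale curvature map. The sigma range is an illustrative assumption, not a value from the paper, and curvature() refers to the sketch above.

```python
# A sketch of the multi-scale smoothing, assuming a closed contour so that
# mode="wrap" is the appropriate boundary handling for the convolution.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_map(x, y, sigmas=np.arange(1.0, 16.0, 0.5)):
    rows = []
    for s in sigmas:
        Xs = gaussian_filter1d(x, s, mode="wrap")   # X(u, sigma), eqn (3)
        Ys = gaussian_filter1d(y, s, mode="wrap")   # Y(u, sigma), eqn (3)
        rows.append(curvature(Xs, Ys))              # k(u, sigma), eqn (5)
    return np.asarray(rows), sigmas                 # (n_scales, n_points)

# css, sigmas = css_map(x, y)
```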
These curvatures are stored as a measuring parameter defining the shape of the image. A plot of the curvature coordinates over the smoothing scales is defined as the curvature scale space (CSS) image [15, 22]. A CSS exposes the dominant edge regions in an image: curvature with higher dominance persists for a longer smoothing time than the finer edge regions. A threshold is then applied to this CSS curve to pick the defining features, and the edge coefficients extracted above the threshold are used as the feature descriptor. A CSS plot and the feature selection process are shown in figure 4.
Figure 4: CSS plot and process of thresholding
In the CSS representation of a given query sample illustrated in figure 4, (p1, k1), (p2, k2), ..., (pn, kn) are the extracted feature values used as the image descriptor.
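The magnitude thresholding shown in figure 4 can be sketched as below, where css is the (scales × points) array from the previous sketch and T is an assumed threshold; contour points whose strongest response exceeds T yield the (p_i, k_i) descriptor pairs.

```python
# A sketch of magnitude thresholding over the CSS map; T is an assumed value.
import numpy as np

def threshold_features(css, T=0.5):
    k_peak = np.abs(css).max(axis=0)        # strongest response per contour point
    keep = np.flatnonzero(k_peak > T)       # points surviving the threshold
    return list(zip(keep.tolist(), k_peak[keep]))   # [(p1, k1), (p2, k2), ...]
```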
However, in such a coding process, the information below the threshold is neglected entirely. This elimination rests on the assumption that only dominant edges persist over a long smoothing duration and that all lower values can be treated as noise. For example, in the CSS plot of a query sample, the region below the threshold is considered non-informative and discarded. This leads to the following observations:
1) For semantically different objects with similar dominant edge representations, false classification will occur.
2) The lower regions also carry information about images with shorter projections, such as spines.
3) Direct elimination of all these coefficients leads to information loss, while a random pick-up leads to higher noise density.
These problems must be overcome to achieve a higher level of retrieval accuracy for spatially semantic samples or samples with finer edge regions. To achieve efficient retrieval under semantic observations, a
5. Empirical Coding for Curvature Based Linear Representation in Image Retrieval System
DOI: 10.9790/0661-17360516 www.iosrjournals.org 9 | Page
linear curvature Empirical coding (LCEC) is proposed. The proposed approach is as outlined in following
section.
IV. Linear Curvature Empirical Coding (LCEC)
It can be observed that the obtained CSS plot represents the edge variations over the different smoothing factors, and that this CSS representation appears as a 1-D signal with random variations. Taking this observation into consideration, a linear representation of the curvature coding is proposed for feature representation. For the linearization of the CSS curve, a linear sum of the entire curvature plane at the different Gaussian smoothing factors is taken. The linear transformation is defined by eqn.(6) as,
$L_s = \sum_{i} K_i$ ... (6)
where $L_s$ is the linearized signal and $K_i$ is the curvature obtained for the i-th value of σ.
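Continuing the sketch above, eqn.(6) reduces to a column-wise sum of the curvature planes; the sigma range below is an assumed example, not a value prescribed by the paper.

```python
# Eqn.(6) on the stacked curvature planes: sum across smoothing factors.
K = curvature_scale_space(x, y, sigmas=range(1, 12))  # assumed sigma range
Ls = K.sum(axis=0)                                    # linearized signal Ls
```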
Figure 5: Linearized representation of a CSS plot
A linearized signal representation of a CSS plot is shown in figure 5. This signal carries all the variations present from the lowest to the highest smoothing factor. From all these variations, the variation coefficients which best represent the image shape are to be extracted. To extract the optimal peak points, an empirical coding is developed. In this empirical coding, the linearized signal Ls is decomposed into intermediate frequency components using Empirical Mode Decomposition (EMD) [16]. EMD is a popular and effective tool in the areas of speech [17], image [18] and signal processing [19]; various applications, such as speech denoising, jitter elimination and noise reduction, have been built on EMD for the enhancement of an input sample. EMD operates in a nonlinear manner for the analysis of non-stationary signals. It decomposes a time-domain signal into a set of adaptive basis functions called intrinsic mode functions (IMFs), where an IMF is an oscillatory component of the signal with no DC element. In the process of decomposition, the two extrema envelopes are extracted and the high-frequency component is isolated between them; the remaining portion is defined as the low-frequency component. This process is repeated over the residual part to derive n IMFs reflecting different frequency elements. A signal x(n) is represented by EMD as eqn.(7),
$x(n) = \sum_{i=1}^{N} \mathrm{IMF}_i(n) + r_N(n)$ ... (7)
where $r_N(n)$ is the residual component.
The IMFs range from high-frequency to low-frequency content with increasing IMF order.
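The sifting procedure just described can be written compactly as the sketch below. This is a minimal illustration, assuming cubic-spline envelopes through the local extrema and a loose zero-mean stopping rule; production EMD implementations (e.g. the PyEMD package) add boundary handling and stricter stopping criteria.

```python
# Minimal EMD sifting sketch: envelopes from splines through the extrema,
# mean envelope subtracted until the detail is roughly zero-mean, then
# the residual is decomposed again for the next IMF.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def emd(x, n_imfs=4, max_sift=50, tol=1e-3):
    """Decompose x into a list of IMFs followed by the residual."""
    t = np.arange(len(x))
    imfs, r = [], x.astype(float).copy()
    for _ in range(n_imfs):
        h = r.copy()
        for _ in range(max_sift):
            mx = argrelextrema(h, np.greater)[0]   # local maxima Xmax
            mn = argrelextrema(h, np.less)[0]      # local minima Xmin
            if len(mx) < 3 or len(mn) < 3:         # too few extrema: done
                return imfs + [r]
            emax = CubicSpline(mx, h[mx])(t)       # maximum envelope
            emin = CubicSpline(mn, h[mn])(t)       # minimum envelope
            m = (emax + emin) / 2.0                # mean envelope m[n]
            h = h - m                              # detail signal d[n]
            if abs(np.mean(m)) < tol:              # zero-mean criterion
                break
        imfs.append(h)                             # buffer IMF
        r = r - h                                  # residual for next pass
    return imfs + [r]
```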
The proposed algorithm for LCEC is outlined below.
Algorithm LCEC:
Input: linear curvature signal Ls
Output: feature vector sfi
Step 1: Perform the EMD computation for the obtained linear curvature signal.
Compute the EMD by following steps a-f:
Step a: Compute the local maxima (Xmax) and local minima (Xmin) of the linear curvature sequence x[n].
Step b: Compute the minimum and maximum envelope signals, emin and emax.
Step c: Derive the mean envelope m[n].
Step d: Compute the detail signal d[n].
Step e: Validate against the zero-mean stopping criterion.
Step f: Buffer the derived detail signal as an IMF or as the residual.
Step 2: For the obtained IMFs, compute the spectral density of each IMF using the PSD.
Step 3: Select the two IMFs having the highest energy density (I1, I2).
Step 4: Compute the threshold limit for I1, I2 as 0.6 of max(Ii).
Step 5: Derive the feature vectors (sfi) from these two IMFs for classification.
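Steps 2-5 can be read as the short sketch below; scipy.signal.welch is an assumed PSD estimator and find_peaks an assumed peak picker, stand-ins rather than the paper's exact procedure.

```python
# Steps 2-5 of Algorithm LCEC: rank IMFs by spectral energy density,
# keep the two strongest, and take peaks above 0.6 of each maximum.
import numpy as np
from scipy.signal import welch, find_peaks

def lcec_features(imfs):
    energies = [np.mean(welch(imf)[1]) for imf in imfs]  # Step 2: PSD density
    top2 = np.argsort(energies)[-2:]                     # Step 3: I1, I2
    features = []
    for i in top2:
        thr = 0.6 * imfs[i].max()                        # Step 4: threshold
        peaks, _ = find_peaks(imfs[i], height=thr)       # Step 5: peak picking
        features += [(float(imfs[i][p]), int(p)) for p in peaks]
    return features                                      # sfi: (magnitude, position)
```

Chained with the earlier sketches, lcec_features(emd(Ls)[:-1]) would map a contour's linearized curvature to its LCEC descriptor, with the residual dropped before the PSD ranking.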
The operational flow chart for the EMD based decomposition is summarized in figure 6.
Figure 6: Operational flow chart for EMD coding
Figure 7: IMFs derived for the linearized signal
An example of the IMFs obtained for the linearized signal of figure 5 is shown in figure 7. The obtained IMFs {I1 - I4} are the decomposed detail IMFs, revealing different frequency content at each level. At each decomposition level the residual, r[n], is decomposed into the next IMF to obtain finer frequency information. Each IMF reveals a finer frequency content, and the feature-selection decision is then made based on the density of this frequency content. This approach results in the selection of feature details at lower frequency resolutions as well, details which were discarded in the conventional CSS approach. To derive the spectral density of the obtained IMFs, their power spectral densities (PSD) are computed. The PSD is a density operator defining the variation of power over the frequency content of a given signal x(t).
The power spectral density (PSD) of a given signal x(t) is defined by eqn.(8) as,
$S_x(f) = \lim_{T \to \infty} \dfrac{1}{T} \left| \int_{-T/2}^{T/2} x(t)\, e^{-j 2 \pi f t}\, dt \right|^{2}$ ... (8)
Taking each IMF 'Ii' as reference, a PSD 'PIi' is computed for each IMF. The PSD features for the four obtained IMFs are then defined by eqn.(9),
PIi = PSD(Ii), for i = 1 to 4 ... (9)
and the energy density of each IMF is derived from its averaged squared coefficients as eqn.(10),
$P_{I_i} = \dfrac{1}{N} \sum_{n=1}^{N} I_i(n)^{2}$ ... (10)
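Eqn.(10) amounts to a one-line mean-square energy per IMF, as sketched below; the imfs list is assumed to come from the decomposition sketched earlier.

```python
# Eqn.(10): averaged squared coefficients give each IMF's energy density.
import numpy as np
P = [np.mean(imf ** 2) for imf in imfs]  # imfs assumed from the EMD sketch
```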
From these obtained energy values, the IMFs are selected based on a defined selection criterion, as outlined below:
For the obtained PIi, the maximum PI is first computed:
MPI = max(PIi)
for i = 1 to 4
if (PIi ≥ (MPI / 2))
sel_Ii = Ii
end
For these selected IMFs 'sel_Ii', the features are then computed by peak picking, as carried out in the CSS approach: for each selected IMF the maximum value is computed, and all the coordinates above 60% of that peak value are taken as the shape features 'sfi'. With this approach, the finer frequency contents that were discarded in the CSS approach are also considered for feature extraction, hence deriving more informative features than CSS. To evaluate the developed approach, a simulation model of the proposed approach is developed. The system architecture for the proposed approach is shown in figure 8.
Figure 8: System Architecture for the proposed approach
The developed approach is carried out in two operational stages, training and testing. In the training process, a set of recorded images is processed in sequence and feature extraction is performed; for each training image, the obtained features are buffered into an array termed the knowledge database. During querying, the same process is repeated over the test sample, and the obtained query features are passed to a classifier to retrieve information from the knowledge database. For classification, a k-Nearest Neighbors (k-NN) classifier is used. The classifier is designed with a Euclidean distance based approach to obtain the best set of matches from the knowledge database. The decision 'D' for the retrieval is derived as the minimum value of the Euclidean distance, defined by eqn.(11),
$D = \min_i \left( ED_i \right)$ ... (11)
where the Euclidean distance is
$ED_i = \sqrt{\sum_j \left( Q_j - dbf_{i,j} \right)^{2}}$ ... (12)
with Q the query feature and dbfi the features trained in the database.
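A minimal retrieval sketch for eqns.(11)-(12) follows; it assumes fixed-length feature vectors stacked row-wise in a database matrix, whereas the paper's variable-length peak lists would first need alignment to a common length.

```python
# Eqns.(11)-(12): Euclidean-distance k-NN retrieval over the knowledge base.
import numpy as np

def retrieve(Q, dbf, k=3):
    """Indices of the k database features nearest to the query Q."""
    dists = np.linalg.norm(dbf - Q, axis=1)  # ED_i for every database row
    return np.argsort(dists)[:k]             # minimum-distance decisions D
```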
To analyze the performance of the developed system, an experimental analysis is carried out, as presented below.
V. Experimental Results
For the simulation of the proposed approach, the leaf database MEW2010 [23] is used. This database was created for experiments on the recognition of wood species based on leaf shape. The database consists of 90 wood species of trees and bushes, with 2 to 25 samples per species and a total of 795 samples. The samples were scanned with a 300 dpi scanner and saved in binarized format. A few samples from this database are shown in figure 9.
Figure 9: Training Database samples
These samples are passed to the processing algorithm for training, where each image is read in sequence and the computed features are buffered in an array. This buffered information is taken as the knowledge for classification. The proposed approach is then carried out for a selected query sample, and the processing results obtained are illustrated below.
Figure 10: Query sample
Figure 11: Extracted edge regions
Figure 12: Extracted contour regions
For the evaluation of the developed work, a randomly selected test query is passed to the developed system. The selected test image is shown in figure 10. This sample is processed to retrieve similar details from the stored database. For the given test sample, an edge extraction process is carried out to find its bounding region; a Canny operator is used to extract the edge details, which are shown in figure 11. Using the 8-neighbour region growing method, the contour evolution process is then carried out, and a bounding region for the obtained edge region is derived, as shown in figure 12. The contour defines the characteristic shape region of a sample; in the illustrated figure it is observed that the obtained contour defines the shape of the leaf region. From the evolved contour a curvature computation is made, and the obtained curvature is represented as a linear 1-D signal, as illustrated in figure 13 below.
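The edge-to-contour step can be sketched with OpenCV as below; cv2.findContours, which walks 8-connected boundary pixels, is an assumed stand-in for the paper's 8-neighbour region growing, and the file name is hypothetical.

```python
# Sketch of the edge extraction and contour evolution described above.
import cv2

img = cv2.imread("query_sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(img, 100, 200)                            # Canny edge map
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)
boundary = max(contours, key=cv2.contourArea)               # bounding contour
x = boundary[:, 0, 0].astype(float)                         # x(u) coordinates
y = boundary[:, 0, 1].astype(float)                         # y(u) coordinates
```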
Figure 13: Linearized signal representation of the curvature scale plot for the given sample
Figure 14: Initial IMF for the linearized signal
For the given test sample, the derived curvature information is buffered into a linear array, and the coefficients are treated as the elements of a 1-D linear signal on which empirical decomposition is performed to compute the IMFs. The initial IMF of the linearized signal is shown in figure 14 above; the signal variations are highly concentrated in this IMF. Successive decomposition is then carried out over the residual signal.
Figure 15: IMFs for the linearized signal representation
Figure 16: Spectral energy density for the decomposed IMFs
The 6 IMFs and the residual obtained are shown in figure 15 above. It is observed that IMF 1 and IMF 3 exhibit higher coefficient variation than the other bands; hence more curvature information is presented in these two functions. To select the required IMFs for feature extraction, the spectral energy density of each band is computed using the power spectral density approach, i.e., each IMF's coefficients are squared, summed and averaged to obtain its energy. The energy density for each band is shown in figure 16. From the obtained IMF energies it is observed that IMFs 1 and 3 have comparatively higher energy density than the other IMFs, which is in sync with the observations made from the IMF plots above. Based on the derived energies, the two IMFs of highest energy density are selected, which are IMFs 1 and 3 in this case.
Figure 17: Extraction of features from selected IMF-1
Figure 18: Extraction of features from selected IMF-3
Figures 17 and 18 show the two IMFs selected for feature extraction. The features are selected following the same maximum-thresholding procedure as used in the conventional MCC approach: for the two selected IMFs, the maximum peak values are found, and a threshold of 0.6 of the maximum peak is set. All the peaks falling above this threshold are recorded as feature magnitudes with their corresponding coordinates, capturing the dominant curvature peaks. The feature values obtained for the given test sample are (3.5, 50), (1.4, 600), (2.9, 99) and (2.3, 380).
With these extracted features, a retrieval process is carried out for two test cases. The developed system is evaluated on two samples of different types (the dissimilar case) and on two spatially similar samples that are visibly alike (the similar case). This evaluation analyzes the retrieval performance of the developed system under both distinct and similar samples. The results obtained for the developed systems are shown below.
Figure 19: Test sample
Figure 20: (a) Top-3 classification for edge based [1] coding; (b) top retrieved sample
Figure 21: (a) Top-3 classification for contour based [4,5] coding; (b) top retrieved sample
Figure 22: (a) Top-3 classification for MCC based [6] coding; (b) top retrieved sample
Figure 23: (a) Top-3 classification for LCEC based coding; (b) top retrieved sample
The retrieval system observations are presented in figures 20-23. The original test sample is shown in figure 19. The retrieval operation for edge based coding is presented in figure 20(a),(b): a Canny edge operator is used to derive the edge information [1], and features are derived from the obtained edge region to retrieve matches. The top three classified observations and the best-match retrieval are shown in figure 20(a) and (b) respectively. Figure 21(a) shows the top retrieval results obtained for the contour based [4,5] approach, and the top retrieval is shown in figure 21(b). For the MCC [6] based approach, the observations are shown in figure 22(a) and (b) respectively. In figure 23, the retrieval observations based on the proposed LCEC approach are
presented. From the observations obtained for the developed methods, it is seen that the proposed LCEC based approach achieves better retrieval performance than the conventional approaches. This is due to the inclusion of finer variation details in LCEC: with the two-IMF selection process, the curvatures of higher density are recorded as in MCC, while the second selected IMF contributes second-level curvature coefficients to the feature description, coefficients which are discarded entirely in all previous methods. This two-level curvature selection results in higher retrieval accuracy.
To evaluate the retrieval efficiency of the developed approach, the performance measures of recall and precision are used. The recall and precision factors are taken from [10, 11]: recall is defined as the ratio of the number of relevant images retrieved to the total number of relevant images present, and precision as the ratio of the number of relevant images retrieved to the total number of images retrieved. The recall and precision factors are defined as:
$\text{Precision} = \dfrac{\text{No. of relevant images retrieved}}{\text{No. of images retrieved}}$
$\text{Recall} = \dfrac{\text{No. of relevant images retrieved}}{\text{No. of relevant images present}}$
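For a single query these two measures reduce to the sketch below, assuming the retrieved result list and the set of database indices relevant to the query are known.

```python
# Precision and recall for one query, per the definitions above.
def precision_recall(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)
```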
The recall versus precision observations obtained for the different test cases are presented in table I below.
Table I: Observation of recall v/s precision for the developed system
Figure 24: Recall-precision curves for the developed system, plotting precision (%) against recall rate (%) for Edge [1], Contour [4,5], MCC [6] and the proposed LCEC
The recall-precision curves obtained for the proposed system are depicted in figure 24 above.
VI. Conclusion
A new linear curvature empirical coding (LCEC) approach using empirical mode decomposition for shape feature description has been proposed. A linear representation of multiple signals is developed, wherein the curvature coefficients at different levels of the smoothing factor are aggregated to perform a linearization operation. In the process, a spectral feature representation based on the power spectral density is developed, and a selection criterion of IMFs for the feature extraction is proposed. From the observations obtained, the recall rate of the proposed system is improved, owing to the finer feature information incorporated in the feature description.
References
[1] Hankyu Moon, Rama Chellappa, and Azriel Rosenfeld, Optimal Edge-Based Shape Detection, IEEE Transactions on Image
Processing, Vol. 11, No. 11, November, 2002.
[2] Ken Chatfield, James Philbin, Andrew Zisserman, Efficient retrieval of deformable shape classes using local self-similarities, Proc. 12th International Conference on Computer Vision Workshops, IEEE, 2009.
[3] Ye Mei and Dimitrios Androutsos, Robust affine invariant region-based shape descriptors: The ICA Zernike moment shape descriptor and the whitening Zernike moment shape descriptor, IEEE Signal Processing Letters, Vol. 16, No. 10, October 2009.
[4] Jamie Shotton, Andrew Blake, Roberto Cipolla, Contour-based learning for object detection, Proc. 10th International Conference on Computer Vision, IEEE, 2005.
[5] Xiang Bai, Xingwei Yang, Longin Jan Latecki, Detection and recognition of contour parts based on shape similarity, Pattern Recognition, Elsevier, 2008.
[6] Iivari Kunttu, Leena Lepisto, Juhani Rauhamaa and Ari Visa, Multiscale Fourier descriptor for shape-based image retrieval, Proc. International Conference on Pattern Recognition, 2004.
[7] B. Zhong and W. Liao, Direct curvature scale space: theory and corner detection, IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 29, no. 3, pp. 508–512, 2007.
[8] Y. Cui and B. Zhong, Shape retrieval based on parabolically fitted curvature scale-space maps, Intelligent Science and Intelligent
Data Engineering, vol. 7751 of Lecture Notes in Computer Science, pp.743–750, 2013.
[9] Y. Gao, G. Han, G. Li, Y.Wo, and D.Wang, Development of current moment techniques in image analysis, Journal of Image and
Graphics, vol. 14, no. 8, pp. 1495–1501, 2009.
[10] Mehul P. Sampat, Zhou Wang, Shalini Gupta, Alan Conrad Bovik and Mia K. Markey, Complex wavelet structural similarity: A
new image similarity index, IEEE transactions on image processing, Vol. 18, No. 11, November 2009.
[11] Suhas G. Salve, Kalpana C. Jondhale, Shape matching and object recognition using shape contexts, Proc. IEEE 2010.
[12] Serge Belongie, Jitendra Malik and Jan Puzicha, Shape matching and object recognition using shape contexts, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002.
[13] Dengsheng Zhang, Guojun Lu, Review of shape representation and description techniques, Pattern recognition, Elsevier, 2004.
[14] Gul-e-Saman, S. Asif M. Gilani, Object recognition by modified scale invariant feature transform, Proc. 3rd International Workshop on Semantic Media Adaptation and Personalization, 2008.
[15] Sadegh Abbasi, Farzin Mokhtarian, Josef Kittler, Curvature scale space image in shape similarity retrieval, Multimedia Systems 7:
467–476, Springer, 1999.
[16] Donghoh Kim and Hee-Seok Oh, EMD: A Package for Empirical Mode Decomposition and Hilbert Spectrum, The R Journal, Vol. 1, No. 1, May 2009.
[17] Navin Chatlani and John J. Soraghan, EMD-Based Filtering (EMDF) of Low-Frequency Noise for Speech Enhancement, IEEE
Transactions on Audio, Speech, and Language Processing, Vol. 20, No. 4, May 2012.
[18] Konstantinos Ioannidis and Ioannis Andreadis, A Digital Image Stabilization Method Based on the Hilbert–Huang Transform, IEEE
Transactions on Instrumentation and Measurement, Vol. 61, No. 9, September 2012.
[19] Jeffery C. Chan, Hui Ma, Tapan K. Saha and Chandima Ekanayake, Self-adaptive Partial Discharge Signal De-noising Based on
Ensemble Empirical Mode Decomposition and Automatic Morphological Thresholding, IEEE Transactions On Dielectrics and
Electrical Insulation, 21 1: 294-303, 2014.
[20] Piotr Dudek, David Lopez Vilarino, A Cellular Active Contours Algorithm Based on Region Evolution, Proc. International
Workshop on Cellular Neural Networks and Their Applications, IEEE, 2006.
[21] Tian Qiu, Yong Yan, and Gang Lu, An Autoadaptive Edge-Detection Algorithm for Flame and Fire Image Processing, IEEE
Transactions on Instrumentation and Measurement, Vol. 61, No. 5, May 2012.
[22] Dayanand Jamkhnadikar, V.D. Mytri, CSS based Trademark retrieval system, Proc. of International Conference on Electronic
Systems, Signal Processing and Computing Technologies IEEE 2014 pp 129-133.
[23] http://zoi.utia.cas.cz/tree_leaves