This paper elaborates on the selection of a suitable similarity measure for content-based image retrieval (CBIR). It presents the analysis done after applying the Minkowski distance of orders one through five, and also explains the effective use of the correlation distance, expressed as the angle cos θ between two vectors. The feature vector database prepared for this experimentation is based on extracting the first four moments from 27 bins, formed by partitioning the equalized histograms of the R, G and B planes of an image into three parts; this generates a feature vector of dimension 27. The image database used in this work includes 2000 BMP images from 20 different classes. Three feature vector databases of the four moments, namely mean, standard deviation, skewness and kurtosis, are prepared for the three color planes (R, G and B) separately. The system then enters its second phase, comparing the query image against the database images using the set of similarity measures mentioned above. Results obtained using all distance measures are evaluated using three parameters: PRCP, LSRR and Longest String. These results are then refined and narrowed by combining the three results for the R, G and B planes using criterion 3. Analysis of these results with respect to the similarity measures shows that lower orders of the Minkowski distance are more effective than higher orders, and the correlation distance also proved to be among the best for these CBIR results.
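As a rough sketch (not the paper's implementation), the two families of measures can be written in plain Python; the short vectors below are stand-ins for the 27-dimensional moment vectors described above:

```python
import math

def minkowski(a, b, p):
    """Minkowski distance of order p; p=1 is Manhattan, p=2 is Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def cosine_similarity(a, b):
    """Correlation-style measure: cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.4, 1.2, 0.9]
candidate = [0.5, 1.0, 1.1]
for p in range(1, 6):  # orders one through five, as in the paper
    print(p, minkowski(query, candidate, p))
print("cos:", cosine_similarity(query, candidate))
```

Lower orders weight all coordinate differences more evenly, while higher orders are dominated by the single largest difference, which is one intuition for why low-order Minkowski distances can rank retrieval candidates more robustly.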
Analysis of color image features extraction using texture methods (TELKOMNIKA JOURNAL)
Digital color images are among the most important types of data currently in circulation; they are used in many vital applications. Hence, a compact data representation of the image is an important issue. This paper focuses on analyzing different methods used to extract texture features from a color image. These features can be used as a primary key to identify and recognize the image. The proposed discrete wave equation (DWE) method of generating a color image key is presented, implemented and tested. This method showed an 85% reduction in key size compared with other methods.
Multi Resolution features of Content Based Image Retrieval (IDES Editor)
Many content-based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform and Complex Wavelet Transform techniques for efficient image retrieval from a huge database. The Color Histogram technique is based on exact matching of the histograms of the query image and the database images. The Discrete Wavelet Transform technique retrieves images based on computation of the wavelet coefficients of subbands. The Complex Wavelet Transform technique computes real and imaginary parts to extract details from texture. The proposed method is tested on the COREL1000 database, and the retrieval results demonstrate a significant improvement in precision and recall.
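The histogram-matching step described above can be sketched in plain Python; the bin count, the L1 comparison, and the tiny pixel lists are illustrative assumptions, not the paper's exact settings:

```python
def histogram(pixels, bins=4, max_val=256):
    """Normalized intensity histogram of a flat list of pixel values."""
    counts = [0] * bins
    for p in pixels:
        counts[p * bins // max_val] += 1
    total = len(pixels)
    return [c / total for c in counts]

def l1_histogram_distance(h1, h2):
    """Sum of absolute bin differences; 0.0 means the histograms match exactly."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

query_px = [10, 10, 200, 250]
db_px = [12, 9, 210, 240]
print(l1_histogram_distance(histogram(query_px), histogram(db_px)))  # 0.0
```

With coarse bins, small pixel-level differences fall into the same bin, so the two images above compare as an exact histogram match.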
COMPARATIVE STUDY OF DISTANCE METRICS FOR FINDING SKIN COLOR SIMILARITY OF TW... (cscpconf)
This paper describes a comparative study of performance among existing distance metrics, namely Manhattan, Euclidean, Vector Cosine Angle and Modified Euclidean distance, for finding the similarity of complexion by calculating the distance between the skin colors of two color facial images. The existing methodologies were tested on 110 male and 40 female facial images taken from the FRAV2D database. To verify the results obtained from these methodologies, an opinion poll of 100 people was taken. The experimental results show that the Manhattan, Euclidean and Vector Cosine Angle distance methodologies contradict the survey result in 80% of cases, while for the Modified Euclidean distance methodology the contradiction arises in 60% of cases. The present work was implemented using Matlab 7.
Quality and size assessment of quantized images using K-Means++ clustering (journalBEEI)
In this paper, an amended K-Means algorithm called K-Means++ is implemented for color quantization. K-Means++ improves on K-Means by overcoming the random selection of the initial centroids: its main advantage is that the chosen centroids are distributed over the data in a way that reduces the sum of squared errors (SSE). The K-Means++ algorithm is used to analyze the color distribution of an image and create the color palette, producing a better quantized image than the standard K-Means algorithm. Tests were conducted on several popular true-color images with different values of K: 32, 64, 128, and 256. The results show that the K-Means++ clustering algorithm yields PSNR values 2.58% higher and file sizes 1.05% lower than the K-Means algorithm. It is envisaged that this clustering algorithm will benefit many applications, such as document clustering, market segmentation, image compression and image segmentation, because it produces accurate and stable results.
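The K-Means++ seeding step that the abstract credits with lowering the SSE can be sketched as follows; the toy "colors" and the fixed seed are illustrative assumptions, and a full quantizer would follow this with ordinary K-Means iterations:

```python
import random

def kmeans_pp_seeds(points, k, seed=0):
    """K-Means++ seeding: each new centroid is drawn with probability
    proportional to its squared distance from the nearest existing
    centroid, spreading the seeds over the data."""
    rng = random.Random(seed)

    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centroids = [rng.choice(points)]
    while len(centroids) < k:
        # Weight every point by its squared distance to the closest seed.
        weights = [min(sq_dist(p, c) for c in centroids) for p in points]
        centroids.append(rng.choices(points, weights=weights)[0])
    return centroids

# Toy palette data: two tight color clusters.
colors = [(10, 10, 10), (12, 11, 9), (240, 240, 240), (241, 238, 242)]
print(kmeans_pp_seeds(colors, 2))
```

Because already-chosen seeds carry zero weight, the second seed is always a different point, and with high probability it lands in the far cluster, which is exactly the behavior that reduces the SSE relative to uniform random seeding.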
A new hybrid steganographic method for histogram preservation (IJEEE)
This paper presents a histogram-preserving data embedding method for grey-scale images based on pixel value differencing (PVD) and least-significant-bit (LSB) substitution. Various PVD-based steganographic methods achieve high data embedding capacity with minimal distortion in the stego image, but at the cost of changing histogram characteristics, which can be detected by histogram-based steganalysers. This persistent problem is addressed by the proposed data hiding method. The improved performance of the proposed method is verified through extensive simulations.
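For context, plain LSB substitution (the baseline the hybrid method builds on) can be sketched in a few lines; this is the generic technique, not the paper's histogram-preserving variant:

```python
def embed_lsb(pixels, bits):
    """Replace the least-significant bit of each leading pixel with one
    message bit; remaining pixels are left untouched."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_lsb(pixels, n):
    """Read back the first n least-significant bits."""
    return [p & 1 for p in pixels[:n]]

cover = [100, 101, 102, 103, 104]
message = [1, 0, 1]
stego = embed_lsb(cover, message)
print(stego)                          # each pixel changes by at most 1
assert extract_lsb(stego, 3) == message
```

Each embedded bit perturbs a pixel by at most one grey level, yet across many pixels these perturbations shift the histogram, which is precisely the statistical trace that histogram-based steganalysers detect and that the paper's method aims to preserve against.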
Colorization of Greyscale Images Using Kekre’s Biorthogonal Color Spaces and ... (Waqas Tariq)
There is no exact solution for colorization of greyscale images. The main focus of the techniques in [24] is to minimise the human effort needed to manually color greyscale images. Human interaction is needed only to find a reference color image; the job of transferring color traits from the reference color image to the greyscale image is then done by the techniques discussed in [1]. Here the colors from a source color image are picked up and injected into the greyscale image being colored. The color palette used in the colorization technique discussed here is generated using a modified VQ codebook obtained by applying Kekre’s Fast Codebook Generation algorithm. The technique is tested using various VQ codebook sizes such as 64, 128 and 256. In this paper, the techniques of color trait transfer to greyscale images are revisited with various color spaces, including the newly introduced Kekre’s Biorthogonal color spaces and the RGB color space. The pixel window used is of size 2x2. The color trait transfer algorithms are tested over five different images to decide which color space gives the best quality of coloring. The experimental results show that the Kekre’s Biorthogonal Green color space gives better coloring.
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLING (sipij)
Object tracking is one of the most important problems in computer vision. The aim of video tracking is to extract the trajectory of a target or object of interest, i.e., to accurately locate a moving target in a video sequence and to discriminate the target from non-targets in the feature space of the sequence. Feature descriptors can therefore have significant effects on such discrimination. In this paper, we use the basic idea shared by many trackers, which consists of three main components of the reference model: object modeling, object detection and localization, and model updating. However, there are major improvements in our system. Our fourth component, occlusion handling, utilizes the r-spatiogram to detect the best target candidate. While a spatiogram contains moments over the coordinates of the pixels, the r-spatiogram computes region-based compactness on the distribution of the given feature in the image, capturing richer features to represent objects. The proposed research develops an efficient and robust way to keep tracking an object throughout video sequences in the presence of significant appearance variations and severe occlusions. The proposed method is evaluated on the Princeton RGBD tracking dataset, considering sequences with different challenges, and the obtained results demonstrate its effectiveness.
Application of Image Retrieval Techniques to Understand Evolving Weather (ijsrd.com)
Multispectral satellite images provide valuable information for understanding the evolution of various weather systems, such as tropical cyclones, shifting of the inter-tropical convergence zone, and movements of various troughs; accurate prediction and estimation will save lives and property. This work deals with the development of an application that enables users to search for an image in a database using gray-level, texture or shape features for meteorological satellite image retrieval. The gray-level feature is extracted using the histogram method. The texture feature is extracted using the gray-level co-occurrence method and a wavelet approach. The shape feature vector is extracted using morphological operations. The similarity between the query image and database images is calculated using the Euclidean distance. The performance of the system is evaluated using precision.
The development of multimedia system technology in Content-Based Image Retrieval (CBIR) systems is one of the outstanding areas for retrieving images from a large database collection. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective in handling all the different kinds of natural images. Thus an intensive analysis of certain color, texture and shape extraction techniques is carried out to identify an efficient CBIR technique that suits a particular type of image. Image extraction includes feature description and feature extraction. In this paper, we propose the Color Layout Descriptor (CLD), Grey Level Co-Occurrence Matrix (GLCM) and Marker-Controlled Watershed Segmentation feature extraction techniques, which retrieve the matching image based on the similarity of color, texture and shape within the database. For performance analysis, the image retrieval timing of the proposed technique is calculated and compared with that of each individual feature.
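The GLCM texture descriptor mentioned above can be sketched in plain Python; the tiny 3x3 image, the single (1, 0) offset, and the symmetric accumulation are illustrative assumptions rather than the paper's configuration:

```python
def glcm(img, levels, dx=1, dy=0, symmetric=True):
    """Grey-level co-occurrence matrix: counts how often grey level i
    occurs at offset (dx, dy) from grey level j in the image."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
                if symmetric:  # count the pair in both directions
                    m[img[ny][nx]][img[y][x]] += 1
    return m

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
for row in glcm(img, 3):
    print(row)
```

Texture features such as contrast, energy or homogeneity are then computed as weighted sums over this matrix; images with coarse texture concentrate counts near the diagonal, fine texture spreads them out.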
Many applications, such as robot navigation, defense, medical imaging and remote sensing, perform various processing tasks which can be carried out more easily when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, both large and small scale, from the source images; guided filtering is employed for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the layers obtained. Experimental results demonstrate that the proposed method obtains better fusion performance across all sets of images. The results clearly indicate the feasibility of the proposed approach.
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion Ratio (CSCJournals)
We intend to build a 3D model from a stereo pair of images using a novel method of local matching in the pixel domain for calculating horizontal disparities. We also find the occlusion ratio using the stereo pair, followed by the use of the Edge Detection and Image SegmentatiON (EDISON) system on one of the images, which provides a complete toolbox for discontinuity-preserving filtering, segmentation and edge detection. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. We then warp the segment disparities to the original image to get our final 3D viewing model.
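Local matching for horizontal disparity is commonly done by minimizing a window cost along the scanline; the sketch below uses the generic sum-of-absolute-differences (SAD) criterion on a single row, which is an assumption for illustration and not necessarily the paper's own matching cost:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left_row, right_row, x, win=2, max_d=5):
    """Horizontal disparity at column x of the left scanline, found by
    sliding a small window over the right scanline and minimizing SAD."""
    patch = left_row[x:x + win]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        if x - d < 0:
            break
        cost = sad(patch, right_row[x - d:x - d + win])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left = [0, 0, 9, 9, 0, 0, 0]
right = [9, 9, 0, 0, 0, 0, 0]  # the same edge shifted left by 2 columns
print(disparity(left, right, 2))  # 2
```

Assigning one disparity plane per segment, as the abstract describes, amounts to fitting such per-pixel estimates within each EDISON segment instead of keeping them independent.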
Image Inpainting Using Cloning Algorithms (CSCJournals)
In image recovery, image inpainting has become an essential and crucial research topic of the new era. The objective is to restore an image using the surrounding information, or to modify an image in a way that looks natural to the viewer. The process involves transporting and diffusing image information. In this paper, a cloning concept is used to inpaint an image, with a multiscale transformation method used for the cloning process. Results are compared with conventional methods, namely the Taylor expansion method, Poisson editing and Shepard’s method. Experimental analysis verifies better results and shows that Shepard’s method with multiscale transformation restores not only small-scale damage but also large damaged areas, and is useful for duplicating image information within an image.
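Shepard's method, one of the baselines above, is inverse-distance-weighted interpolation; a minimal sketch (the power parameter and toy pixel layout are illustrative assumptions):

```python
def shepard_fill(known, x, y, power=2):
    """Shepard's method: fill pixel (x, y) with an inverse-distance-weighted
    average of known pixels, given as a dict {(px, py): value}."""
    num = den = 0.0
    for (px, py), v in known.items():
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return v  # pixel is already known
        w = 1.0 / d2 ** (power / 2)  # weight falls off with distance^power
        num += w * v
        den += w
    return num / den

# A damaged pixel surrounded by four known neighbours of equal intensity.
known = {(0, 1): 80, (2, 1): 80, (1, 0): 80, (1, 2): 80}
print(shepard_fill(known, 1, 1))  # 80.0
```

Because nearby pixels dominate the weighted average, the method diffuses surrounding intensities smoothly into the hole, which works well for small damage but blurs over large regions, matching the comparison reported in the abstract.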
Image enhancement techniques play a vital role in improving the quality of an image. An enhancement technique basically enhances the foreground information, retains the background, and improves the overall contrast of an image. In some cases the background of an image hides its structural information. This paper proposes an algorithm which enhances the foreground and background parts separately, stretches the contrast of the image at the inter-object and intra-object levels, and then combines them into an enhanced image. The results are compared with various classical methods using image quality measures.
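The per-region stretching step can be illustrated with a plain linear contrast stretch; applying it separately to foreground and background pixel sets is the idea, though the paper's actual inter/intra-object formulation may differ:

```python
def stretch(pixels, out_min=0, out_max=255):
    """Linear contrast stretch: map the region's [min, max] intensity
    range onto [out_min, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return list(pixels)  # flat region: nothing to stretch
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

foreground = [90, 100, 110]  # a low-contrast region
print(stretch(foreground))   # [0, 128, 255]
```

Stretching foreground and background independently lets each region use the full output range before the two are recombined, instead of one global stretch being dominated by whichever region spans the wider intensity range.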
In this paper, we analyze and compare the performance of fusion methods based on four different transforms: i) wavelet transform, ii) curvelet transform, iii) contourlet transform and iv) nonsubsampled contourlet transform. The fusion framework and scheme are explained in detail, and two different sets of images are used in our experiments. Furthermore, eight different performance metrics are adopted to comparatively analyze the fusion results. The comparison shows that the nonsubsampled contourlet transform method performs better than the other three methods, both spatially and spectrally. We also observed from additional experiments that a decomposition level of 3 offered the best fusion performance, and decomposition levels beyond level 3 did not significantly improve the fusion results.
Efficient fingerprint image enhancement algorithm based on gabor filter (eSAT Publishing House)
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi... (IJERA Editor)
Lossless image compression is needed in many fields, such as medical imaging, telemetry, geophysics, remote sensing and other applications which require an exact replica of the original image and in which loss of information is not tolerable. In this paper, a near-lossless image compression algorithm for color images is proposed, based on a row-by-row classifier with encoding schemes such as Lempel-Ziv-Welch (LZW), Huffman and Run Length Encoding (RLE). The algorithm divides the image into three parts, R, G and B, applies row-by-row classification on each part, and records the result of this classification in a mask image. After classification, the image data is decomposed into two sequences each for R, G and B, and the mask image is hidden in them. These sequences are encoded using different encoding schemes such as LZW, Huffman and RLE. An exhaustive comparative analysis is performed to evaluate these techniques, which reveals that the pro
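Of the three encoders compared above, RLE is the simplest to sketch; this is the generic scheme, not the paper's specific variant:

```python
def rle_encode(seq):
    """Run-length encoding: collapse runs of equal symbols into
    [symbol, count] pairs."""
    out = []
    for s in seq:
        if out and out[-1][0] == s:
            out[-1][1] += 1  # extend the current run
        else:
            out.append([s, 1])  # start a new run
    return out

def rle_decode(pairs):
    """Inverse of rle_encode: expand each pair back into a run."""
    return [s for s, n in pairs for _ in range(n)]

row = [255, 255, 255, 0, 0, 7]
encoded = rle_encode(row)
print(encoded)  # [[255, 3], [0, 2], [7, 1]]
assert rle_decode(encoded) == row  # lossless round trip
```

RLE wins when rows contain long uniform runs, which is precisely what a row-by-row classifier can expose; LZW and Huffman handle less repetitive rows better, hence the comparative analysis.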
The Gabor filter is a powerful way to enhance biometric images such as fingerprint images in order to extract correct features from them; the Gabor filter is also used to extract features directly, as in iris images, and it has sometimes been used for texture analysis. For fingerprint images, the even-symmetric Gabor filter, a contextual (multi-resolution) filter, is used to enhance the fingerprint image by filling small gaps (a low-pass effect) in the direction of the ridges (black regions) and by increasing the discrimination between ridges and valleys (black and white regions) in the direction orthogonal to the ridges. The proposed method applies the Gabor filter to a fingerprint image converted into a binary image after some simple enhancement methods, to partially overcome the time-consuming nature of the Gabor filter.
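The even-symmetric Gabor kernel described above is a cosine carrier under a Gaussian envelope; a minimal construction follows, with the size and parameter values chosen only for illustration:

```python
import math

def gabor_kernel(size, theta, wavelength, sigma, gamma=1.0):
    """Even-symmetric Gabor kernel: a cosine of the given wavelength,
    oriented along angle theta, under a Gaussian envelope. Convolving a
    fingerprint with it smooths along ridges (low-pass along theta) and
    sharpens the ridge/valley contrast across them."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(envelope * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(7, 0.0, 4.0, 2.0)
print(k[3][3])  # center value: cos(0) * exp(0) = 1.0
```

In practice theta and wavelength are set per block from the local ridge orientation and frequency, which is what makes the filter "contextual"; the binary pre-conversion mentioned in the abstract reduces the cost of applying such a bank of kernels.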
Performance Comparison of Image Retrieval Using Fractional Coefficients of Tr... (CSCJournals)
The thirst for better and faster retrieval techniques has always fuelled research in content-based image retrieval (CBIR). The paper presents innovative CBIR techniques based on feature vectors formed from fractional coefficients of transformed images using the Discrete Cosine, Walsh, Haar and Kekre’s transforms. Here the energy compaction of these transforms into a few low-order coefficients is exploited to greatly reduce the feature vector size per image by taking fractional coefficients of the transformed image. The feature vectors are extracted in fifteen different ways from the transformed image: the first considers all the coefficients of the transformed image, and then fourteen reduced coefficient sets (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012% and 0.006% of the complete transformed image) are considered as feature vectors. The four transforms are applied to the gray-image equivalents and to the color components of images to extract Gray and RGB feature sets, respectively. Instead of using all coefficients of the transformed images as the feature vector for image retrieval, these fourteen reduced coefficient sets for gray as well as RGB feature vectors are used, resulting in better performance and lower computation. The proposed CBIR techniques are implemented on a database of 1000 images spread across 11 categories. For each proposed CBIR technique, 55 queries (5 per category) are fired at the database, and net average precision and recall are computed for all feature sets per transform. The results show performance improvement (higher precision and recall values) with fractional coefficients compared to the complete transform of the image, at reduced computation, resulting in faster retrieval.
Finally, Kekre’s transform surpasses all the other discussed transforms in performance, with the highest precision and recall values for fractional coefficients (6.25% and 3.125% of all coefficients), and computation is lowered by 94.08% compared to the DCT.
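The fractional-coefficient idea can be sketched for the DCT case; the naive transform, the 4x4 toy block, and taking the top-left square as the "fraction" are illustrative assumptions (the paper also covers the Walsh, Haar and Kekre's transforms, and its exact coefficient selection may differ):

```python
import math

def dct2(block):
    """Naive (unnormalized) 2-D DCT-II, fine for small blocks; energy
    compaction puts most signal energy in the top-left coefficients."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n))
    return out

def fractional_features(coeffs, keep):
    """Keep only the top-left keep x keep corner as the feature vector."""
    return [coeffs[u][v] for u in range(keep) for v in range(keep)]

block = [[10, 12, 11, 9],
         [11, 10, 12, 10],
         [9, 11, 10, 12],
         [12, 9, 11, 10]]
features = fractional_features(dct2(block), keep=2)  # 4 of 16 = 25%
print(features)
```

Keeping 25%, 6.25%, and so on of the coefficients shrinks both the feature vector and the distance computation per comparison, which is where the reported speedups at comparable precision come from.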
A new hybrid steganographic method for histogram preservation IJEEE
This paper presents a histogram preserving data embedding method for grey-scale images which is based on pixel value differencing (PVD) and least-significant-bit (LSB) substitution methods. Various PVD based steganographic methods achieve high data embedding capacity with minimum distortions in stego image at the cost of change in histogram characteristics which is can be detected by histogram based steganalysers. This persistent problem can been taken care off by proposed method of data hiding. The improved performance of the proposed method is verified through extensive simulations.
Colorization of Greyscale Images Using Kekre’s Biorthogonal Color Spaces and ...Waqas Tariq
There is no exact solution for colorization of greyscale images. The main focus of the techniques [24] is to minimise the human efforts needed in manually coloring the greyscale images. The human interaction is needed only to find a reference color image, then the job of transferring color traits from reference color image to greyscale image is done by techniques discussed in[1]. Here the colors from some source color image are picked up and squirted into the colored greyscale image. The color palette used in colorization technique discussed here is generated using the modified VQ codebook obtained by applying Kekre’s Fast Codebook generation algorithm. The technique is tested using various VQ codebook sizes like 64, 128, 256. In this papaer the techniques of color traits transfer to greyscale images are revisited with various color spaces like newly introduced Kekre’s Biorthogonal color spaces and RGB color space. The pixel window size used is of size 2x2. Color traits transfer to greyscale algorithms are tested over five different images for deciding the color space giving best quality of coloring. The experimental results show that the Kekre’s Biorthogonal Green color space gives better coloring.
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLINGsipij
Object tracking is one of the most important problems in computer vision. The aim of video tracking is to extract the trajectories of a target or object of interest, i.e. accurately locate a moving target in a video sequence and discriminate target from non-targets in the feature space of the sequence. So, feature descriptors can have significant effects on such discrimination. In this paper, we use the basic idea of many trackers which consists of three main components of the reference model, i.e., object modeling, object detection and localization, and model updating. However, there are major improvements in our system. Our forth component, occlusion handling, utilizes the r-spatiogram to detect the best target candidate. While spatiogram contains some moments upon the coordinates of the pixels, r-spatiogram computes region-based compactness on the distribution of the given feature in the image that captures richer features to represent the objects. The proposed research develops an efficient and robust way to keep tracking the object throughout video sequences in the presence of significant appearance variations and severe occlusions. The proposed method is evaluated on the Princeton RGBD tracking dataset considering sequences with different challenges and the obtained results demonstrate the effectiveness of the proposed method.
Application of Image Retrieval Techniques to Understand Evolving Weatherijsrd.com
Multispectral satellite images provide valuable information to understand the evolution of various weather systems such as tropical cyclones, shifting of intra tropical convergence zone, moments of various troughs etc., accurate prediction and estimation will save live and property. This work will deal with the development of an application which will enable users to search an image from database using either gray level, texture and shape features for meteorological satellite image retrieval .Gray level feature is extracted using histogram method. The Texture feature is extracted using gray level co-occurrence method and wavelet approach. The shape feature vector is extracted using morphological operations. The similarity between query image and database images is calculated using Euclidian distance. The performance of the system is evaluated using precision
The development of multimedia system technology in Content based Image Retrieval (CBIR) System is
one in every of the outstanding area to retrieve the images from an oversized collection of database. The feature
vectors of the query image are compared with feature vectors of the database images to get matching images.It is
much observed that anyone algorithm isn't beneficial in extracting all differing kinds of natural images. Thus an
intensive analysis of certain color, texture and shape extraction techniques are allotted to spot an efficient CBIR
technique that suits for a selected sort of images. The Extraction of an image includes feature description and
feature extraction. During this paper, we tend to projected Color Layout Descriptor (CLD), grey Level Co-
Occurrences Matrix (GLCM), Marker-Controlled Watershed Segmentation feature extraction technique that
extract the matching image based on the similarity of Color, Texture and shape within the database. For
performance analysis, the image retrieval timing results of the projected technique is calculated and compared
with every of the individual feature.
Abstract: Many applications such as robot navigation, defense, medical and remote sensing performvarious processing tasks, which can be performed more easily when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity based variations that is large and small scale, from the source images. In this approach, guided filtering is employed for this extraction. Gaussian and Laplacian pyramidal approach is then used to fuse the different layers obtained. Experimental results demonstrate that the proposed method can obtain better performance for fusion of
all sets of images. The results clearly indicate the feasibility of the proposed approach.
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion RatioCSCJournals
We intend to make a 3D model using a stereo pair of images by using a novel method of local matching in pixel domain for calculating horizontal disparities. We also find the occlusion ratio using the stereo pair followed by the use of The Edge Detection and Image SegmentatiON (EDISON) system, on one the images, which provides a complete toolbox for discontinuity preserving filtering, segmentation and edge detection. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. We then warp the segment disparities to the original image to get our final 3D viewing Model.
Image Inpainting Using Cloning AlgorithmsCSCJournals
In image recovery image inpainting has become essential content and crucial topic in research of a new era. The objective is to restore the image with the surrounding information or modifying an image in a way that looks natural for the viewer. The process involves transporting and diffusing image information. In this paper to inpaint an image cloning concept has been used. Multiscale transformation method is used for cloning process of an image inpainting. Results are compared with conventional methods namely Taylor expansion method, poisson editing, Shepard’s method. Experimental analysis verifies better results and shows that Shepard’s method using multiscale transformation not only restores small scale damages but also large damaged area and useful in duplication of image information in an image.
Image enhancement technique plays vital role in improving the quality of the image. Enhancement
technique basically enhances the foreground information and retains the background and improve the
overall contrast of an image. In some case the background of an image hides the structural information of
an image. This paper proposes an algorithm which enhances the foreground image and the background
part separately and stretch the contrast of an image at inter-object level and intra-object level and then
combines it to an enhanced image. The results are compared with various classical methods using image
quality measures
In this paper, we analyze and compare the performance of fusion methods based on four different transforms: i) wavelet transform, ii) curvelet transform, iii) contourlet transform and iv) nonsubsampled contourlet transform. The fusion framework and scheme are explained in detail, and two different sets of images are used in our experiments. Furthermore, eight different performance metrics are adopted to comparatively analyze the fusion results. The comparison results show that the nonsubsampled contourlet transform method performs better than the other three methods, both spatially and spectrally. We also observed from additional experiments that a decomposition level of 3 offered the best fusion performance, and decomposition levels beyond level 3 did not significantly improve the fusion results.
Efficient Fingerprint Image Enhancement Algorithm Based on Gabor Filter (eSAT Publishing House)
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi... (IJERA Editor)
Lossless image compression is needed in many fields like medical imaging, telemetry, geophysics, remote sensing and other applications that require an exact replica of the original image, where loss of information is not tolerable. In this paper, a near-lossless image compression algorithm for color images is proposed, based on a row-by-row classifier with encoding schemes such as Lempel-Ziv-Welch (LZW), Huffman and Run Length Encoding (RLE). The algorithm divides the image into three parts, R, G and B, applies row-by-row classification on each part, and records the result of this classification in a mask image. After classification, the image data is decomposed into two sequences each for R, G and B, and the mask image is hidden in them. These sequences are encoded using the different encoding schemes (LZW, Huffman and RLE). An exhaustive comparative analysis is performed to evaluate these techniques, which reveals that the pro
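Of the three encoding schemes compared above, run-length encoding is the simplest. As an illustration only (not the paper's implementation), a minimal RLE encoder and decoder over a pixel sequence might look like:

```python
def rle_encode(seq):
    """Run-length encode a sequence into (value, count) pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1       # extend the current run
        else:
            out.append([v, 1])    # start a new run
    return [(v, c) for v, c in out]

def rle_decode(pairs):
    """Invert rle_encode: expand each (value, count) pair back into a run."""
    return [v for v, c in pairs for _ in range(c)]
```

RLE only wins when the input contains long runs, which is exactly what the row-by-row classification above tries to produce before encoding.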
The Gabor filter is a powerful way to enhance biometric images such as fingerprint images in order to extract correct features from them; it is also used for extracting features directly, as in iris images, and is sometimes used for texture analysis. In fingerprint images, the even-symmetric Gabor filter, a contextual (multi-resolution) filter, is used to enhance the fingerprint image by filling small gaps (a low-pass effect) in the direction of the ridge (black regions) and by increasing the discrimination between ridge and valley (black and white regions) in the direction orthogonal to the ridge. The proposed method applies the Gabor filter to the fingerprint image after translating it into a binary image, following some simple enhancement steps, to partially overcome the time-consuming nature of the Gabor filter.
Performance Comparison of Image Retrieval Using Fractional Coefficients of Tr... (CSCJournals)
The thirst for better and faster retrieval techniques has always fuelled research in content based image retrieval (CBIR). The paper presents innovative content based image retrieval (CBIR) techniques based on feature vectors formed as fractional coefficients of transformed images using Discrete Cosine, Walsh, Haar and Kekre's transforms. Here the energy compaction property of these transforms is exploited to greatly reduce the feature vector size per image by taking fractional coefficients of the transformed image. The feature vectors are extracted in fourteen different ways from the transformed image: first all the coefficients of the transformed image are considered, and then fourteen reduced coefficient sets (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012% and 0.06% of the complete transformed image) are considered as feature vectors. The four transforms are applied on gray image equivalents and on the colour components of images to extract Gray and RGB feature sets respectively. Instead of using all coefficients of the transformed images as the feature vector for image retrieval, these fourteen reduced coefficient sets for gray as well as RGB feature vectors are used, resulting in better performance and lower computations. The proposed CBIR techniques are implemented on a database of 1000 images spread across 11 categories. For each proposed CBIR technique, 55 queries (5 per category) are fired on the database and net average precision and recall are computed for all feature sets per transform. The results show performance improvement (higher precision and recall values) with fractional coefficients compared to the complete transform of the image, at reduced computation, resulting in faster retrieval.
Finally, Kekre's transform surpasses all other discussed transforms in performance, with the highest precision and recall values for fractional coefficients (6.25% and 3.125% of all coefficients), and computation is lowered by 94.08% as compared to the DCT.
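The fractional-coefficient idea can be sketched as follows. This is an illustrative reconstruction, not the authors' code: after a 2D DCT, most signal energy concentrates in the top-left (low-frequency) corner, so keeping only a small top-left block gives a much shorter feature vector. The block-size formula here is an assumption chosen so that a 6.25% fraction keeps 6.25% of the coefficients of a square image.

```python
import numpy as np

def dct2(img):
    """2D orthonormal DCT-II via separable matrix multiplication."""
    def dct_matrix(k):
        # Row j, column i: cos(pi * j * (i + 0.5) / k), orthonormally scaled.
        M = np.cos(np.pi * np.outer(np.arange(k), np.arange(k) + 0.5) / k)
        M *= np.sqrt(2.0 / k)
        M[0] /= np.sqrt(2)
        return M
    n, m = img.shape
    return dct_matrix(n) @ img.astype(float) @ dct_matrix(m).T

def fractional_features(img, fraction=0.0625):
    """Keep only the top-left block of DCT coefficients (energy compaction)."""
    coeffs = dct2(img)
    k = max(1, int(round(np.sqrt(fraction) * min(img.shape))))
    return coeffs[:k, :k].ravel()
```

For a 64x64 image and fraction 0.0625, the feature vector has 16x16 = 256 entries, i.e. 6.25% of the 4096 coefficients, which is what makes the per-query comparison cheaper.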
In this project, we propose a Content Based Image Retrieval (CBIR) system which is used to retrieve a relevant image from an outsized database. Textile images showed the way for the development of CBIR. The system establishes an efficient combination of color, shape and texture features. Here textile images are given as the dataset, and the images in the database are loaded. Each image is given as input to a feature extraction technique, which transforms the input image into a set of features such as color, texture and shape. The texture feature of an image is taken out using the Gray Level Co-occurrence Matrix (GLCM). The color feature of an image is obtained from the HSI color space. The shape feature of an image is extracted using the Sobel technique. These algorithms are used to calculate the similarity between extracted features. The features are combined effectively so that retrieval accuracy and recall rate are enhanced. Classification techniques such as the Support Vector Machine (SVM) are used to classify the features of a query image by splitting them into groups such as color, shape and texture. Finally, the relevant images are retrieved from the large database and the efficiency is plotted. The software used is MATLAB 7.10 (matrix laboratory), which is used to build the software application.
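As context for the texture step above, a gray level co-occurrence matrix counts how often pairs of quantised gray levels occur at a fixed pixel offset, and Haralick-style statistics are then read off it. A minimal sketch (illustrative Python, not the project's MATLAB code; the bin count and offset are assumptions):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset, normalised to sum 1."""
    q = (img.astype(float) / 256 * levels).astype(int)  # quantise to `levels` bins
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(m):
    """Contrast, energy and homogeneity of a normalised GLCM."""
    i, j = np.indices(m.shape)
    return {
        "contrast": ((i - j) ** 2 * m).sum(),
        "energy": (m ** 2).sum(),
        "homogeneity": (m / (1.0 + abs(i - j))).sum(),
    }
```

A perfectly flat image puts all mass in one diagonal cell, giving contrast 0 and energy 1; textured images spread mass off the diagonal.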
Content-Based Image Retrieval (CBIR) systems employ color as the primary feature, with texture and shape as secondary features. In this project a simple image retrieval system will be implemented.
Learning in Content Based Image Retrieval: A Review (eSAT Journals)
Abstract
Relevance feedback (RF) in Content Based Image Retrieval is an interactive process where the user provides feedback on the system-retrieved images to bridge the gap between high-level user semantics and the machine-extracted low-level features of images. RF exploits Machine Learning and Pattern Recognition techniques for Short Term Learning (STL) and Long Term Learning (LTL) to provide improved retrieval performance. Intra-query and across-query learning have received enormous attention over the past decade. This paper first categorizes the various learning techniques and discusses the intuition behind each of them. State-of-the-art learning techniques, ranging from feature relevance learning to manifold learning in STL, and from Latent Semantic Analysis used in text processing to recent kernel semantic learning in LTL, are discussed.
Keywords: Relevance Feedback, Short Term Learning, Long Term Learning, Semantic Gap, High Level Features
The project aims at the development of an efficient segmentation method for a CBIR system. Mean-shift segmentation generates a list of potential objects which are meaningful, and these objects are then clustered according to a predefined similarity measure. The method was tested on benchmark data and an F-score of 0.30 was achieved.
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN... (IJCSEIT Journal)
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on the eigenvalues, eigenvectors, and diagonal vectors of sub-images. Images are partitioned into sub-images to detect local features, and the sub-partitions are rearranged into vertical and horizontal matrices. Eigenvalues, eigenvectors and diagonal vectors are computed for these matrices, and a global feature vector is generated for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance in terms of average recognition rate and retrieval time compared to existing methods.
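The eigen-feature idea can be illustrated as follows; the block size, the magnitude ordering, and the concatenation layout are assumptions made for this sketch, not the paper's exact settings:

```python
import numpy as np

def local_eigen_features(img, block=4):
    """Feature vector from eigenvalue magnitudes and diagonals of square sub-images."""
    h, w = img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            sub = img[y:y + block, x:x + block].astype(float)
            # Sub-images are not symmetric in general, so eigenvalues may be
            # complex; keep their magnitudes, sorted in descending order.
            ev = np.sort(np.abs(np.linalg.eigvals(sub)))[::-1]
            feats.append(np.concatenate([ev, np.diag(sub)]))
    return np.concatenate(feats)
```

Two such global vectors can then be compared with any of the distance measures discussed in the abstract to rank candidate faces.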
Comprehensive Performance Comparison of Cosine, Walsh, Haar, Kekre, Sine, Sla... (CSCJournals)
The desire for better and faster retrieval techniques has always fuelled research in content based image retrieval (CBIR). An extended comparison of innovative content based image retrieval (CBIR) techniques based on feature vectors formed as fractional coefficients of transformed images using various orthogonal transforms is presented in the paper. Here a fairly large number of popular transforms are considered along with a newly introduced transform: the Discrete Cosine, Walsh, Haar, Kekre, Discrete Sine, Slant and Discrete Hartley transforms. The energy compaction property of these transforms is exploited to reduce the feature vector size per image by taking fractional coefficients of the transformed image. A smaller feature vector results in less time for comparison of feature vectors, and hence faster retrieval of images. The feature vectors are extracted in fourteen different ways from the transformed image: first all the coefficients of the transformed image are considered, and then fourteen reduced coefficient sets are considered as feature vectors (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012% and 0.06% of the complete transformed image coefficients). To extract Gray and RGB feature sets, the seven image transforms are applied on gray image equivalents and on the color components of images. These fourteen reduced coefficient sets for gray as well as RGB feature vectors are then used instead of all the coefficients of the transformed images, resulting in better performance and lower computations. The Wang image database of 1000 images spread across 11 categories is used to test the performance of the proposed CBIR techniques. 55 queries (5 per category) are fired on the database to find net average precision and recall values for all feature sets per transform for each proposed CBIR technique.
The results show performance improvement (higher precision and recall values) with fractional coefficients compared to the complete transform of the image, at reduced computation, resulting in faster retrieval. Finally, the Kekre transform surpasses all other discussed transforms in performance, with the highest precision and recall values for fractional coefficients (6.25% and 3.125% of all coefficients), and computation is lowered by 94.08% as compared to the Cosine, Sine or Hartley transforms.
Wavelet-Based Color Histogram on Content-Based Image Retrieval (TELKOMNIKA JOURNAL)
The growth of image databases in many domains, including fashion, biometrics, graphic design, architecture, etc., has increased rapidly. Content Based Image Retrieval (CBIR) is a technique for finding relevant images in such huge, unannotated image databases based on low-level features of the query images. In this study, an attempt to employ a 2nd-level Wavelet Based Color Histogram (WBCH) in a CBIR system is proposed. The image database used in this study is taken from Wang's image database containing 1000 color images. The experiment results show that 2nd-level WBCH gives better precision (0.777) than the other methods, including 1st-level WBCH, Color Histogram, Color Co-occurrence Matrix, and Wavelet texture features. It can be concluded that 2nd-level WBCH can be applied to a CBIR system.
Extended Performance Appraise of Image Retrieval Using the Feature Vector as ... (Waqas Tariq)
An extension of the content based image retrieval (CBIR) technique based on the row mean of transformed columns of an image is presented here. Compared to the earlier work, which considered three image transforms, the performance appraisal of the proposed CBIR technique is now done using seven different image transforms: the Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Hartley Transform, Haar Transform, Kekre Transform, Walsh Transform and Slant Transform. A generic image database with 1000 images spread across 11 categories is used to test the performance of the proposed CBIR techniques. For each transform, 55 queries (5 per category) were fired on the image database. Every technique is tested on both the color and grey versions of the image database. To compare the performance of the image retrieval technique across transforms, average precision and recall are computed over all queries. The results show performance improvement (higher precision and recall values) with the proposed methods compared to using all pixel data of the image, at reduced computation, resulting in faster retrieval in both the gray and color versions of the image database. The variations of including the DC component of the transformed columns in the feature vector and excluding it are also tested, and it is found that the presence of the DC component in the feature vector improves the image retrieval results. The ranking of transforms by performance in the proposed gray CBIR techniques with the DC component can be given as DST, Haar, Hartley, DCT, Walsh, Slant and Kekre. In the color variants of the proposed techniques with the DC component, the performance ranking of the image transforms starting from the best can be listed as DCT, Haar, Walsh, Slant, DST, Hartley and Kekre transform.
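The row-mean-of-transformed-columns feature can be sketched in a few lines. This is an illustrative reconstruction using the DCT (one of the seven transforms named above), not the authors' code: each column is transformed, and the coefficients are averaged across columns, so the feature length equals the image height rather than the pixel count.

```python
import numpy as np

def dct_1d(col):
    """Orthonormal DCT-II of a 1-D signal, via an explicit basis matrix."""
    n = len(col)
    C = np.cos(np.pi * np.outer(np.arange(n), np.arange(n) + 0.5) / n)
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2)
    return C @ col

def row_mean_feature(img, keep_dc=True):
    """Apply a 1-D transform to every column, then take the row mean."""
    coeffs = np.apply_along_axis(dct_1d, 0, img.astype(float))
    feat = coeffs.mean(axis=1)          # row mean of transformed columns
    return feat if keep_dc else feat[1:]
```

The `keep_dc` flag mirrors the DC-component variation tested in the abstract: dropping the first coefficient removes the average-brightness term from the signature.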
Hybrid Technique for Copy-Move Forgery Detection Using L*A*B* Color Space (IJEEE)
Copy-move forgery is applied to an image to hide a region or an object. Most detection techniques use either transform domain or spatial domain information to detect the forgery. This paper presents a hybrid method that makes use of both domains: the transform domain, in which SVD is used to extract the useful information from the image, and the spatial domain, in which the L*a*b* color space is used. A block-based approach and lexicographical sorting are used to group matching feature vectors. The obtained experimental results demonstrate that the proposed method efficiently detects copy-move forgery even when post-processing operations like blurring, noise contamination, and severe lossy compression are applied.
Performance Comparison of Density Distribution and Sector Mean of sal and cal... (CSCJournals)
In this paper we propose two different approaches for feature vector generation, with absolute difference as the similarity measuring parameter: sal-cal vector density distribution, and individual sector mean of the complex Walsh transform. The crossover-point performance of the overall average of precision and recall for both approaches on all applicable sector sizes is compared. The complex Walsh transform is conceived by multiplying the sal components by j = √-1. The density distribution of the real (cal) and imaginary (sal) values and the individual mean of the Walsh sectors in all three color planes are considered to design the feature vector. The algorithm proposed here works over a database of 270 images spread over 11 different classes. Overall average precision and recall are calculated for the performance evaluation and comparison of 4, 8, 12 and 16 Walsh sectors. The overall average of the crossover points of precision and recall for all methods in both approaches is compared. The use of absolute difference as the similarity measure always gives lower computational complexity, and the individual sector mean approach to the feature vector gives the best retrieval.
Web Image Retrieval Using Visual Dictionary (ijwscjournal)
In this research, we propose a semantic-based image retrieval system to retrieve a set of relevant images for a given query image from the Web. We use a global color space model and the Dense SIFT feature extraction technique to generate a visual dictionary using the proposed quantization algorithm. The images are transformed into sets of features, which are used as inputs to our quantization algorithm for generating the code words that form the visual dictionary. These codewords are used to represent images semantically, forming visual labels using Bag-of-Features (BoF). The histogram intersection method is used to measure the distance between the input image and the set of images in the image database to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
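The histogram intersection measure mentioned above has a one-line form: the sum of bin-wise minima of the two normalised histograms, giving 1.0 for identical distributions and 0.0 for disjoint ones. A minimal sketch (illustrative only):

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection similarity between two histograms.

    Both inputs are normalised to sum to 1 first, so the result lies in [0, 1].
    """
    h1 = np.asarray(h1, float)
    h2 = np.asarray(h2, float)
    return np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum()
```

Ranking database images by this similarity against the query's BoF histogram is enough to produce a retrieval ordering.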
A COMPARATIVE ANALYSIS OF RETRIEVAL TECHNIQUES IN CONTENT BASED IMAGE RETRIEVAL (cscpconf)
A basic group of visual features such as color, shape and texture is used in Content Based Image Retrieval (CBIR) to match a query image, or a sub-region of an image, against similar images in an image database. To improve query results, relevance feedback is often used in CBIR to help users express their preferences. In this paper, a new approach for image retrieval is proposed which is based on features such as Color Histogram, Eigen Values and Match Point. Images from various types of databases are first identified using edge detection techniques. Once an image is identified, it is searched for in the particular database and all related images are displayed, which saves retrieval time. Further, to retrieve the precise query image, any of the three techniques can be used, and a comparison is done with respect to average retrieval time. The Eigen value technique was found to be the best compared with the other two techniques.
SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATURES (cscpconf)
In Content Based Image Retrieval (CBIR), problems such as recognizing similar images, the need for databases, the semantic gap, and retrieving the desired images from huge collections are the keys to improve. A CBIR system analyzes image content for indexing, management, extraction and retrieval via low-level features such as color, texture and shape. To achieve higher semantic performance, recent systems seek to combine the low-level features of images with high-level features that contain perceptual information for human beings. Performance improvements in indexing and retrieval play an important role in providing advanced CBIR services. To overcome the above problems, a new query-by-image technique using a combination of multiple features is proposed. The proposed technique efficiently sifts through the dataset of images to retrieve semantically similar images.
Image search using similarity measures based on circular sectors (csandit)
With the growing number of stored image data, the image search and image similarity problems become more and more important. They can be addressed by Content-Based Image Retrieval systems. This paper deals with an image search using similarity measures based on the circular sectors method. The method is inspired by human eye functionality. The main contribution of the paper is a modified method that increases accuracy by about 8% in comparison with the original approach. The proposed method uses the HSB colour model and a median function for feature extraction, whereas the original approach uses the RGB colour model with a mean function. The implemented method was validated on 10 image categories, where the overall average precision was 67%.
Content Based Image Retrieval (CBIR) is one of the most active areas in the current research field of multimedia retrieval. It retrieves images from large databases based on image features like color, texture and shape. In this paper, image retrieval based on multi-feature fusion is achieved using color and texture features, and the similarity measures are investigated. Color feature extraction uses the quadratic distance, while texture features are obtained using pyramid-structured wavelet transforms and the gray level co-occurrence matrix. We compare all these methods to find the best image retrieval.
Evaluation of Euclidean and Manhattan Metrics in Content Based Image Retriev... (IJERA Editor)
Content-based image retrieval is all about generating signatures of the images in a database and comparing the signature of the query image with these stored signatures. A color histogram can be used as the signature of an image, and used to compare two images under a certain distance metric. The distance metrics Manhattan distance (L1 norm) and Euclidean distance (L2 norm) are used to determine similarities between a pair of images. In this paper, the Corel database is used to evaluate the performance of the Manhattan and Euclidean distance metrics. The experimental results showed that the Manhattan distance gave a better precision rate than the Euclidean distance metric. The evaluation is made using a content-based image retrieval application developed using color moments of the Hue, Saturation and Value (HSV) of the image, with Gabor descriptors adopted as texture features.
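The two metrics being compared are straightforward to state over histogram signatures; a minimal sketch (illustrative, not the paper's application code):

```python
import numpy as np

def manhattan(h1, h2):
    """L1-norm distance between two feature vectors: sum of absolute differences."""
    return np.abs(np.asarray(h1, float) - np.asarray(h2, float)).sum()

def euclidean(h1, h2):
    """L2-norm distance between two feature vectors: root of summed squares."""
    d = np.asarray(h1, float) - np.asarray(h2, float)
    return np.sqrt((d * d).sum())
```

Because L1 weights every bin difference equally while L2 emphasises large single-bin differences, the two metrics can rank the same candidate images differently, which is exactly the effect the evaluation above measures.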
WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHES (cscpconf)
Image retrieval is an active area in which to propose new approaches to retrieving images from large image databases. To this end, we propose an algorithm to represent images using divisive-based and partition-based clustering approaches. The HSV color components and the Haar wavelet transform are used to extract image features, which are then used to segment an image into objects. For segmenting an image, we use a modified k-means clustering algorithm to group similar pixels together into K groups with cluster centers. To modify k-means, we propose a divisive clustering algorithm that determines the number of clusters and passes it back to k-means to obtain significant object groups. In addition, we also discuss the similarity distance measure, using a threshold value and object uniqueness to quantify the results.
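The plain k-means baseline that the modified algorithm builds on can be sketched as follows. This is illustrative only: the divisive step that chooses K automatically is omitted, and the deterministic spread initialisation is an assumption made for reproducibility, not the paper's method.

```python
import numpy as np

def kmeans_segment(pixels, k, iters=20):
    """Plain k-means over pixel feature rows (e.g. HSV triples).

    Returns (labels, centers): a cluster index per pixel and the k centers.
    """
    pixels = np.asarray(pixels, float)
    # Deterministic spread initialisation (an assumption for this sketch).
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each non-empty cluster's center as the mean of its members.
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

Running this on the per-pixel HSV features described above yields the K pixel groups that the segmentation step turns into candidate objects.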
Dr. H. B. Kekre, Kavita Sonawane
International Journal of Image Processing (IJIP), Volume (6) : Issue (3) : 2012
Effect of Similarity Measures for CBIR Using Bins Approach
Dr. H. B Kekre hbkekre@yahoo.com
Professor,Department of Computer Engineering
NMIMS University,
Mumbai, Vileparle 056, India
Kavita Sonawane kavitavinaysonawane@gmail.com
Ph .D Research Scholar
NMIMS University,
Mumbai, Vileparle 056, India
Abstract
This paper elaborates on the selection of a suitable similarity measure for content based image
retrieval. It contains the analysis done after applying the Minkowski distance from order one to
order five. It also explains the effective use of the correlation distance, expressed as the angle
'cos θ' between two vectors. The feature vector database prepared for this experimentation is
based on extraction of the first four moments into 27 bins formed by partitioning the equalized
histograms of the R, G and B planes of an image into three parts. This generates a feature vector
of dimension 27. The image database used in this work includes 2000 BMP images from 20
different classes. Feature vector databases of the four moments, namely Mean, Standard
deviation, Skewness and Kurtosis, are prepared for the three color intensities (R, G and B)
separately. The system then enters the second phase of comparing the query image with the
database images, which makes use of the set of similarity measures mentioned above. Results
obtained using all distance measures are then evaluated using three parameters: PRCP, LSRR
and Longest String. The results are then refined and narrowed by combining the three different
results of the three colors R, G and B using a combining criterion. Analysis of these results with
respect to similarity measures shows the effectiveness of the lower orders of the Minkowski
distance as compared to the higher orders. The correlation distance also proved among the best
for these CBIR results.
Keywords: Equalized Histogram, Minkowski Distance, Cosine Correlation Distance, Moments,
LSRR, Longest String, PRCP.
1. INTRODUCTION
Research work in the field of CBIR systems is growing in various directions, covering the various
stages of CBIR such as types of feature vectors, feature extraction techniques, representation of
feature vectors, application of similarity measures, performance evaluation parameters,
etc. [1][2][3][4][5][6]. Many approaches have been designed in the frequency domain, such as the
application of various transforms over the entire image, over blocks of images, or over the row
and column vectors of images, as well as Fourier descriptors and various other transform-based
ways to extract and represent the image features [7][8][9][10][11][12]. Similarly, many methods
are being designed and implemented in the spatial domain. These include the use of image
histograms, color coherence vectors, vector quantization based techniques and many other
spatial feature extraction methods for CBIR [13][14][15][16][17]. In our work we have prepared
the feature vector databases using spatial properties of the image in the form of statistical
parameters, i.e. the moments Mean, Standard deviation, Skewness and Kurtosis. These moments
are extracted into 27 bins formed by partitioning the equalized histograms of the R, G and B
planes of the image into 3 parts [18][19][20]. The core part of any CBIR system is calculating the
distance between the query image and the database images, which has a great impact on the
behavior of the CBIR system, as it decides the set of images to be retrieved in the final retrieval
set. Various
similarity measures are available and can be used for CBIR [21][22][23][24]. The most commonly
used similarity measure seen in the literature survey of CBIR is the Euclidean distance. Here we
have used the Minkowski distance from order one to order five, where we found that the
performance of the system improves as the order of the Minkowski distance decreases (from 5 to
1). One more similarity measure used in this work is the Cosine Correlation
distance [25][26][27][28], which has also proved best after Minkowski order one. The performance
of various CBIR methods in both the frequency and spatial domains can be evaluated using
parameters like precision, recall, LSRR (Length of String to Retrieve all Relevant) and various
others [29][30][31][32][33]. In this paper we use three parameters, PRCP, LSRR and 'Longest
String', to evaluate the performance of our system for all the similarity measures used and for all
types of feature vectors for the three colors R, G and B. We found scope to combine the results
obtained separately for the three feature vector databases based on the three colors. This
refinement is achieved using a criterion designed to combine the results of the three colors,
which selects an image into the final retrieval set even if it is retrieved in the result set of only one
of these three colors [11][12].
2. ALGORITHMIC VIEW WITH IMPLEMENTATION DETAILS
2.1 Bins Formation by Partitioning the Equalized Histogram of R, G, B Planes
i. First we have separated the image into R, G and B Planes and calculated the equalized
histogram for each plane as shown below.
ii. These histograms are then partitioned into three parts with ids '0', '1' and '2'. This
partitioning generates two thresholds for the intensities distributed across the x-axis of the
histogram for each plane. We have named these thresholds, or partition boundaries,
GL1 and GL2, as shown in Figure 2.
FIGURE 1: Query Image: Kingfisher
FIGURE 2: Equalized Histograms of R, G and B Planes With Three partitions ‘0’, ‘1’ and ‘2’.
iii. Determination of bin address: To determine the destination bin for the pixel being
processed during feature extraction, we check in which partition ('0', '1' or '2') of the
respective equalized histogram its R, G and B intensities fall. A 3-digit flag is then
assigned to that pixel, which is itself its destination bin address. In this way we obtain
bin addresses 000 to 222, a total of 27, by dividing each histogram into 3 parts.
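The bin-address step above can be sketched as follows (a minimal sketch with hypothetical helper names; the thresholds GL1 and GL2 would come from partitioning the equalized histograms, and the boundary handling here is assumed):

```python
# Sketch of the bin-address assignment (hypothetical helper names;
# GL1/GL2 thresholds per plane are assumed inputs).
def partition_id(intensity, gl1, gl2):
    """Return partition 0, 1 or 2 for one color intensity."""
    if intensity < gl1:
        return 0
    elif intensity < gl2:
        return 1
    return 2

def bin_address(r, g, b, thresholds):
    """Map one pixel's (R, G, B) intensities to a bin index 0..26.

    thresholds is {'R': (gl1, gl2), 'G': (gl1, gl2), 'B': (gl1, gl2)}.
    The 3-digit flag (r_id, g_id, b_id) is read as a base-3 number,
    so flags '000'..'222' cover all 27 bins.
    """
    r_id = partition_id(r, *thresholds['R'])
    g_id = partition_id(g, *thresholds['G'])
    b_id = partition_id(b, *thresholds['B'])
    return r_id * 9 + g_id * 3 + b_id

# Example: with illustrative thresholds 85 and 170 for all planes,
# a pixel (200, 100, 30) gets flag '210', i.e. bin 2*9 + 1*3 + 0 = 21.
thr = {'R': (85, 170), 'G': (85, 170), 'B': (85, 170)}
print(bin_address(200, 100, 30, thr))  # 21
```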
2.2 Statistical Information Stored in 27 Bins: Mean, Standard Deviation, Skewness and
Kurtosis
Basically, each bin holds the count of pixels falling in a particular range. These bins are further
used to hold statistical information in the form of the first four moments for each color separately.
These moments are calculated over the pixel intensities falling into each bin using the following
equations 1 to 4 respectively.
Mean:
\[ \bar{R} = \frac{1}{N}\sum_{i=1}^{N} R_i \qquad (1) \]
Standard deviation:
\[ SD_R = \left[\frac{1}{N}\sum_{i=1}^{N}\left(R_i - \bar{R}\right)^2\right]^{1/2} \qquad (2) \]
Skewness:
\[ Skew_R = \left[\frac{1}{N}\sum_{i=1}^{N}\left|R_i - \bar{R}\right|^3\right]^{1/3} \qquad (3) \]
Kurtosis:
\[ Kurto_R = \left[\frac{1}{N}\sum_{i=1}^{N}\left|R_i - \bar{R}\right|^4\right]^{1/4} \qquad (4) \]
where \(\bar{R}\) is Bin_Mean_R in equations 1 to 4, N is the number of pixels falling into the bin,
and \(R_i\) are their red intensities; the G and B planes are treated analogously.
These bins hold the absolute values of the central moments, and in this way we obtain
4 moments × 3 colors = 12 feature vector databases, where each feature vector consists of 27
components. Figure 3 below shows the sample 27 bins of the R, G and B colors for the Mean
parameter, for the Kingfisher image shown in Figure 1.
FIGURE 3: 27 Bins of R, G and B Colors for MEAN Parameter.
In Figure 3 above we can observe that bins 3, 7, 8, 9, 12, 18, 20, 21 and 24 are empty
because the count of pixels falling into those bins is zero for this image.
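The four moments stored per bin can be sketched as below (a minimal sketch with an assumed helper name; following the paper's note that the bins hold absolute values of the central moments, the odd moments use absolute deviations):

```python
# Sketch of the per-bin moments of equations (1)-(4): mean, standard
# deviation, skewness and kurtosis of the intensities in one bin.
def bin_moments(intensities):
    n = len(intensities)
    mean = sum(intensities) / n                       # eq. (1)
    def central(p):
        # p-th root of the p-th absolute central moment
        m = sum(abs(x - mean) ** p for x in intensities) / n
        return m ** (1.0 / p)
    sd = central(2)                                   # eq. (2)
    skew = central(3)                                 # eq. (3)
    kurt = central(4)                                 # eq. (4)
    return mean, sd, skew, kurt

# Example: illustrative intensities of the red pixels in one of the 27 bins.
m, sd, sk, ku = bin_moments([10, 20, 30, 40])
print(round(m, 2), round(sd, 2))  # 25.0 11.18
```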
2.3 Application of Similarity Measures
Once the feature vector databases are ready we can fire the desired query to retrieve similar
images from the database. To facilitate this, the retrieval system has to perform the important
task of applying a similarity measure, so that the distance between the query image and each
database image is calculated and the images having smaller distances are retrieved into the final
set. In this work we use 6 similarity measures, named L1 to L6: the Minkowski distance from
order 1 to order 5 (L1 to L5) and the Cosine Correlation distance (L6). We have analyzed their
performance using different evaluation parameters. These similarity measures are given in
equations 5 and 6.
Minkowski Distance:
\[ Dist_{DQ} = \left(\sum_{I=1}^{n} \left|D_I - Q_I\right|^{r}\right)^{1/r} \qquad (5) \]
where r is a parameter, n is the dimension, and I indexes the components of the database and
query image feature vectors D and Q respectively.

Cosine Correlation Distance:
\[ \cos\theta = \frac{\sum_{n} D(n)\,Q(n)}{\sqrt{\sum_{n} D(n)^2}\;\sqrt{\sum_{n} Q(n)^2}} \qquad (6) \]
where D(n) and Q(n) are the database and query feature vectors respectively.
Minkowski Distance: Here the parameter 'r' can be taken from 1 to ∞. We have used this
distance with 'r' in the range from 1 to 5. When r = 2 it is a special case called the Euclidean
distance (L2).
Cosine Correlation Distance: This can be expressed in terms of cos θ.
FIGURE 4 : Comparison of Euclidean and Cosine Correlation Distance
Observation: ed2 > ed1, but ed1’ > ed2’
Correlation measures in general are invariant to scale transformations and tend to give the
similarity measure for those feature vectors whose values are linearly related. In Figure 4 the
Cosine Correlation distance is compared with the Euclidean distance. We can clearly notice that
for the Euclidean distance ed2 > ed1 between the query image QI and the two database image
features DI1 and DI2 respectively. At the same time we can see that θ1 > θ2, i.e. the distance L6
for DI1 and DI2 respectively for QI.
If we scale the query feature vector by a simple constant factor k it becomes k·QI; now if we
calculate the Euclidean distances of DI1 and DI2 from the query k·QI we get ed1’ and ed2’,
whose relation is ed1’ > ed2’, which is exactly opposite to what we had for QI. But the cosine
correlation distance does not change even though we have scaled up the query feature vector to
k·QI. This clearly states that the Euclidean distance varies with the scale of the feature vector,
whereas the cosine correlation distance is invariant to this scale transformation. This property of
the correlation distance motivated us to make use of it for our CBIR. It has rarely been used for
CBIR systems, and here we found very good results for this similarity measure as compared to
the Euclidean distance and the higher orders of the Minkowski distance.
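The six measures and the scale-invariance argument above can be sketched as follows (a minimal sketch; the vector values are illustrative):

```python
# Sketch of the L1-L6 measures of equations (5) and (6), plus a check of
# the scale-invariance property of the cosine correlation.
def minkowski(d, q, r):
    """Minkowski distance of order r (r=1: absolute, r=2: Euclidean)."""
    return sum(abs(di - qi) ** r for di, qi in zip(d, q)) ** (1.0 / r)

def cosine_similarity(d, q):
    """cos(theta) between feature vectors; larger means more similar."""
    dot = sum(di * qi for di, qi in zip(d, q))
    norm = sum(x * x for x in d) ** 0.5 * sum(x * x for x in q) ** 0.5
    return dot / norm

qi = [1.0, 2.0, 3.0]          # illustrative query feature vector QI
di = [2.0, 4.0, 6.5]          # illustrative database feature vector
k = 10.0
scaled = [k * x for x in qi]  # k . QI

# Euclidean distance changes when the query is scaled...
print(minkowski(di, qi, 2), minkowski(di, scaled, 2))
# ...but the cosine correlation is unaffected by the scaling.
print(round(cosine_similarity(di, qi), 6) == round(cosine_similarity(di, scaled), 6))  # True
```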
2.4 Performance Evaluation
Results obtained here are interpreted in terms of PRCP: Precision Recall Cross over Point.
This parameter is designed using the conventional parameters precision and recall, defined in
equations 7 and 8.
Once the distance is calculated between the query image and the database images, these
distances are sorted in ascending order. According to the PRCP logic we select the first 100
images from the sorted distances and count how many of them are relevant to the query; this
count is the PRCP value for that query, because we have a total of 100 images of each class in
our database.
Precision: Precision is the fraction of the retrieved images which are relevant (out of all
retrieved):
\[ Precision = \frac{\text{number of relevant images retrieved}}{\text{total number of images retrieved}} \qquad (7) \]
Recall: Recall is the fraction of the relevant images which have been retrieved (out of all
relevant):
\[ Recall = \frac{\text{number of relevant images retrieved}}{\text{total number of relevant images in the database}} \qquad (8) \]
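With 100 images per class and a cut-off of the first 100 retrievals, precision and recall share the same denominator, which is why the cross-over point arises; a minimal sketch (the set contents below are illustrative image ids):

```python
# Sketch of precision, recall and the PRCP cut-off described above.
def precision(retrieved, relevant):
    return len(retrieved & relevant) / len(retrieved)

def recall(retrieved, relevant):
    return len(retrieved & relevant) / len(relevant)

# With 100 images per class, taking the first 100 sorted distances makes
# the denominators equal, so precision == recall: the cross-over point.
relevant = set(range(100))                            # the query class
retrieved = set(range(50)) | set(range(1000, 1050))   # first 100 retrievals
p, r = precision(retrieved, relevant), recall(retrieved, relevant)
print(p, r)  # 0.5 0.5
```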
The performance of this system is further evaluated using two more interesting parameters,
about which all CBIR users will always be curious: LSRR (Length of String to Retrieve all
Relevant) and Longest String (the longest continuous string of relevant images).
3. EXPERIMENTAL RESULTS AND DISCUSSIONS
In this work the analysis is done to check the performance of similarity measures for CBIR using
the bins approach. The results presented therefore highlight the comparative study of the
different similarity measures named L1 to L6, as mentioned in the discussion above.
3.1 Image Database and Query Images
The database used for the experiments contains 2000 BMP images: 100 images from each of
20 different classes. Sample images from the database are shown in Figure 5. We have
randomly selected 10 images from each class to be given as queries to the system under test. In
all, 200 queries are executed for each feature vector database and for each similarity measure.
We have already shown one sample query image in Figure 1, i.e. the Kingfisher image, for which
the bin formation, that is, the feature extraction process, is explained thoroughly in sections 2.1
and 2.2.
3.2 Discussion With Respect to PRCP
As discussed above, the feature vector databases containing 27-bin feature vectors for the four
absolute moments, namely Mean, Standard deviation, Skewness and Kurtosis, for the Red,
Green and Blue colors separately, are tested with 200 query images for the six similarity
measures, and the results obtained are given in the following tables. Tables 1 to 12 show the
results obtained for the parameter PRCP, i.e. Precision Recall Cross over Point, for 10 queries
from each class. Each entry in a table represents the total retrieval (out of 1000 outputs) of
relevant images in terms of PRCP for the 10 queries of the particular class
FIGURE 5 : 20 Sample Images from database of 2000 BMP images having 20 classes
TABLE 2: PRCP FOR GREEN MEAN FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 258 214 182 173 165 239
Sunset 714 664 633 614 610 674
Mountain 147 127 121 124 126 124
Building 189 158 149 134 133 158
Bus 421 308 247 236 223 307
Dinosaur 223 189 168 163 160 200
Elephant 176 127 107 102 103 127
Barbie 537 503 486 478 468 463
Mickey 243 225 212 205 203 237
Horses 331 303 290 279 272 310
Kingfisher 350 314 286 282 286 321
Dove 199 188 179 170 166 190
Crow 147 136 120 117 115 110
Rainbowrose 652 613 590 563 555 647
Pyramids 172 138 114 110 106 132
Plates 240 215 198 169 156 210
Car 242 247 250 252 263 272
Trees 263 221 205 185 167 227
Ship 302 289 285 270 266 294
Waterfall 226 182 175 162 157 191
Total 6032 5361 4997 4788 4700 5433
mentioned in the leftmost column of the tables. The last row of each table represents the total
PRCP retrieval out of 20,000 for the 200 queries. When we observe the individual entries in the
tables, that is the totals over 10 queries, for many classes with respect to distances L1 and L6
we found very good PRCP values, averaging between 0.5 and 0.8 over 10 queries, which is
quite a good achievement. We can say that both precision and recall reach a good height, which
is difficult in the field of CBIR for large databases. Further, we planned to improve these results,
not limiting ourselves to the average over 10 queries but aiming at the average over 200 queries.
To obtain this refinement we combined the results obtained for the three colors separately into a
single result set by applying the criterion explained below.
Criterion: An image is retrieved into the final set if it is retrieved in the result of any one of the
colors R, G and B.
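The criterion above can be sketched as a simple set union over the per-color retrieval sets (a minimal sketch; the image ids are illustrative):

```python
# Sketch of the combining criterion: an image enters the final set if it
# appears in the retrieval set of any one of the three color planes.
def combine_rgb(red_set, green_set, blue_set):
    """Union of the per-color retrieval sets (of image ids)."""
    return red_set | green_set | blue_set

# Toy example with image ids retrieved per color plane.
r = {1, 2, 3}
g = {2, 4}
b = {5}
print(sorted(combine_rgb(r, g, b)))  # [1, 2, 3, 4, 5]
```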
By applying this criterion to all the results obtained for the three colors and the four moments
given in Tables 1 to 12, we have improved the system's performance to a very good extent for
the average over 200 queries for the moments Mean and Standard Deviation, with similarity
measures L1, L6, L2 and L3 in increasing order. The results obtained are shown in Chart 1,
where we can see that the best average PRCP value over 200 queries we could obtain is 0.5.
CHART 1: Results using the criterion to combine the R, G and B color results for L1 to L6
TABLE 1: PRCP FOR RED MEAN FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 388 321 264 225 198 357
Sunset 764 707 603 522 461 727
Mountain 144 116 117 112 110 114
Building 177 165 161 163 161 162
Bus 512 474 439 414 407 472
Dinosaur 251 202 171 152 145 192
Elephant 157 128 124 119 120 133
Barbie 517 483 474 438 432 504
Mickey 305 308 301 302 300 314
Horses 285 230 194 177 173 214
Kingfisher 300 258 235 223 215 268
Dove 207 194 196 185 178 187
Crow 177 169 183 183 185 106
Rainbowrose 643 618 596 585 575 638
Pyramids 186 141 114 121 121 135
Plates 238 199 176 163 142 197
Car 134 111 104 93 91 105
Trees 283 239 231 213 206 242
Ship 327 276 256 252 244 249
Waterfall 281 214 195 190 191 205
Total 6276 5553 5134 4832 4655 5521
4. PERFORMANCE EVALUATION USING LONGEST STRING AND LSRR
PARAMETERS
Along with the conventional parameters precision and recall used for CBIR, we have evaluated
the system performance using two additional parameters, namely Longest String and LSRR. As
discussed in section 2.4, CBIR users will always be curious about the maximum continuous
string of relevant images in the retrieval set, which is measured by the Longest String parameter.
LSRR gives the performance of the system in terms of the length of the sorted distance list of all
database images that must be traversed to collect all relevant images of the query class.
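The two parameters can be sketched as follows (a minimal sketch; `relevance` is an assumed boolean list over the database images sorted by ascending distance from the query):

```python
# Sketch of the two additional evaluation parameters.
def longest_string(relevance):
    """Longest continuous run of relevant images in the retrieval order."""
    best = run = 0
    for rel in relevance:
        run = run + 1 if rel else 0
        best = max(best, run)
    return best

def lsrr(relevance):
    """Fraction of the sorted list traversed until every relevant image
    of the query class has been collected (lower is better)."""
    total = sum(relevance)
    seen = 0
    for pos, rel in enumerate(relevance, start=1):
        seen += rel
        if seen == total:
            return pos / len(relevance)

# Toy example: a database of 10 images, 3 of them relevant to the query.
rel = [True, True, False, True] + [False] * 6
print(longest_string(rel), lsrr(rel))  # 2 0.4
```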
4.1 Longest String
This parameter is plotted through various charts. We have 12 different feature vector databases,
prepared for the 4 moments for each of the three colors separately. We have calculated the
longest string for all 12 database results, but the plots show the maximum longest string obtained
for each class for distances L1 to L6, irrespective of the three colors; this gives a total of 4 sets of
results, plotted in charts 2, 3, 4 and 5 for the first four moments respectively. Among these, a few
classes like Sunset, Rainbow rose, Barbie, Horses and Pyramids give very good results: more
than 60 as the maximum longest string of relevant images retrieved. In the result bars of all
graphs we can notice that L1 and L6 reach a good height of similarity retrieval.
TABLE 5 : PRCP FOR GREEN STANDARD DEV.
CLASS L1 L2 L3 L4 L5 L6
Flower 320 352 332 319 296 376
Sunset 802 794 771 746 729 789
Mountain 243 249 236 225 223 238
Building 310 312 306 303 297 283
Bus 463 430 392 367 346 465
Dinosaur 359 358 347 338 328 304
Elephant 321 335 333 334 334 328
Barbie 461 416 401 395 385 430
Mickey 239 238 217 210 210 241
Horses 523 470 412 374 352 473
Kingfisher 368 389 363 353 348 383
Dove 355 307 270 243 238 315
Crow 238 211 192 192 187 120
Rainbowrose 647 652 624 590 577 708
Pyramids 351 350 334 323 319 174
Plates 345 345 330 317 311 370
Car 323 355 354 343 339 389
Trees 295 274 269 265 258 270
Ship 378 342 316 306 304 377
Waterfall 421 423 410 403 407 412
Total 7762 7602 7209 6946 6788 7445
TABLE 6 : PRCP FOR BLUE STANDARD DEV.
CLASS L1 L2 L3 L4 L5 L6
Flower 315 324 319 318 315 325
Sunset 696 593 529 483 462 630
Mountain 210 204 217 212 212 209
Building 224 214 194 191 183 196
Bus 480 484 474 439 422 531
Dinosaur 318 298 278 273 271 261
Elephant 228 252 257 256 259 245
Barbie 454 363 319 284 264 381
Mickey 222 213 199 196 190 229
Horses 453 446 425 404 403 445
Kingfisher 322 336 333 321 318 333
Dove 352 334 300 280 262 338
Crow 208 165 160 158 152 109
Rainbowrose 615 619 599 587 558 687
Pyramids 242 238 232 228 226 196
Plates 263 261 255 251 246 290
Car 227 218 211 195 187 250
Trees 253 228 215 200 191 227
Ship 414 402 387 375 367 435
Waterfall 273 258 247 246 239 260
Total 6769 6450 6150 5897 5727 6577
4.2 LSRR
Similar to Longest String, the parameter LSRR is also used to evaluate the performance of the
12 feature vector databases. As said earlier, it gives the length we need to travel in the string of
distances sorted in ascending order to collect all the database images relevant to the query
image, i.e. of the query class. According to this logic, the value of LSRR should be as low as
possible, so that with minimum traversal length and less time we can recall all the relevant
images from the database. The results obtained for this parameter are the minimum values of
LSRR, in percentage terms, calculated for all 12 feature vector databases for 200 query images
with respect to all six similarity measures. Chart 6 shows the best, i.e. minimum, LSRR for each
class of image over all distance measures L1 to L6, irrespective of the three colors and four
moments.
CHART 2: Maximum longest string results for the Mean parameter with 27 bins
TABLE 7 : PRCP FOR RED SKEWNESS
CLASS L1 L2 L3 L4 L5 L6
Flower 268 221 197 193 183 232
Sunset 646 578 524 495 482 635
Mountain 209 200 185 177 169 200
Building 223 211 199 182 176 214
Bus 422 411 391 380 369 429
Dinosaur 347 334 317 304 293 283
Elephant 246 271 280 281 277 237
Barbie 482 406 350 312 290 393
Mickey 245 249 241 237 226 229
Horses 399 389 350 313 303 391
Kingfisher 365 376 348 321 304 390
Dove 335 350 354 349 343 384
Crow 167 142 139 139 141 123
Rainbowrose 359 394 391 382 374 489
Pyramids 225 190 174 168 162 198
Plates 267 232 196 178 163 247
Car 155 161 157 152 148 225
Trees 296 279 260 248 247 225
Ship 342 297 268 256 249 311
Waterfall 362 352 332 319 309 263
Total 6360 6043 5653 5386 5208 6098
TABLE 8 : PRCP FOR GREEN SKEWNESS
CLASS L1 L2 L3 L4 L5 L6
Flower 375 361 319 291 275 379
Sunset 674 617 563 530 506 679
Mountain 216 203 191 184 186 205
Building 252 224 214 207 212 203
Bus 441 418 378 349 342 451
Dinosaur 293 257 230 220 210 200
Elephant 222 227 219 210 206 204
Barbie 459 450 451 450 446 436
Mickey 234 237 226 213 208 233
Horses 383 335 294 271 248 380
Kingfisher 327 356 354 354 343 355
Dove 349 336 316 305 300 370
Crow 181 161 146 143 137 134
Rainbowrose 508 540 519 500 481 577
Pyramids 282 298 284 273 268 153
Plates 237 236 228 218 211 246
Car 276 363 374 377 367 404
Trees 216 180 173 174 170 192
Ship 316 281 267 257 249 292
Waterfall 321 292 267 250 248 279
Total 6562 6372 6013 5776 5613 6372
CHART 6: Minimum LSRR results for L1 to L6, irrespective of color and moment
In the above chart we can observe that many classes perform well, meaning a minimum traversal
gives 100% recall for them; the classes giving the best results are Sunset, Bus, Horses,
Kingfisher and Pyramids. Among these the best is the Sunset class, where traversal of only 14%
of the 2000 images gives 100% recall for a sunset query with L6, and 20% with the L1 distance
measure.
We have shown the first few images from the PRCP result obtained for the Kingfisher query
image in Figure 6. This result is obtained for the feature vector Green Kurtosis with the L1
distance measure. We retrieved a total of 65 images as PRCP (from the first 100) for this query.
CONCLUSION
The 'Bins Approach' explained in this paper is new and simple in terms of computational
complexity for feature extraction. It is based on partitioning the histograms of the three color
planes. As each histogram is partitioned into 3 parts, we could form 27 bins. These bins are used
to extract the features of images in the form of four statistical moments, namely Mean, Standard
Deviation, Skewness and Kurtosis.
To facilitate the comparison of database and query images we have used two similarity
measures: the Minkowski distance and the Cosine correlation distance. We have used variations
of the Minkowski distance from order 1 to order 5, with nomenclature L1 to L5, while L6 is used
for the cosine correlation distance. Among these six distances, L1 and L6 give the best
performance as compared to the other, higher orders of the Minkowski distance. Here we have
seen that performance goes on decreasing with increase in the Minkowski order parameter 'r'
given in equation 5.
FIGURE 6: Query image and the first 46 images retrieved out of 65
Conventional CBIR systems are mostly designed with the Euclidean distance. We have shown
the effective use of two other similarity measures, the 'Absolute distance' and the 'Cosine
correlation distance'. The work presented in this paper has shown that AD and CD give far better
performance as compared to the commonly adopted conventional similarity measure, the
Euclidean distance. In all the tables with PRCP results we have highlighted the first two best
results, and after counting and comparing them we found that AD and CD are better in the
maximum number of cases as compared to ED.
A comparative study of the types of feature vectors based on moments shows that the even
moments perform better than the odd moments, i.e. standard deviation and kurtosis are better
than mean and skewness.
Observation of all performance evaluation parameters shows that the best value obtained for
PRCP is 0.8 as an average over 10 queries for many of the 20 classes. On combining the R, G,
B color results using the special criterion, the best value of PRCP works out to 0.5 as an average
over 200 queries, which is a very desirable performance for any CBIR. The maximum longest
string of relevant images is obtained for the classes Rainbow rose and Sunset; the value is
around 70 (out of 100) for the L1 and L6 distance measures, as shown in charts 3 and 5 for the
even moments. The minimum length traversed to retrieve all the relevant images from the
database, i.e. the best LSRR value, is 14% for L6 and 20% for L1 for the Sunset class.
We have also worked with 8 bins and 64 bins by dividing the equalized histogram into 2 and 4
parts respectively. However, the best results are obtained for 27 bins, which are presented here.
REFERENCES
[1] Yong Rui and Thomas S. Huang, "Image Retrieval: Current Techniques, Promising
Directions, and Open Issues", Journal of Visual Communication and Image Representation
10, 39-62 (1999).
[2] Dr. H. B. Kekre, Dhirendra Mishra, Anirudh Kariwala, "Survey of CBIR Techniques and
Semantics", International Journal of Engineering Science and Technology (IJEST),
Vol. 3, No. 5, May 2011.
[3] Raimondo Schettini, G. Ciocca, S. Zuffi, "A Survey of Methods for Color Image Indexing
and Retrieval in Image Databases".
www.intelligence.tuc.gr/~petrakis/courses/.../papers/color-survey.pdf
[4] Sameer Antani, Rangachar Kasturi, Ramesh Jain, "A survey on the use of pattern
recognition methods for abstraction, indexing and retrieval of images and video", Pattern
Recognition 35 (2002) 945-965.
[5] Hualu Wang, Ajay Divakaran, Anthony Vetro, Shih-Fu Chang, and Huifang Sun, "Survey of
compressed-domain features used in audio-visual indexing and analysis", Elsevier
Science (USA), 2003. doi:10.1016/S1047-3203(03)00019-1
[6] S. Nandagopalan, Dr. B. S. Adiga, and N. Deepak, "A Universal Model for Content-Based
Image Retrieval", World Academy of Science, Engineering and Technology 46, 2008.
[7] C. W. Ngo, T. C. Pong, R. T. Chin, "Exploiting image indexing techniques in DCT domain",
IAPR International Workshop on Multimedia Information Analysis and Retrieval.
[22] Zhi-Hua Zhou, Hong-Bin Dai, "Query-Sensitive Similarity Measure for Content-Based Image
Retrieval", ICDM '06: Proceedings of the Sixth International Conference on Data Mining,
IEEE Computer Society, Washington, DC, USA, 2006.
[23] Ellen Spertus, Mehran Sahami, Orkut Buyukkokten, "Evaluating Similarity Measures: A
Large-Scale Study in the Orkut Social Network", KDD '05, August 21-24, 2005.
http://doi.acm.org/10.1145/1081870.1081956
[24] Simone Santini and Ramesh Jain, "Similarity Measures", IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 21, No. 9, September 1999.
[25] John P. Van De Geer, "Some Aspects of Minkowski Distance", Department of Data Theory,
Leiden University, RR-95-03.
[26] Dengsheng Zhang and Guojun Lu, "Evaluation of Similarity Measurement for Image
Retrieval". www.gscit.monash.edu.au/~dengs/resource/papers/icnnsp03.pdf
[27] Gang Qian, Shamik Sural, Yuelong Gu, Sakti Pramanik, "Similarity between Euclidean and
cosine angle distance for nearest neighbor queries", SAC '04, March 14-17, 2004, Nicosia,
Cyprus.
[28] Sang-Hyun Park, Hyung Jin Sung, "Correlation Based Image Registration for Pressure
Sensitive Paint". flow.kaist.ac.kr/upload/paper/2004/SY2004.pdf
[29] Julia Vogel, Bernt Schiele, "Performance evaluation and optimization for content-based
image retrieval", Pattern Recognition Society, Elsevier Ltd., 2005.
doi:10.1016/j.patcog.2005.10.024
[30] Stéphane Marchand-Maillet, "Performance Evaluation in Content-based Image Retrieval:
The Benchathlon Network".
[31] Thomas Deselaers, Daniel Keysers, and Hermann Ney, "Classification Error Rate for
Quantitative Evaluation of Content-based Image Retrieval Systems".
http://www.cs.washington.edu/research/imagedatabase/groundtruth/ and
http://www-i6.informatik.rwthaachen.de/~deselaers/uwdb
[32] Dr. H. B. Kekre, Dhirendra Mishra, "Image Retrieval using DST and DST Wavelet
Sectorization", (IJACSA) International Journal of Advanced Computer Science and
Applications, Vol. 2, No. 6, 2011.
[33] Dr. H. B. Kekre, Kavita Sonawane, "Image Retrieval Using Histogram Based Bins of Pixel
Counts and Average of Intensities", (IJCSIS) International Journal of Computer Science
and Information Security, Vol. 10, No. 1, 2012.