This document presents a new algorithm for content-based image retrieval (CBIR) based on graph cut theory and local binary patterns (LBP). The algorithm calculates nine LBP histograms from each image by comparing each pixel to its neighbors; the histograms are then used as a feature vector for image retrieval. Two experiments on standard databases show the new algorithm improves retrieval accuracy over LBP and other transform-based techniques. The document provides background on CBIR techniques, an overview of LBP for texture description, and describes how the new graph cut-based LBP calculates histograms for image retrieval.
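The basic LBP step the abstract refers to, comparing each pixel against its 8 neighbors and histogramming the resulting codes, can be sketched as follows. This is a minimal plain-LBP sketch, not the paper's graph-cut variant:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 LBP: compare each interior pixel to its 8 neighbors,
    pack the comparisons into an 8-bit code, and histogram the codes."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbors, clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalized 256-bin histogram
```

On a perfectly flat image every neighbor comparison succeeds, so all mass lands in the code-255 bin; on textured images the histogram spreads out and serves as the feature vector.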
3. [18-30] Graph Cut Based Local Binary Patterns for Content Based Image Retrieval (Alexander Decker)
This document presents a new algorithm for content-based image retrieval (CBIR) based on graph cut theory and local binary patterns (LBP). The algorithm calculates nine LBP histograms from a 3x3 pattern by comparing each node to all other nodes. These histograms are used as a feature vector for image retrieval. Two experiments on texture databases show the algorithm improves retrieval accuracy over LBP and other transform techniques.
11. Graph Cut Based Local Binary Patterns for Content Based Image Retrieval (Alexander Decker)
This document presents a new algorithm for content-based image retrieval (CBIR) based on graph cut theory and local binary patterns (LBP). The algorithm extracts nine LBP histograms from each image as features by comparing each pixel in a 3x3 pattern to the other pixels using graph cut theory. Two experiments on texture databases show the proposed Graph Cut Local Binary Patterns (GCLBP) algorithm achieves significantly better retrieval accuracy than LBP and other transform-based methods, as measured by average retrieval precision and rate.
3. [13-21] Framework of Smart Mobile RFID Networks (Alexander Decker)
This document presents a new algorithm for content-based image retrieval (CBIR) based on graph cut theory and local binary patterns (LBP). The algorithm calculates nine LBP histograms from a 3x3 pattern by comparing each node to all other nodes. These histograms are used as a feature vector for image retrieval. Two experiments on the Brodatz and MIT VisTex databases show the algorithm improves retrieval accuracy over LBP and other transform domain techniques.
Research Inventy: International Journal of Engineering and Science (researchinventy)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open-access journal, available online and in print, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic, and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days after acceptance, and the peer-review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
This document summarizes an evaluation of texture feature extraction methods for content-based image retrieval, including co-occurrence matrices, Tamura features, and Gabor filters. The evaluation tested these methods on a Corel image collection using Manhattan distance as the similarity measure. Co-occurrence matrices performed best with homogeneity as the feature, while Gabor wavelets showed better performance for homogeneous textures of fixed sizes. Tamura features performed poorly with directionality. Overall, co-occurrence matrices provided the best results for general texture retrieval.
Content Based Image Retrieval Based on Shape with Texture Features (Alexander Decker)
This document describes a content-based image retrieval system that extracts shape and texture features from images. It uses the HSV color space and wavelet transform for feature extraction. Color features are extracted by quantizing the H, S, and V components of HSV into unequal intervals based on human color perception. Texture features are extracted using wavelet transforms. The color and texture features are then combined to form a feature vector for each image. During retrieval, the similarity between a query image and images in the database is measured using the Euclidean distance between their feature vectors. The results show that retrieving images using HSV color features provides more accurate results and faster retrieval times compared to using RGB color features.
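The unequal HSV quantization described above can be sketched as follows. The interval boundaries and bin counts below are illustrative assumptions; the paper's exact values are not reproduced here:

```python
import colorsys

# Hypothetical unequal hue boundaries (degrees); perceptually similar
# hues (e.g. the wide green band) share one bin.
HUE_EDGES = [20, 40, 75, 155, 190, 270, 295, 360]

def quantize_hsv(r, g, b):
    """Map an RGB pixel (0-255 per channel) to one of 72 coarse HSV bins."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_deg = h * 360.0
    h_bin = 0
    for i, edge in enumerate(HUE_EDGES):
        if h_deg <= edge:
            h_bin = i
            break
    # Saturation and value each split into 3 coarse levels.
    s_bin = 0 if s < 0.2 else (1 if s < 0.7 else 2)
    v_bin = 0 if v < 0.2 else (1 if v < 0.7 else 2)
    return h_bin * 9 + s_bin * 3 + v_bin  # 8 * 3 * 3 = 72 bins
```

Counting quantized indices over all pixels yields the compact color histogram that is concatenated with the wavelet texture features.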
Web Image Retrieval Using Visual Dictionary (ijwscjournal)
In this research, we propose a semantic-based image retrieval system to retrieve a set of relevant images for a given query image from the Web. We use a global color space model and the Dense SIFT feature extraction technique to generate a visual dictionary with the proposed quantization algorithm. The images are transformed into sets of features, and these features are used as inputs to our proposed quantization algorithm to generate the code words that form the visual dictionary. The codewords represent images semantically as visual labels using Bag-of-Features (BoF). The histogram intersection method is used to measure the distance between the input image and the images in the database to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
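The histogram intersection measure mentioned above has a one-line core: sum the bin-wise minima of the two histograms. A minimal sketch, with normalization by the database histogram's mass (one common convention):

```python
import numpy as np

def histogram_intersection(h_query, h_db):
    """Histogram intersection similarity: the sum of bin-wise minima,
    normalized by the total mass of the database histogram."""
    h_query = np.asarray(h_query, dtype=float)
    h_db = np.asarray(h_db, dtype=float)
    return float(np.minimum(h_query, h_db).sum() / h_db.sum())
```

Identical histograms score 1.0 and disjoint ones score 0.0, so retrieval ranks database images by descending intersection score.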
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary... (Zahra Mansoori)
This document presents a new approach for content-based image retrieval that combines color, texture, and a binary tree structure to describe images and their features. Color histograms in HSV color space and wavelet texture features are extracted as low-level features. A binary tree partitions each image into regions based on color and represents higher-level spatial relationships. The performance of the proposed system is evaluated on a subset of the COREL image database and compared to the SIMPLIcity image retrieval system. Experimental results show the proposed system has better retrieval performance than SIMPLIcity in some categories and comparable performance in others.
MMFO: modified moth flame optimization algorithm for region based RGB color i... (IJECEIAES)
Region-based color image segmentation is an elementary step in image processing and computer vision, and it faces the problem of multidimensionality. The color image is considered a five-dimensional problem: three dimensions in color (RGB) and two dimensions in geometry (luminosity layer and chromaticity layer). In this paper, L*a*b color space conversion is used to reduce one dimension, and geometrically the image is converted into an array, reducing a further dimension. The paper introduces an improved algorithm, modified moth flame optimization (MMFO), for RGB color image segmentation, based on bio-inspired techniques. The simulation results show that MMFO performs better than PSO and GA for region-based color image segmentation in terms of computation time for all the images, and the method gives clear segments based on the different colors and the different numbers of clusters used during the segmentation process.
This document summarizes image indexing and its features. It discusses that image indexing is used to retrieve similar images from a database based on extracted features like color, shape, and texture. Color features can be represented by models like RGB, HSV, and color histograms. Shape features include global properties like roundness and local features like edge segments. Texture is described using statistical, structural, and spectral approaches. Texture feature extraction methods discussed include standard wavelets, Gabor wavelets, and extracting features like entropy and standard deviation. The paper provides an overview of the different features used for image indexing and classification.
Image Retrieval using Equalized Histogram Image Bins Moments (IDES Editor)
CBIR operates on a totally different principle from keyword indexing. Primitive features characterizing image content, such as color, texture, and shape, are computed for both stored and query images and used to identify the images most closely matching the query. There have been many approaches to deciding and extracting the features of images in the database. Towards this goal we propose a technique by which the color content of images is automatically extracted to form a class of meta-data that is easily indexed. The color indexing algorithm uses the back-projection of binary color sets to extract color regions from images, working on histogram bins of the red, green, and blue color channels. The feature vector is composed of the mean, standard deviation, and variance of 16 histogram bins of each color channel. The proposed methods are tested on a database of 600 images and the results are reported in terms of precision and recall.
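The feature vector described above (mean, standard deviation, and variance of 16 histogram bins per color channel) can be sketched as follows, assuming 8-bit RGB input:

```python
import numpy as np

def channel_moments(image, bins=16):
    """Per-channel features: mean, standard deviation, and variance
    of a normalized 16-bin intensity histogram for R, G, and B."""
    features = []
    for c in range(3):
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        hist = hist / hist.sum()  # normalize bin counts to fractions
        features.extend([hist.mean(), hist.std(), hist.var()])
    return np.array(features)  # 3 moments x 3 channels = 9 values
```

Each image is thus reduced to nine numbers, which keeps indexing cheap; the precision/recall figures reported in the abstract measure how much discriminative power survives that compression.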
Multi Resolution Features of Content Based Image Retrieval (IDES Editor)
Many content-based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform, and Complex Wavelet Transform techniques for efficient image retrieval from a huge database. The Color Histogram technique is based on exact matching of the histograms of the query image and the database images. The Discrete Wavelet Transform technique retrieves images based on the computation of wavelet coefficients of subbands. The Complex Wavelet Transform technique computes real and imaginary parts to extract details from texture. The proposed method is tested on the COREL1000 database, and the retrieval results demonstrate a significant improvement in precision and recall.
Evaluation of Euclidean and Manhattan Metrics in Content Based Image Retriev... (IJERA Editor)
This document evaluates the performance of the Euclidean and Manhattan distance metrics in a content-based image retrieval system. It finds that the Manhattan distance metric showed better precision than the Euclidean distance metric. The system uses color histograms and Gabor texture features to represent images. Color is represented in HSV color space and histograms of hue, saturation and value are used. Gabor filters are applied to capture texture at different scales and orientations. Distance between feature vectors is calculated using Euclidean and Manhattan distance formulas to find similar images from the database. The system was tested on a dataset of 1000 Corel images and Manhattan distance produced more relevant search results.
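The two distance metrics compared in that study are straightforward to state over feature vectors; a minimal sketch:

```python
import numpy as np

def euclidean(a, b):
    """L2 distance between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(((a - b) ** 2).sum()))

def manhattan(a, b):
    """L1 distance between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.abs(a - b).sum())
```

Retrieval ranks database vectors by ascending distance from the query vector under either metric; Manhattan distance only sums absolute differences (no squaring or square root), and per the study above it also produced more relevant results on the 1000-image Corel set.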
A Review of Feature Extraction Techniques for CBIR Based on SVM (IJEEE)
With the advancement of multimedia technologies, users are no longer satisfied with conventional retrieval techniques, so Content Based Image Retrieval (CBIR) systems were introduced. CBIR is the application of retrieving or searching digital images from a large database. The term "content" refers to the color, shape, texture, and all other information extracted from the image itself. This paper reviews CBIR systems that use SVM-classifier-based algorithms in the feature extraction phase.
A Comparative Study on Content Based Image Retrieval Methods (IJLT EMAS)
Content-based image retrieval (CBIR) is a method of finding images in a huge image database according to a person's interests. Content-based here means that the search involves analyzing the actual content present in the image. As image databases grow day by day, researchers are searching for better retrieval techniques that maintain good efficiency. This paper presents the visual features and various ways of retrieving images from a huge image database.
Wavelet-Based Color Histogram on Content-Based Image Retrieval (TELKOMNIKA JOURNAL)
The growth of image databases in many domains, including fashion, biometrics, graphic design, architecture, etc., has increased rapidly. Content-Based Image Retrieval (CBIR) is a technique for finding relevant images in those huge and unannotated image databases based on low-level features of the query images. In this study, an attempt to employ a 2nd-level Wavelet Based Color Histogram (WBCH) in a CBIR system is proposed. The image database used in this study is taken from Wang's image database containing 1000 color images. The experimental results show that 2nd-level WBCH gives better precision (0.777) than the other methods, including 1st-level WBCH, Color Histogram, Color Co-occurrence Matrix, and Wavelet texture features. It can be concluded that 2nd-level WBCH can be applied to a CBIR system.
Content Based Image Retrieval: Classification Using Neural Networks (ijma)
In a content-based image retrieval (CBIR) system, the main issue is to extract the image features that effectively represent the image contents in a database. Such extraction requires a detailed evaluation of the retrieval performance of image features. This paper presents a review of fundamental aspects of content-based image retrieval, including extraction of color and texture features. Commonly used color features, including color moments, color histograms, and color correlograms, are compared along with Gabor texture features. The paper reviews the increase in retrieval efficiency when color and texture features are combined, and also discusses the similarity measures by which matches are made and images are retrieved. For effective indexing and fast searching of images based on visual features, neural-network-based pattern learning can be used to achieve effective classification.
Text-Image Separation in Document Images using Boundary/Perimeter Detection (IDES Editor)
Document analysis plays an important role in office automation, especially in intelligent signal processing. The proposed system consists of two modules: block segmentation and block identification. In this approach, a document is first segmented into several non-overlapping blocks using a novel recursive segmentation technique, and then the features embedded in each segmented block are extracted. Two kinds of features are extracted: connected components and image boundary/perimeter features. Documents with text inside images posed limitations in earlier reported literature; this is handled by applying an additional pass of Run Length Smearing on the extracted image that contains text. The proposed scheme is independent of the type and language of the document.
A Survey on Image Retrieval By Different Features and Techniques (IRJET Journal)
This document discusses various techniques for content-based image retrieval. It begins with an introduction to content-based image retrieval and describes how it uses visual features like color, texture, shape and regions to index and represent image content for retrieval. The document then reviews related work on image retrieval using different features. It discusses features used for image identification like color, edges, corners and texture. The document also outlines techniques for image retrieval including relevance feedback, support vector machines, block truncation coding, and image clustering. Finally, it evaluates parameters for comparing image retrieval algorithms.
The invention of digital technology has led to an increase in the number of images that can be stored in digital format, so searching and retrieving images in large image databases has become more challenging. Over the last few years, Content Based Image Retrieval (CBIR) has gained increasing attention from researchers. CBIR is a system that uses visual features of an image to find the images a user requires in a large image database, with the user's request taking the form of a query image. Important features of images are colour, texture, and shape, which give detailed information about the image. CBIR techniques using different feature extraction techniques are discussed in this paper.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
An Improved Way of Segmentation and Classification of Remote Sensing Images U... (ijsrd.com)
The significance of digital image processing stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, communication, and representation for autonomous machine perception. The objective of this research work is to define the meaning and scope of segmentation of remote sensing images, which are subsequently classified with statistical measures. In this paper a kernel-induced Possibilistic C-means clustering algorithm has been implemented for classifying remote sensing image data using image features. Finally, the proposed work shows that this algorithm segments and classifies images with better accuracy in terms of statistical metrics.
Comparative Study of Dimensionality Reduction Techniques Using PCA and ... (csandit)
The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space onto the feature space where the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes for best class separability. The proposed method is experimented over a general image database using Matlab. The performance of these systems has been evaluated by precision and recall measures. Experimental results show that the PCA-based dimension reduction method gives better performance, with higher precision and recall values and lower computational complexity than the LDA-based method.
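The PCA half of that comparison, projecting data onto the directions of maximal variance, can be sketched in a few lines via the SVD. This is a generic sketch, not the paper's Matlab implementation:

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components,
    i.e. the orthogonal directions of maximal variance."""
    Xc = X - X.mean(axis=0)  # center each feature
    # Right singular vectors of the centered matrix are the PCA axes,
    # ordered by decreasing variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # (n_samples, k) reduced representation
```

For rank-1 data (all points on a line), a single component already captures the full variance, which is why PCA can shrink feature vectors aggressively before the precision/recall evaluation.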
A Hybrid Approach for Content Based Image Retrieval System (IOSR Journals)
This document describes a hybrid approach for content-based image retrieval. It combines several spatial features - row sum, column sum, forward and backward diagonal sums - and histograms to represent images with feature vectors. Euclidean distance is used to calculate similarity between a query image's feature vector and those in the database. The approach is evaluated using precision-recall calculations on different image groups, showing the hybrid method performs best by combining multiple features.
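The spatial features named above (row sums, column sums, and the two diagonal sums) can be sketched as follows for a grayscale image. Interpreting the backward diagonals as anti-diagonals (traces of the horizontally flipped image) is an assumption, since the abstract does not spell it out:

```python
import numpy as np

def spatial_features(img):
    """Concatenate row sums, column sums, and forward/backward
    diagonal sums of a grayscale image into one feature vector."""
    img = np.asarray(img, dtype=float)
    rows = img.sum(axis=1)  # one sum per row
    cols = img.sum(axis=0)  # one sum per column
    diag_range = range(-img.shape[0] + 1, img.shape[1])
    fwd = np.array([np.trace(img, offset=k) for k in diag_range])
    # Anti-diagonals = diagonals of the left-right mirrored image.
    bwd = np.array([np.trace(np.fliplr(img), offset=k) for k in diag_range])
    return np.concatenate([rows, cols, fwd, bwd])
```

The query's feature vector is then compared against the database vectors with Euclidean distance, as the abstract describes.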
Feature Integration for Image Information Retrieval Using Image Mining Techni... (iaemedu)
This document discusses feature extraction techniques for image information retrieval. It proposes integrating features using image mining to generate a super set of features. It describes extracting primitive features of color, texture, and shape. Color is extracted using histograms in RGB color space. Texture is extracted statistically using co-occurrence matrices and wavelet transforms. Shape is extracted using boundary-based and region-based methods like Canny edge detection. The document asserts that integrating features, such as color and texture or texture and shape, results in better performance than using features individually for image retrieval.
The development of multimedia system technology in Content Based Image Retrieval (CBIR) systems is one of the outstanding areas for retrieving images from a large collection in a database. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective at extracting features from all the differing kinds of natural images. Thus an intensive analysis of certain color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a particular type of image. The extraction of an image includes feature description and feature extraction. In this paper, we propose Color Layout Descriptor (CLD), Gray Level Co-Occurrence Matrix (GLCM), and Marker-Controlled Watershed Segmentation feature extraction techniques that find the matching image based on the similarity of color, texture, and shape within the database. For performance analysis, the image retrieval timing results of the proposed technique are calculated and compared with each of the individual features.
This document provides a review of various texture classification approaches and texture datasets. It begins with an introduction to texture classification and its general framework. Key steps in texture classification are preprocessing, feature extraction, and classification. The document then discusses several common feature extraction methods used in texture classification, including local binary pattern (LBP), scale invariant feature transform (SIFT), speeded up robust features (SURF), Fourier transformation, texture spectrum, and gray level co-occurrence matrix (GLCM). It also reviews three popular classifiers for texture classification: K-nearest neighbors (K-NN), artificial neural network (ANN), and support vector machine (SVM). Finally, it mentions several popular texture datasets that are commonly used for training and testing texture classification methods.
The document proposes an eXtended Center-Symmetric Local Binary Pattern (XCS-LBP) descriptor for background modeling and subtraction in videos. The XCS-LBP extracts more image details than the CS-LBP while producing a smaller histogram. Experimental results on real world video datasets show that XCS-LBP outperforms other descriptors like LBP, CS-LBP, and CS-LDP both qualitatively and quantitatively for background subtraction tasks. The XCS-LBP is also more computationally efficient and robust to illumination changes and noise.
A trends of salmonella and antibiotic resistance (Alexander Decker)
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...Zahra Mansoori
This document presents a new approach for content-based image retrieval that combines color, texture, and a binary tree structure to describe images and their features. Color histograms in HSV color space and wavelet texture features are extracted as low-level features. A binary tree partitions each image into regions based on color and represents higher-level spatial relationships. The performance of the proposed system is evaluated on a subset of the COREL image database and compared to the SIMPLIcity image retrieval system. Experimental results show the proposed system has better retrieval performance than SIMPLIcity in some categories and comparable performance in others.
MMFO: modified moth flame optimization algorithm for region based RGB color i...IJECEIAES
Region-based color image segmentation is elementary steps in image processing and computer vision. The region-based color image segmentation has faced the problem of multidimensionality. The color image is considered in five-dimensional problems, in which three dimensions in color (RGB) and two dimensions in geometry (luminosity layer and chromaticity layer). In this paper, L*a*b color space conversion has been used to reduce the one dimension and geometrically it converts in the array hence the further one dimension has been reduced. This paper introduced, an improved algorithm modified moth flame optimization (MMFO) algorithm for RGB color image segmentation which is based on bio-inspired techniques. The simulation results of MMFO for region based color image segmentation are performed better as compared to PSO and GA, in terms of computation times for all the images. The experiment results of this method gives clear segments based on the different color and the different number of clusters is used during the segmentation process.
This document summarizes image indexing and its features. It discusses that image indexing is used to retrieve similar images from a database based on extracted features like color, shape, and texture. Color features can be represented by models like RGB, HSV, and color histograms. Shape features include global properties like roundness and local features like edge segments. Texture is described using statistical, structural, and spectral approaches. Texture feature extraction methods discussed include standard wavelets, Gabor wavelets, and extracting features like entropy and standard deviation. The paper provides an overview of the different features used for image indexing and classification.
Image Retrieval using Equalized Histogram Image Bins MomentsIDES Editor
CBIR operates on a totally different principle from keyword indexing. Primitive features characterizing image content, such as color, texture, and shape, are computed for both stored and query images and used to identify the images most closely matching the query. There have been many approaches to deciding on and extracting the features of images in the database. Towards this goal we propose a technique by which the color content of images is automatically extracted to form a class of meta-data that is easily indexed. The color indexing algorithm uses the back-projection of binary color sets to extract color regions from images. Rather than the full image histogram, this technique uses histogram bins of the red, green, and blue colors: the feature vector is composed of the mean, standard deviation, and variance of 16 histogram bins for each color channel. The proposed methods are tested on a database of 600 images, and the results are reported as precision and recall.
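The feature vector described above (mean, standard deviation, and variance of 16 histogram bins per color channel) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the helper names are hypothetical, and pixel values are assumed to lie in 0..255.

```python
import statistics

def channel_features(channel, bins=16):
    """Mean, standard deviation and variance of a 16-bin histogram
    computed over one color channel (pixel values in 0..255)."""
    hist = [0] * bins
    width = 256 // bins  # 16 intensity levels fall into each bin
    for v in channel:
        hist[v // width] += 1
    return (statistics.mean(hist),
            statistics.pstdev(hist),
            statistics.pvariance(hist))

def feature_vector(r, g, b):
    """Concatenate the statistics of the R, G and B histograms
    into a 9-element feature vector."""
    return channel_features(r) + channel_features(g) + channel_features(b)
```

Matching then reduces to comparing these short vectors instead of full histograms, which is the space saving the abstract alludes to.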
Multi Resolution features of Content Based Image RetrievalIDES Editor
Many content-based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform, and Complex Wavelet Transform techniques for efficient image retrieval from a huge database. The Color Histogram technique is based on exact matching of the histograms of the query and database images. The Discrete Wavelet Transform technique retrieves images based on computation of the wavelet coefficients of subbands. The Complex Wavelet Transform technique computes real and imaginary parts to extract details from texture. The proposed method is tested on the COREL1000 database, and the retrieval results demonstrate a significant improvement in precision and recall.
Evaluation of Euclidean and Manhattan Metrics In Content Based Image Retriev...IJERA Editor
This document evaluates the performance of the Euclidean and Manhattan distance metrics in a content-based image retrieval system. It finds that the Manhattan distance metric showed better precision than the Euclidean distance metric. The system uses color histograms and Gabor texture features to represent images. Color is represented in HSV color space and histograms of hue, saturation and value are used. Gabor filters are applied to capture texture at different scales and orientations. Distance between feature vectors is calculated using Euclidean and Manhattan distance formulas to find similar images from the database. The system was tested on a dataset of 1000 Corel images and Manhattan distance produced more relevant search results.
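The two distance formulas the system compares can be written down directly; the `rank` helper below is a hypothetical illustration of how a CBIR system would order database images by similarity to a query's feature vector, not the paper's code.

```python
import math

def euclidean(a, b):
    """Euclidean (L2) distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Manhattan (L1, city-block) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def rank(query, database, metric):
    """Return database indices sorted from most to least similar."""
    return sorted(range(len(database)),
                  key=lambda i: metric(query, database[i]))
```

Swapping `metric` between the two functions is all it takes to reproduce the kind of precision comparison the abstract reports.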
A Review of Feature Extraction Techniques for CBIR based on SVMIJEEE
With the advancement of multimedia technologies, users are no longer satisfied with conventional retrieval techniques, so an application, the Content Based Image Retrieval System, was introduced. CBIR is an application for retrieving or searching digital images from a large database. The term "content" refers to the colour, shape, texture, and all other information extracted from the image itself. This paper reviews CBIR systems which use SVM-classifier-based algorithms in the feature extraction phase.
A comparative study on content based image retrieval methodsIJLT EMAS
Content-based image retrieval (CBIR) is a method of finding images in a huge image database according to a person's interests. Content-based here means that the search involves analyzing the actual content present in the image. As image databases grow day by day, researchers are searching for better retrieval techniques that maintain good efficiency. This paper presents the visual features and various ways of retrieving images from a huge image database.
Wavelet-Based Color Histogram on Content-Based Image RetrievalTELKOMNIKA JOURNAL
The growth of image databases in many domains, including fashion, biometric, graphic design,
architecture, etc. has increased rapidly. Content Based Image Retrieval System (CBIR) is a technique used
for finding relevant images from those huge and unannotated image databases based on low-level features
of the query images. In this study, an attempt to employ 2nd level Wavelet Based Color Histogram (WBCH)
on a CBIR system is proposed. The image database used in this study is taken from Wang's image database, containing 1000 color images. The experimental results show that 2nd level WBCH gives better precision
(0.777) than the other methods, including 1st level WBCH, Color Histogram, Color Co-occurrence Matrix,
and Wavelet texture feature. It can be concluded that the 2nd Level of WBCH can be applied to CBIR system.
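The "2nd level" in WBCH means applying the wavelet transform a second time to the approximation band. As a simplified, one-dimensional stand-in for the 2-D Haar transform applied to image channels, one Haar step can be sketched as:

```python
import math

def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise scaled
    sums (approximation band) and differences (detail band).
    The signal length is assumed to be even."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail
```

A second-level decomposition is then `haar_step(approx)` applied to the first level's approximation coefficients; WBCH builds its color histogram from the resulting low-frequency band.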
Content Based Image Retrieval : Classification Using Neural Networksijma
In a content-based image retrieval system (CBIR), the main issue is to extract the image features that
effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of
retrieval performance of image features. This paper presents a review of fundamental aspects of content
based image retrieval, including feature extraction of color and texture features. Commonly used color features, including color moments, color histograms, and color correlograms, as well as Gabor texture, are compared. The paper reviews the increase in efficiency of image retrieval when the color and texture
features are combined. The similarity measures based on which matches are made and images are
retrieved are also discussed. For effective indexing and fast searching of images based on visual features,
neural network based pattern learning can be used to achieve effective classification.
Text-Image Separation in Document Images using Boundary/Perimeter DetectionIDES Editor
Document analysis plays an important role in office
automation, especially in intelligent signal processing. The
proposed system consists of two modules: block segmentation
and block identification. In this approach, first a document is
segmented into several non-overlapping blocks by utilizing a
novel recursive segmentation technique, and then the features embedded in each segmented block are extracted. Two kinds of features are extracted: connected components and image boundary/perimeter features. Documents with text inside an image posed limitations in earlier reported literature; this is taken care of by applying an additional pass of Run Length Smearing on the extracted image that contains text. The proposed scheme is independent of the type and language of the document.
A Survey on Image Retrieval By Different Features and TechniquesIRJET Journal
This document discusses various techniques for content-based image retrieval. It begins with an introduction to content-based image retrieval and describes how it uses visual features like color, texture, shape and regions to index and represent image content for retrieval. The document then reviews related work on image retrieval using different features. It discusses features used for image identification like color, edges, corners and texture. The document also outlines techniques for image retrieval including relevance feedback, support vector machines, block truncation coding, and image clustering. Finally, it evaluates parameters for comparing image retrieval algorithms.
The invention of digital technology has led to an increase in the number of images that can be stored in digital format, so searching and retrieving images in large image databases has become more challenging. In the last few years, Content Based Image Retrieval (CBIR) has gained increasing attention from researchers. CBIR is a system which uses visual features of an image to search a large image database for the images a user requires, with the user's request in the form of a query image. Important features of images are colour, texture and shape, which give detailed information about the image. CBIR techniques using different feature extraction techniques are discussed in this paper.
An Improved Way of Segmentation and Classification of Remote Sensing Images U...ijsrd.com
The significance of digital image processing stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, communication, and representation for autonomous machine perception. The objective of this research work is to define the meaning and scope of image segmentation of remote sensing images, which are subsequently classified with statistical measures. In this paper a kernel-induced possibilistic C-means clustering algorithm is implemented for classifying remote sensing image data using image features. In conclusion, the proposed algorithm works well for segmenting and classifying the images with better accuracy, as shown by statistical metrics.
COMPARATIVE STUDY OF DIMENSIONALITY REDUCTION TECHNIQUES USING PCA AND ...csandit
The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space onto the feature space where the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes for best class separability. The proposed method is experimented over a general image database using Matlab. The performance of these systems has been evaluated by Precision and Recall measures. Experimental results show that the PCA-based dimension reduction method gives better performance, in terms of higher precision and recall values with lesser computational complexity, than the LDA-based method.
A Hybrid Approach for Content Based Image Retrieval SystemIOSR Journals
This document describes a hybrid approach for content-based image retrieval. It combines several spatial features - row sum, column sum, forward and backward diagonal sums - and histograms to represent images with feature vectors. Euclidean distance is used to calculate similarity between a query image's feature vector and those in the database. The approach is evaluated using precision-recall calculations on different image groups, showing the hybrid method performs best by combining multiple features.
Feature integration for image information retrieval using image mining techni...iaemedu
This document discusses feature extraction techniques for image information retrieval. It proposes integrating features using image mining to generate a super set of features. It describes extracting primitive features of color, texture, and shape. Color is extracted using histograms in RGB color space. Texture is extracted statistically using co-occurrence matrices and wavelet transforms. Shape is extracted using boundary-based and region-based methods like Canny edge detection. The document asserts that integrating features, such as color and texture or texture and shape, results in better performance than using features individually for image retrieval.
The development of multimedia system technology makes Content Based Image Retrieval (CBIR) one of the outstanding areas for retrieving images from a large database. The feature vectors of the query image are compared with the feature vectors of the database images to get matching images. It is widely observed that no single algorithm is effective at extracting features from all the different kinds of natural images. Thus an intensive analysis of certain color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique that suits a particular sort of images. Image feature extraction includes feature description and feature extraction. In this paper, we propose Color Layout Descriptor (CLD), Grey Level Co-Occurrence Matrix (GLCM), and Marker-Controlled Watershed Segmentation feature extraction techniques, which retrieve matching images based on the similarity of color, texture, and shape within the database. For performance analysis, the image retrieval timing results of the proposed technique are calculated and compared with each of the individual features.
This document provides a review of various texture classification approaches and texture datasets. It begins with an introduction to texture classification and its general framework. Key steps in texture classification are preprocessing, feature extraction, and classification. The document then discusses several common feature extraction methods used in texture classification, including local binary pattern (LBP), scale invariant feature transform (SIFT), speeded up robust features (SURF), Fourier transformation, texture spectrum, and gray level co-occurrence matrix (GLCM). It also reviews three popular classifiers for texture classification: K-nearest neighbors (K-NN), artificial neural network (ANN), and support vector machine (SVM). Finally, it mentions several popular texture datasets that are commonly used for training and testing texture
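The basic 3x3 LBP operator that recurs throughout these abstracts thresholds each pixel's eight neighbours against the centre and packs the results into an 8-bit code. A minimal sketch (the bit ordering is a convention; clockwise from the top-left is chosen here) is:

```python
def lbp_code(image, r, c):
    """Basic 3x3 LBP: threshold the 8 neighbours of pixel (r, c)
    against the centre value and pack the results into an 8-bit code."""
    center = image[r][c]
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """256-bin histogram of LBP codes over all interior pixels,
    usable directly as a texture feature vector."""
    hist = [0] * 256
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            hist[lbp_code(image, r, c)] += 1
    return hist
```

The histogram, rather than the per-pixel codes, is what gets compared between images; variants such as CS-LBP or the GCLBP of the main paper change how the codes are formed, not this overall pipeline.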
The document proposes an eXtended Center-Symmetric Local Binary Pattern (XCS-LBP) descriptor for background modeling and subtraction in videos. The XCS-LBP extracts more image details than the CS-LBP while producing a smaller histogram. Experimental results on real world video datasets show that XCS-LBP outperforms other descriptors like LBP, CS-LBP, and CS-LDP both qualitatively and quantitatively for background subtraction tasks. The XCS-LBP is also more computationally efficient and robust to illumination changes and noise.
A survey on feature descriptors for texture image classificationIRJET Journal
This document discusses various feature descriptors that can be used for texture image classification. It provides an overview of 8 different feature descriptors that have been proposed in recent research: local binary patterns (LBP), dominant local binary patterns (DLBP), completed local binary patterns (CLBP), Weber local descriptor (WLD), local binary count (LBC), discriminant face descriptor (DFD), local vector quantization pattern (LVQP), and dense micro-block difference (DMD). For each descriptor, it briefly explains the approach and compares the methods. The goal of the descriptors is to effectively capture textural features while addressing challenges like rotation, illumination changes, and noise.
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...IRJET Journal
This document proposes a method to detect digital image forgeries using local binary patterns (LBP) and histogram of oriented gradients (HOG). It extracts LBP features from the input image, then applies HOG to the LBP features. These combined features are classified using a support vector machine (SVM) as authentic or tampered. Testing on CASIA datasets achieved detection rates of 92.3% for CASIA-1 and 96.1% for CASIA-2, outperforming other existing methods. The method is effective at forgery detection while having reduced time complexity.
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING sipij
The aim of this paper is to present a comparative study of two linear dimension reduction methods namely
PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to
transform the high dimensional input space onto the feature space where the maximal variance is
displayed. The feature selection in traditional LDA is obtained by maximizing the difference between
classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the
whole data set, whereas LDA tries to find the axes for best class separability. The neural network is trained on the reduced feature set (using PCA or LDA) of the images in the database, for fast searching of images from the database using the back-propagation algorithm. The proposed method is experimented over a general
image database using Matlab. The performance of these systems has been evaluated by Precision and
Recall measures. Experimental results show that PCA gives the better performance in terms of higher
precision and recall values with lesser computational complexity than LDA
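A minimal illustration of the PCA step, finding the axis of maximal variance, can be written without any library. This sketch is a simplification of what the abstracts describe: it uses 2-D points and power iteration on the covariance matrix rather than a full eigendecomposition of high-dimensional image features.

```python
def pca_first_axis(data):
    """Principal axis of a set of 2-D points, found by power
    iteration on the 2x2 covariance matrix."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    centered = [(x - mx, y - my) for x, y in data]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    # Power iteration converges to the eigenvector of the
    # largest eigenvalue, i.e. the direction of maximal variance.
    vx, vy = 1.0, 0.0
    for _ in range(100):
        nx = cxx * vx + cxy * vy
        ny = cxy * vx + cyy * vy
        norm = (nx * nx + ny * ny) ** 0.5
        vx, vy = nx / norm, ny / norm
    return vx, vy
```

Projecting each feature vector onto the leading axes yields the reduced feature set on which the neural network is then trained.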
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLINGsipij
Object tracking is one of the most important problems in computer vision. The aim of video tracking is to extract the trajectory of a target or object of interest, i.e. accurately locate a moving target in a video sequence and discriminate the target from non-targets in the feature space of the sequence, so feature descriptors can have significant effects on such discrimination. In this paper, we use the basic idea of many trackers, which consists of three main components of the reference model: object modeling, object detection and localization, and model updating. However, there are major improvements in our system. Our fourth component, occlusion handling, utilizes the r-spatiogram to detect the best target candidate. While a spatiogram contains moments over the coordinates of the pixels, the r-spatiogram computes region-based compactness on the distribution of the given feature in the image, capturing richer features to represent the objects. The proposed research develops an efficient and robust way to keep tracking the object throughout video sequences in the presence of significant appearance variations and severe occlusions. The proposed method is evaluated on the Princeton RGBD tracking dataset, considering sequences with different challenges, and the obtained results demonstrate the effectiveness of the proposed method.
Scene classification using pyramid histogram ofijcsa
This document proposes a new method called Pyramid Histogram of Multi-scale Block Local Binary Pattern (PH-MBLBP) for scene classification. PH-MBLBP encodes both micro- and macro-structures of image patterns to provide a more complete representation than basic LBP. It divides images into spatial regions at multiple resolutions to capture geometric information. Experiments on 15 scene categories show PH-MBLBP outperforms SIFT and provides a powerful yet fast texture descriptor for scene recognition.
EFFICIENT IMAGE RETRIEVAL USING REGION BASED IMAGE RETRIEVALsipij
1) The document describes an efficient region-based image retrieval system that uses discrete wavelet transform and k-means clustering. It segments images into regions, each characterized by features like size, mean, and covariance.
2) The system pre-processes images by resizing, converting to HSV color space, performing DWT, and using k-means clustering on DWT coefficients to generate regions. It extracts features for each region and stores them in a database.
3) For retrieval, it pre-processes the query image similarly and calculates similarities between the query regions and database regions based on their features, returning similar images.
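The k-means clustering step on DWT coefficients can be illustrated with a tiny one-dimensional version. This is only a sketch of the idea: the deterministic min/max initialisation below is a simplification of the usual random seeding, and real systems cluster multi-dimensional coefficient vectors.

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means, e.g. for grouping scalar DWT coefficients
    into k clusters (requires k >= 2)."""
    lo, hi = min(values), max(values)
    # Spread initial centres evenly between the minimum and maximum.
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        # Move each centre to the mean of its cluster; keep empty
        # clusters' centres in place.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers
```

Each resulting cluster corresponds to a region, from which the size, mean, and covariance features described above would then be computed.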
An effective RGB color selection for complex 3D object structure in scene gra...IJECEIAES
The goal of our project is to develop a complete, fully detailed 3D interactive model of the human body and its systems, and to allow the user to interact in 3D with all the elements of those systems, in order to teach students human anatomy. Some organs that carry a great deal of anatomical detail, such as the brain, lungs, liver, and heart, need to be described accurately and in minute detail. These organs need all of the detailed medical information required to learn how to perform surgery on them, and should allow the user to add careful and precise markings to indicate the operative landmarks at the surgery location. Adding so many different items of information is challenging when the area to which the information needs to be attached is very detailed and overlaps with all kinds of other medical information related to the area. Existing methods of tagging areas did not allow us sufficient locations to attach the information. Our solution combines a variety of tagging methods, marking by selecting the RGB color area drawn in the texture on the complex 3D object structure. It then relies on those RGB color codes to tag IDs and create relational tables that store the related information about specific areas of the anatomy. With this method of marking, the entire set of color values (R, G, B) can be used to identify a set of anatomic regions, which also makes it possible to define multiple overlapping regions.
HYPERSPECTRAL IMAGERY CLASSIFICATION USING TECHNOLOGIES OF COMPUTATIONAL INTE...IAEME Publication
Texture information is exploited for classification of HSI (Hyperspectral Imagery) at high spatial resolution. For this purpose, the framework employs LBP (Local Binary Patterns) to extract local image features such as edges, corners, and spots. After extraction of the LBP features, two levels of fusion are applied along with Gabor and spectral features: feature-level fusion and decision-level fusion. In feature-level fusion, multiple features are combined before pattern classification, while decision-level fusion works on the probability output of each individual classification pipeline and combines the distinct decisions into a final one. Decision-level fusion consists of either a hard or a soft fusion method: hard fusion takes the majority vote, while soft fusion uses a linear/logarithmic opinion pool at the probability level (LOGP). In addition, an extreme learning machine (ELM) classifier, which is more efficient than a support vector machine (SVM), is used to provide probabilistic classification output. It has a simple structure with one hidden layer and one linear output layer, and is trained much faster than an SVM.
Low level features for image retrieval basedcaijjournal
In this paper, we present a novel approach for image retrieval based on extraction of low level features
using techniques such as Directional Binary Code (DBC), Haar Wavelet transform and Histogram of
Oriented Gradients (HOG). The DBC texture descriptor captures the spatial relationship between any pair
of neighbourhood pixels in a local region along a given direction, while Local Binary Patterns (LBP)
descriptor considers the relationship between a given pixel and its surrounding neighbours. Therefore,
DBC captures more spatial information than LBP and its variants, also it can extract more edge
information than LBP. Hence, we employ DBC technique in order to extract grey level texture features
(texture map) from each RGB channels individually and computed texture maps are further combined
which represents colour texture features (colour texture map) of an image. Then, we decomposed the
extracted colour texture map and original image using Haar wavelet transform. Finally, we encode the
shape and local features of wavelet transformed images using Histogram of Oriented Gradients (HOG) for
content based image retrieval. The performance of the proposed method is compared with existing methods on two databases, Wang's Corel image database and Caltech 256. The evaluation results show that our approach outperforms the existing methods for image retrieval.
An Hypergraph Object Oriented Model For Image Segmentation And AnnotationCrystal Sanchez
This document presents a system for segmenting images into regions and annotating those regions semantically. It uses a hypergraph object-oriented model constructed on a hexagonal image structure to represent the image, segmentation results, and annotation information. The system segments images by treating it as a hypergraph partitioning problem based on color and syntactic features. Experimental results on the Berkeley Dataset show the method is robust.
Query Image Searching With Integrated Textual and Visual Relevance Feedback f...IJERA Editor
Many researchers have studied relevance feedback in the content-based image retrieval (CBIR) literature, but no CBIR search engine supports it, because of scalability, effectiveness, and efficiency issues. Here, we implement integrated relevance feedback (RF) for retrieval of web images, concentrating on the integration of both textual-feature (TF) and visual-feature (VF) based relevance feedback; we also test them individually. The TF-based RF employs an effective search result clustering (SRC) algorithm to obtain salient phrases. Then a new user interface (UI) is proposed to support RF. Experimental results show that the proposed algorithm is scalable, effective, and accurate.
Texture descriptor based on local combination adaptive ternary patternProjectsatbangalore
The document describes a new texture descriptor called local combination adaptive ternary pattern (LCATP) to classify materials. LCATP uses a combination of three adaptive thresholding techniques to encode both color and local structure information, making it more robust to noise, illumination changes, and low image quality. It extends the approach to four color spaces and combines the descriptors to form LCATP fusion (LCATP_F). An evaluation on the challenging KTH-TIPS2b dataset shows LCATP_F improves classification accuracy over state-of-the-art methods, particularly in handling scale and pose variations.
PDE BASED FEATURES FOR TEXTURE ANALYSIS USING WAVELET TRANSFORMIJCI JOURNAL
This document summarizes a research paper that proposes a novel method for texture analysis using wavelet transforms and partial differential equations (PDEs). The method involves applying wavelet transforms to images to obtain directional information. Anisotropic diffusion, a PDE technique, is then used on the directional information to compute a texture approximation. Various statistical features are extracted from the texture approximation. Linear discriminant analysis enhances class separability of the features before classification using k-nearest neighbors. The method is evaluated on the Brodatz texture dataset and results show it achieves better classification accuracy than other methods while having lower computational cost.
This document summarizes an approach for content-based image retrieval using histograms. It discusses representing images as Histogram Attributed Relational Graphs (HARGs) where each node is an image region and edges represent relations between regions. A query is converted to a FARG which is compared to database FARGs using a graph matching algorithm. The system was tested on a database of natural images and performance was quantified using standard measures. It achieved good retrieval results but leaves room for improving retrieval time and reducing semantic gaps between low-level features and human perceptions.
A Novel Feature Extraction Scheme for Medical X-Ray ImagesIJERA Editor
X-ray images are gray-scale images with almost the same textural characteristics, so conventional texture or color features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a novel combination of methods, GLCM, LBP, and HOG, for extracting distinctive invariant features from X-ray images in the IRMA (Image Retrieval in Medical Applications) database, which can be used to perform reliable matching between different views of an object or scene. GLCM represents the distributions of intensities and information about the relative positions of neighboring pixels in an image. The LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, holding edge information in plural cells. These features have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent experimental results obtained in true problems of rotation invariance, at particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation-invariant local binary patterns.
The document proposes a new feature descriptor called Local Bit-Plane Wavelet Pattern (LBWP) to improve content-based retrieval of biomedical images like CT and MRI scans. LBWP encodes relationships between pixel intensities in different bit planes and applies a wavelet function, capturing more fine-grained image details than prior methods. Evaluation on a dataset from The Cancer Imaging Archive showed LBWP outperformed existing approaches like Local Wavelet Pattern with higher average retrieval precision, rate, and F-score.
Implementation of Fuzzy Logic for the High-Resolution Remote Sensing Images w...IOSR Journals
This document describes an implementation of fuzzy logic for high-resolution remote sensing image classification with improved accuracy. It discusses using an object-based approach with fuzzy rules to classify urban land covers in a satellite image. The approach involves image segmentation using k-means clustering or ISODATA clustering. Features are then extracted from the image objects and fuzzy logic is applied to classify the objects based on membership functions. The method was tested on different sensor and resolution images in MATLAB and showed improved classification accuracy over other techniques, achieving lower entropy in results. Future work planned includes designing an unsupervised classification model combining k-means clustering and fuzzy-based object orientation.
A Review on Matching For Sketch TechniqueIOSR Journals
This document summarizes several techniques for sketch-based image retrieval. It discusses methods using SIFT features, HOG descriptors, color segmentation, and gradient orientation histograms. It also reviews applications of these techniques to domains like facial recognition, graffiti matching, and tattoo identification for law enforcement. The techniques aim to extract visual features from sketches that can be used to match and retrieve similar images from databases. While achieving good results, the methods have limitations regarding database size and specificity, and accuracy with complex textures and shapes. Overall, the review examines advances in using sketches as queries for image retrieval.
Image hashing is an efficient way to handle digital data authentication problem. Image hashing represents quality summarization of image features in compact manner. In this paper, the modified center symmetric local binary pattern (CSLBP) image hashing algorithm is proposed. Unlike CSLBP 16 bin histogram, Modified CSLBP generates 8 bin histogram without compromise on quality to generate compact hash. It has been found that, uniform quantization on a histogram with more bin results in more precision loss. To overcome quantization loss, modified CSLBP generates the two histogram of a four bin. Uniform quantization on a 4 bin histogram results in less precision loss than a 16 bin histogram. The first generated histogram represents the nearest neighbours and second one is for the diagonal neighbours. To enhance quality in terms of discrimination power, different weight factor are used during histogram generation. For the nearest and the diagonal neighbours, two local weight factors are used. One is the Standard Deviation (SD) and other is the Laplacian of Gaussian (LoG). Standard deviation represents a spread of data which captures local variation from mean. LoG is a second order derivative edge detection operator which detects edges well in presence of noise. The proposed algorithm is resilient to the various kinds of attacks. The proposed method is tested on database having malicious and non-malicious images using benchmark like NHD and ROC which confirms theoretical analysis. The experimental results shows good performance of the proposed method for various attacks despite the short hash length.
Speeded-up and Compact Visual Codebook for Object RecognitionCSCJournals
The well known framework in the object recognition literature uses local information extracted at several patches in images which are then clustered by a suitable clustering technique. A visual codebook maps the patch-based descriptors into a fixed-length vector in histogram space to which standard classifiers can be directly applied. Thus, the construction of a codebook is an important step which is usually done by cluster analysis. However, it is still difficult to construct a compact codebook with reduced computational cost. This paper evaluates the effectiveness and generalisation performance of the Resource-Allocating Codebook (RAC) approach that overcomes the problem of constructing fixed size codebooks that can be used at any time in the learning process and the learning patterns do not have to be repeated. It either allocates a new codeword based on the novelty of a newly seen pattern, or adapts the codebook to fit that observation. Furthermore, we improve RAC to yield codebooks that are more compact. We compare and contrast the recognition performance of RAC evaluated with two distinctive feature descriptors: SIFT and SURF and two clustering techniques: K-means and Fast Reciprocal Nearest Neighbours (fast-RNN) algorithms. SVM is used in classifying the image signatures. The entire visual object recognition pipeline has been tested on three benchmark datasets: PASCAL visual object classes challenge 2007, UIUC texture, and MPEG-7 Part-B silhouette image datasets. Experimental results show that RAC is suitable for constructing codebooks due to its wider span of the feature space. Moreover, RAC takes only one-pass through the entire data that slightly outperforms traditional approaches at drastically reduced computing times. The modified RAC performs slightly better than RAC and gives more compact codebook. 
Future research should focus on designing more discriminative and compact codebooks such as RAC rather than focusing on methods tuned to achieve high performance in classification.
Information and Knowledge Management www.iiste.org
ISSN 2224-5758 (Paper) ISSN 2224-896X (Online)
Vol 1, No.1, 2011
Graph Cut Based Local Binary Patterns for Content Based Image Retrieval
Dilkeshwar Pandey
Department of Mathematics
Deen Bandhu Chotu Ram University of Science & Tech.
Murthal, Haryana, India
E-mail: dilkeshwar@hotmail.com

Rajive Kumar
Department of Mathematics, Deen Bandhu Chotu Ram University of Science & Tech.
Murthal, Haryana, India
E-mail: 2rajiv_kansal@yahoo.com
Abstract
In this paper, a new algorithm based on graph cut theory and local binary patterns (LBP) for
content-based image retrieval (CBIR) is proposed. In graph cut theory, each node is compared with all
other nodes for edge map generation. The same concept is utilized in the LBP calculation, which
generates nine LBP patterns from a given 3×3 pattern. Finally, nine LBP histograms are calculated and
used as a feature vector for image retrieval. Two experiments have been carried out to demonstrate the
effectiveness of our algorithm. The databases considered for the experiments are the Brodatz database
(DB1) and the MIT VisTex database (DB2). The results show a significant improvement in the
evaluation measures as compared to LBP and other existing transform-domain techniques.
Keywords: Feature Extraction; Local Binary Patterns; Image Retrieval
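For background, the standard LBP operator that the proposed method builds on can be sketched as follows. This is a minimal NumPy sketch of the plain 3×3 LBP histogram, not the graph cut variant proposed in this paper; the function name, bit ordering, and normalisation are illustrative assumptions.

```python
import numpy as np

def lbp_histogram(image):
    """Compute a basic 256-bin LBP histogram for a grayscale image.

    Each interior pixel is compared with its 8 neighbours in a 3x3
    window; a neighbour >= centre contributes a 1-bit, and the 8 bits
    are weighted by powers of two to form an LBP code in [0, 255].
    """
    img = np.asarray(image, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # 8 neighbour offsets, clockwise starting from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes += (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised histogram as a feature vector
```

The histogram produced this way is the feature vector that retrieval systems compare between a query image and database images; the proposed GCLBP replaces the single 3×3 comparison with nine graph cut inspired comparisons.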
1. Introduction
With the rapid expansion of worldwide networks and advances in information technology, there is an
explosive growth of multimedia databases and digital libraries. This demands an effective tool that
allows users to search and browse efficiently through such large collections. In many areas of
commerce, government, academia, hospitals, entertainment, and crime prevention, large collections of
digital images are being created. Usually, the only way of searching these collections was by keyword
indexing, or simply by browsing. However, as the databases grew larger, people realized that the
traditional keyword-based methods to retrieve a particular image in such a large collection are
inefficient. To describe images with keywords with a satisfying degree of concreteness and detail, we
need a very large and sophisticated keyword system, typically containing several hundred different
keywords. One of the serious drawbacks of this approach is the need for trained personnel not only to
attach keywords to each image (which may take several minutes for a single image) but also to retrieve
images by selecting keywords, since we usually need to know all keywords to choose good ones.
Further, such a keyword-based approach is strongly influenced by subjective decisions about image
content, and it is very difficult to change a keyword-based system afterwards. Therefore, new
techniques are needed to overcome these limitations. Digital image databases, however, open the way
to content-based searching. It is a common saying that an image is worth a thousand words. So instead
of manual annotation by text-based keywords, images should be indexed by their own visual contents,
such as color, texture, and shape. The main advantage of this method is its ability to support visual
queries. Hence researchers turned their attention to content-based image retrieval (CBIR) methods.
Several methods achieving effective feature extraction have been proposed in the literature [Rui et al.,
Smeulders et al., Kokare et al., and Liu et al.].
Swain et al. proposed the concept of the color histogram in 1991 and also introduced the histogram intersection
distance metric to measure the distance between the histograms of images. Stricker et al. used the first three
central moments (mean, standard deviation, and skewness) of each color for image retrieval. Pass et al.
introduced the color coherence vector (CCV). CCV partitions each histogram bin into two types: coherent,
if it belongs to a large uniformly colored region, and incoherent, if it does not. Huang et al. used a new color
feature called the color correlogram, which characterizes not only the color distributions of pixels but also the spatial
correlation of pairs of colors. Lu et al. proposed a color feature based on vector quantized (VQ) index histograms
in the discrete cosine transform (DCT) domain. They computed 12 histograms, four for each color component,
from 12 DCT-VQ index sequences.
Texture is another salient and indispensable feature for CBIR. Smith et al. used the mean and variance of
wavelet coefficients as texture features for CBIR. Moghaddam et al. proposed the Gabor wavelet correlogram
(GWC) for CBIR. Ahmadian et al. used the wavelet transform for texture classification. Moghaddam et al.
introduced a new algorithm called the wavelet correlogram (WC). Saadatmand et al. improved the performance of
the WC algorithm by optimizing the quantization thresholds using a genetic algorithm (GA). Birgale et al. and
Subrahmanyam et al. combined color (color histogram) and texture (wavelet transform) features for CBIR.
Subrahmanyam et al. proposed a correlogram algorithm for image retrieval using wavelets and rotated wavelets
(WC+RWC).
The recently proposed local binary pattern (LBP) features are designed for texture description. Ojala et al.
proposed the LBP and converted it to a rotation-invariant version for texture classification. Pietikainen et
al. proposed rotation-invariant texture classification using feature distributions. Ahonen et al. and Zhao et
al. used the LBP operator for facial expression analysis and recognition. Heikkila et al. proposed background
modeling and detection using LBP. Huang et al. proposed the extended LBP for shape localization. Heikkila
et al. used the LBP for interest region description. Li et al. used a combination of Gabor filters and LBP for
texture segmentation. Zhang et al. proposed the local derivative pattern for face recognition. They considered
LBP as a nondirectional first-order local pattern, i.e., the binary result of the first-order derivative in images.
To improve retrieval performance in terms of retrieval accuracy, in this paper we propose the graph cut
based local binary patterns (GCLBP) for CBIR. Two experiments have been carried out on the Brodatz and MIT
VisTex databases to prove the worth of our algorithm. The results show a significant improvement in the
evaluation measures as compared to LBP and other existing transform domain techniques.
The organization of the paper is as follows. Section 1 gives a brief review of image retrieval and related work.
Section 2 presents a concise review of local binary patterns (LBP). Section 3 presents the feature
extraction, the proposed system framework, and the similarity measure. Experimental results and discussion are
given in Section 4. Conclusions are drawn in Section 5.
2. Local Binary Patterns
Ojala et al. proposed the local binary pattern (LBP) operator, which describes the surroundings of a pixel by
generating a bit code from the binary derivatives of a pixel, as a complementary measure of local image
contrast. The LBP operator takes the eight neighboring pixels and uses the center gray value as a threshold. The
operator generates a binary code of 1 if a neighbor is greater than or equal to the center, and 0 otherwise.
The eight binary codes of the neighbors can be represented by an 8-bit number. The LBP operator outputs for
all pixels in the image can be accumulated to form a histogram. Fig. 1 shows an example of the LBP operator.
For a given center pixel in the image, the LBP value is computed by comparing its gray value with those of its neighborhood:
\[
\mathrm{LBP}_{P,R} = \sum_{i=0}^{P-1} 2^i \, f(g_i - g_c) \tag{1}
\]
\[
f(x) = \begin{cases} 1, & x \ge 0 \\ 0, & \text{else} \end{cases} \tag{2}
\]
where g_c is the gray value of the center pixel, g_i is the gray value of its neighbors, P is the number of
neighbors, and R is the radius of the neighborhood. Fig. 2 shows examples of circular neighbor sets for
different configurations of (P, R).
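Equations (1) and (2) can be illustrated with a short sketch for the (P, R) = (8, 1) case, where the eight neighbors form the 3×3 window around the center pixel (the function name and the neighbor ordering are illustrative, not taken from the paper):

```python
def lbp_code(patch):
    """LBP code of Eqs. (1)-(2) for a 3x3 patch with (P, R) = (8, 1).

    `patch` is a 3x3 list of gray values; neighbors are read clockwise
    starting from the top-left corner and thresholded against the center.
    """
    gc = patch[1][1]
    # clockwise neighbor coordinates, starting at the top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for i, (r, c) in enumerate(coords):
        if patch[r][c] - gc >= 0:   # f(g_i - g_c) of Eq. (2)
            code += 2 ** i          # weight 2^i of Eq. (1)
    return code

print(lbp_code([[6, 5, 2], [7, 6, 1], [9, 8, 7]]))  # → 241
```

Any fixed neighbor ordering works, as long as it is used consistently across the whole image.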
The LBP measures the local structure by assigning unique identifiers (the binary numbers) to various micro-
structures in the image. Thus, LBP captures many structures in one unified framework. In the example in Fig.
3(b), the local structure is a vertical edge with a leftward intensity gradient. Other microstructures, e.g., corners
and spots, are assigned different LBP codes, as illustrated in Fig. 4. By varying the radius R and the number of
samples P, the structures are measured at different scales, and LBP allows measuring large-scale structures
without the smoothing effects that occur, e.g., with Gaussian-based filters.
Fig. 1: LBP calculation for 3×3 pattern
Fig. 2: Circular neighborhood sets for different (P,R)
Fig. 3. Illustration of LBP. (a) The LBP filter is defined by two parameters; the circle radius R and the number of
samples P on the circle. (b) Local structure is measured w.r.t. a given pixel by placing the center of the circle in
the position of that pixel. (c) Samples on the circle are binarized by thresholding with the intensity in the center
pixel as threshold value. Black is zero and white is one. The example image shown in (b) has an LBP code of
124. (d) Rotating the example image in (b) 90° clockwise reduces the LBP code to 31, which is the smallest
possible code for this binary pattern. This principle is used to achieve rotation invariance.
Fig. 4: Various microstructures measured by LBP. The gray circle indicates the center pixel. Black and white
circles are binarized samples; black is zero and white is one.
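The rotation-invariance principle described in Fig. 3(d), mapping each 8-bit code to the smallest value among its circular bit rotations, can be sketched as follows (a generic illustration, not the authors' implementation):

```python
def rotation_invariant(code, bits=8):
    """Minimum over all circular bit rotations of an LBP code, as in
    Fig. 3(d): rotating the image rotates the bit string, so the minimum
    is the same for all rotated versions of a pattern."""
    best = code
    for _ in range(bits - 1):
        # rotate right by one position within `bits` bits
        code = ((code >> 1) | ((code & 1) << (bits - 1))) & ((1 << bits) - 1)
        best = min(best, code)
    return best

print(rotation_invariant(124))  # the Fig. 3 example: 124 → 31
```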
After identifying the LBP pattern of each pixel (j, k), the whole image is represented by building a histogram:
\[
H_{\mathrm{LBP}}(l) = \sum_{j=1}^{N_1} \sum_{k=1}^{N_2} f\left( \mathrm{LBP}(j,k),\, l \right); \quad l \in [0,\, 2^P - 1] \tag{3}
\]
\[
f(x, y) = \begin{cases} 1, & x = y \\ 0, & \text{else} \end{cases} \tag{4}
\]
where the size of the input image is \(N_1 \times N_2\).
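Equation (3) accumulates the per-pixel codes over the whole image; a minimal sketch (illustrative names; border pixels are skipped so that each pixel has a full 3×3 neighborhood):

```python
def lbp_histogram(image, P=8):
    """H_LBP of Eq. (3): count how often each code l in [0, 2^P - 1]
    occurs; interior pixels only, so every pixel has eight neighbors."""
    # clockwise neighbor offsets, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * (2 ** P)
    for j in range(1, len(image) - 1):
        for k in range(1, len(image[0]) - 1):
            gc = image[j][k]
            code = sum(2 ** i for i, (dj, dk) in enumerate(offsets)
                       if image[j + dj][k + dk] >= gc)  # Eqs. (1)-(2)
            hist[code] += 1
    return hist

h = lbp_histogram([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(sum(h))  # → 1 (the single interior pixel)
```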
3. Feature Extraction
The weighted graph with no self-loops (Li Xi et al.) is G = (V, E, W), where V = {1, 2, ..., N} is the node set
(N = m·n is the total number of pixels in Q ∈ R^{m×n}), E ⊆ V × V is the edge set, and W = (w_ij)_{N×N} is
an affinity matrix whose element w_ij is the edge weight between nodes i and j.
Based on the above graph cut theory, we compare each pixel of the 3×3 pattern with the remaining eight pixel
gray values to generate a binary code. Finally, nine LBP patterns are collected for the LBP histogram calculation,
and these are used as a feature vector for image retrieval. The flowchart of the proposed system is shown in Fig. 5
and the algorithm is given below:
3.1 Proposed System Framework (GCLBP)
Algorithm:
Input: Image; Output: Retrieval Result
1. Load the input image.
2. Collect the 3×3 pattern for a center pixel i.
• Construct the graph cut for 3×3 pattern.
• Generate nine LBP patterns.
• Go to next center pixel.
3. Calculate the graph cut LBP (GCLBP) histograms.
4. Form the feature vector by concatenating the nine LBP features.
5. Calculate the best matches using Eq. (5).
6. Retrieve the number of top matches.
Fig. 5: Proposed system framework
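Our reading of steps 2-4 can be sketched as follows: in each 3×3 window, every one of the nine pixels is compared with the remaining eight (the complete-graph comparison suggested by the graph cut view), giving nine codes per window and nine histograms. This is a hypothetical illustration of the GCLBP idea, not the authors' code:

```python
def gclbp_features(image):
    """Sketch of GCLBP feature extraction (steps 2-4): each of the nine
    pixels of a 3x3 window acts in turn as the threshold against the
    remaining eight, yielding nine 8-bit codes per window and nine
    256-bin histograms, concatenated into one feature vector."""
    hists = [[0] * 256 for _ in range(9)]
    for j in range(1, len(image) - 1):
        for k in range(1, len(image[0]) - 1):
            # the nine pixels of the window, in row-major order
            win = [image[j + dj][k + dk]
                   for dj in (-1, 0, 1) for dk in (-1, 0, 1)]
            for p in range(9):              # pixel p plays the "center"
                rest = win[:p] + win[p + 1:]
                code = sum(2 ** i for i, g in enumerate(rest) if g >= win[p])
                hists[p][code] += 1
    return [b for h in hists for b in h]    # step 4: concatenation

fv = gclbp_features([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(len(fv), sum(fv))  # → 2304 9
```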
3.2 Similarity Measurement
In the presented work, the d1 similarity distance metric is used, as shown below:
\[
D(Q, I_1) = \sum_{i=1}^{L_g} \frac{\left| f_{I_1,i} - f_{Q,i} \right|}{1 + f_{I_1,i} + f_{Q,i}} \tag{5}
\]
where Q is the query image, L_g is the feature vector length, I_1 is an image in the database, f_{I_1,i} is the i-th
feature of image I_1 in the database, and f_{Q,i} is the i-th feature of the query image Q.
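A minimal sketch of Eq. (5) (the helper name is illustrative):

```python
def d1_distance(f_query, f_image):
    """d1 similarity metric of Eq. (5) between a query feature vector
    and a database feature vector; smaller means more similar."""
    return sum(abs(fi - fq) / (1 + fi + fq)
               for fi, fq in zip(f_image, f_query))

# identical vectors are at distance 0
print(d1_distance([0.2, 0.5, 0.3], [0.2, 0.5, 0.3]))  # → 0.0
```

The denominator 1 + f_{I_1,i} + f_{Q,i} normalizes each term, so features with large magnitudes do not dominate the sum.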
4. Experimental Results and Discussions
For the work reported in this paper, retrieval tests are conducted on two different databases (Brodatz and MIT
VisTex) and the results are presented separately.
4.1. Database (DB1)
The database DB1 used in our experiment consists of 116 different textures: 109 textures from the Brodatz
texture photographic album [Brodatz P.] and seven textures from the USC database
[http://sipi.usc.edu/database/]. The size of each texture is 512 × 512, and each is further divided into sixteen
128 × 128 non-overlapping sub-images, thus creating a database of 1856 (116 × 16) images.
\[
\text{Precision}\ (P) = \frac{\text{No. of relevant images retrieved}}{\text{Total no. of images retrieved}} \times 100 \tag{6}
\]
\[
\text{Group Precision}\ (GP) = \frac{1}{N_1} \sum_{i=1}^{N_1} P \tag{7}
\]
\[
\text{Average Retrieval Precision}\ (ARP) = \frac{1}{\Gamma_1} \sum_{j=1}^{\Gamma_1} GP \tag{8}
\]
\[
\text{Recall}\ (R) = \frac{\text{No. of relevant images retrieved}}{\text{Total no. of relevant images}} \tag{9}
\]
\[
\text{Group Recall}\ (GR) = \frac{1}{N_1} \sum_{i=1}^{N_1} R \tag{10}
\]
\[
\text{Average Retrieval Rate}\ (ARR) = \frac{1}{\Gamma_1} \sum_{j=1}^{\Gamma_1} GR \tag{11}
\]
where \(N_1\) is the number of relevant images and \(\Gamma_1\) is the number of groups.
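The evaluation measures (6)-(11) can be sketched for one query as follows (names and the toy setup are illustrative; ARP and ARR then average these values first over the queries of a group and then over all groups):

```python
def precision_recall(retrieved_groups, query_group, n_relevant):
    """Eqs. (6) and (9) for one query: precision (%) over the retrieved
    set and recall over the n_relevant images of the query's group."""
    hits = sum(1 for g in retrieved_groups if g == query_group)
    precision = 100.0 * hits / len(retrieved_groups)
    recall = hits / n_relevant
    return precision, recall

# DB1-style setup: 16 relevant images per group; of the top 16 retrieved
# images, 12 belong to the query's group
p, r = precision_recall(['a'] * 12 + ['b'] * 4, 'a', 16)
print(p, r)  # → 75.0 0.75
```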
Table 1: Retrieval results of the proposed method (GCLBP) and LBP in terms of average retrieval precision
(ARP) (%); columns give the number of top matches considered.

Method    1     3      5      7      9      11     13     15     16
LBP       100   89.17  84.67  81.71  79.01  76.33  73.86  71.18  69.65
GCLBP     100   93.19  89.73  87.27  85.02  82.71  80.47  77.88  76.45

Table 2: Retrieval results of the proposed method (GCLBP), LBP, and other transform domain techniques in
terms of average retrieval rate (ARR) (%); columns give the number of top matches considered.

Method    16     32     48     64     80     96     112
LBP       69.65  80.16  84.47  87.05  89.02  90.44  91.63
GCLBP     76.45  84.57  87.85  89.79  91.13  92.18  93.03
DT-CWT    74.16  83.83  87.13  89.11  90.48  91.48  92.30
DT-RCWT   72.33  80.88  84.32  86.28  87.82  88.98  89.92
Table 3: Performance of the proposed method (GCLBP) with different distance measures in terms of average
retrieval rate (ARR) (%); columns give the number of top matches considered.

Distance measure   16     32     48     64     80     96     112
Manhattan          79.89  86.82  89.61  91.30  92.46  93.34  94.04
Canberra           77.73  85.09  88.24  90.02  91.37  92.32  93.07
Euclidean          78.81  85.59  88.43  90.24  91.54  92.47  93.25
d1                 76.45  84.57  87.85  89.79  91.13  92.18  93.03
Fig. 6: Comparison of the proposed method (GCLBP) with LBP on the DB1 database in terms of ARP
Table 1 and Fig. 6 summarize the retrieval results of the proposed method (GCLBP) and LBP in terms of
average retrieval precision, while Table 2 and Fig. 7 illustrate the performance of the proposed method (GCLBP),
LBP, and other transform domain techniques in terms of average retrieval rate. Table 3 and Fig. 8 summarize the
performance of the proposed method (GCLBP) with different distance measures in terms of average retrieval rate.
Fig. 7: Comparison of proposed method (GCLBP) with: (a) LBP on DB1 database in terms of ARR, (b) with
LBP and other transform domain features on DB1 database in terms of ARR.
From Tables 1 to 3 and Figs. 6 to 8, the following can be observed:
1. The average retrieval precision of the proposed method (GCLBP) (100% to 76.45%) is higher than that of
LBP (100% to 69.65%).
2. The average retrieval rate of GCLBP (76.45% to 93.03%) is higher than that of LBP (69.65% to
91.63%), DT-CWT (74.16% to 92.30%), and DT-RCWT (72.33% to 89.92%).
3. The performance of the proposed method with the Manhattan distance (79.89% to 94.04%) is better than
with the Canberra (77.73% to 93.07%), Euclidean (78.81% to 93.25%), and d1 (76.45% to 93.03%)
distances.
From these observations, it is clear that the proposed method outperforms LBP and other transform domain
techniques. Fig. 9 illustrates the retrieval results for query images using the proposed method (GCLBP).
Fig. 8: Performance of proposed method (GCLBP) with different distance measures on DB1 database in terms
of ARR.
4.2. Database DB2
The database DB2 used in our experiment consists of 40 different textures
[http://vismod.www.media.mit.edu]. The size of each texture is 512 × 512. Each 512 × 512 image is divided into
sixteen 128 × 128 non-overlapping sub-images, thus creating a database of 640 (40 × 16) images. The
performance of the proposed method is measured in terms of ARP and ARR.
Table 4: Retrieval results of the proposed method (GCLBP) and LBP in terms of average retrieval precision
(ARP) (%); columns give the number of top matches considered.

Method    1     3      5      7      9      11     13     15     16
LBP       100   93.85  90.90  88.37  85.45  82.69  79.85  76.35  74.39
GCLBP     100   97.13  95.25  93.05  90.45  87.52  84.87  81.46  79.44
Fig. 9: Retrieval results of proposed method (GCLBP) of query image: (a) 1, (b) 724, and (c) 1850 of database
DB1.
Table 4 and Fig. 10 summarize the retrieval results of the proposed method (GCLBP) and LBP in terms of
average retrieval precision, while Table 5 and Fig. 11 illustrate their performance in terms of average retrieval
rate. Table 6 and Fig. 12 summarize the performance of the proposed method (GCLBP) with different distance
measures in terms of average retrieval rate.
From Tables 4 to 6 and Figs. 10 to 12, the following can be observed:
1. The average retrieval precision of the proposed method (GCLBP) (100% to 79.44%) is higher than that of
LBP (100% to 74.39%).
2. The average retrieval rate of GCLBP (79.44% to 97.24%) is higher than that of LBP (74.39% to
97.08%).
3. The performance of the proposed method with the d1 distance (79.44% to 97.24%) is better than with the
Canberra (74.70% to 93.48%), Euclidean (80.07% to 97.20%), and Manhattan (80.47% to 95.46%)
distances.
From these observations, it is clear that the proposed method outperforms LBP and other transform domain
techniques.
Fig. 10: Comparison of the proposed method (GCLBP) with LBP on the DB2 database in terms of ARP
Table 5: Retrieval results of the proposed method (GCLBP) and LBP in terms of average retrieval rate (ARR)
(%); columns give the number of top matches considered.

Method    16     32     48     64     80     96     112
LBP       74.39  86.69  91.14  93.77  95.35  96.36  97.08
GCLBP     79.44  88.36  92.00  94.22  95.54  96.53  97.24
Fig. 11: Comparison of proposed method (GCLBP) with LBP on DB2 database in terms of ARR.
Fig. 12: Performance of proposed method (GCLBP) with different distance measures on DB2 database in terms
of ARR.
Table 6: Performance of the proposed method (GCLBP) with different distance measures in terms of average
retrieval rate (ARR) (%); columns give the number of top matches considered.

Distance measure   16     32     48     64     80     96     112
Manhattan          80.47  88.62  91.56  93.21  94.23  94.97  95.46
Canberra           74.70  84.57  88.26  90.58  92.00  92.81  93.48
Euclidean          80.07  88.44  92.07  94.17  95.56  96.52  97.20
d1                 79.44  88.36  92.00  94.22  95.54  96.53  97.24
5. Conclusion
A new algorithm based on graph cut theory and local binary patterns (LBP) for content based image
retrieval (CBIR) has been proposed in this paper. The proposed method extracts nine LBP patterns from a given
3×3 pattern and uses them as features. Two experiments have been carried out to prove the worth of
our algorithm. The results show a significant improvement in the evaluation measures as compared to LBP and
other existing transform domain techniques.
References
Ahonen T., Hadid A., Pietikainen M., Face description with local binary patterns: Applications to face
recognition, IEEE Trans. Pattern Anal. Mach. Intell., 28 (12): 2037-2041, 2006.
Ahmadian A., Mostafa A. (2003), An Efficient Texture Classification Algorithm using Gabor wavelet, 25th
Annual international conf. of the IEEE EMBS, Cancun, Mexico, 930-933.
Birgale L., Kokare M., Doye D. (2006), Color and Texture Features for Content Based Image Retrieval,
International Conf. Computer Graphics, Image and Visualisation, Washington, DC, USA, 146-149.
Brodatz P. (1966), "Textures: A Photographic Album for Artists and Designers," New York: Dover.
Heikkila M., Pietikainen M., A texture based method for modeling the background and detecting moving objects,
IEEE Trans. Pattern Anal. Mach. Intell., 28 (4): 657-662, 2006.
Heikkila M., Pietikainen M., Schmid C., Description of interest regions with local binary patterns, Elsevier J.
Pattern Recognition, 42: 425-436, 2009.
Huang J., Kumar S. R., and Mitra M., Combining supervised learning with color correlograms for content-based
image retrieval, Proc. 5th ACM Multimedia Conf., (1997) 325–334.
Kokare M., Chatterji B. N., Biswas P. K., A survey on current content based image retrieval methods, IETE J.
Res., 48 (3&4) 261–271, 2002.
Liu Ying, Dengsheng Zhang, Guojun Lu, Wei-Ying Ma, A survey of content-based image retrieval with
high-level semantics, Elsevier J. Pattern Recognition, 40, 262-282, 2007.
Li M., Staunton R. C., Optimum Gabor filter design and local binary patterns for texture segmentation, Elsevier
J. Pattern Recognition, 29: 664-672, 2008.
Li Xi, Hu Weiming, Zhang Zhongfei, and Wang Hanzi, Heat Kernel Based Local Binary Pattern for Face
Representation, IEEE Signal Processing Letters, 17 (3) 308–311 2010.
Lu Z. M. and Burkhardt H., Colour image retrieval based on DCT domain vector quantization index histograms,
J. Electron. Lett., 41 (17) (2005) 29–30.
MIT Vision and Modeling Group, Vision Texture. [Online]. Available: http://vismod.www.media.mit.edu.
Moghaddam H. A., Khajoie T. T., Rouhi A. H and Saadatmand T. M. (2005), Wavelet Correlogram: A new
approach for image indexing and retrieval, Elsevier J. Pattern Recognition, 38 2506-2518.
Moghaddam H. A. and Saadatmand T. M. (2006), Gabor wavelet Correlogram Algorithm for Image Indexing
and Retrieval, 18th Int. Conf. Pattern Recognition, K.N. Toosi Univ. of Technol., Tehran, Iran, 925-928.
Moghaddam H. A., Khajoie T. T. and Rouhi A. H. (2003), A New Algorithm for Image Indexing and Retrieval
Using Wavelet Correlogram, Int. Conf. Image Processing, K.N. Toosi Univ. of Technol., Tehran, Iran, 2 497-
500.
Ojala T., Pietikainen M., Harwood D., A comparative study of texture measures with classification based on
feature distributions, Elsevier J. Pattern Recognition, 29 (1): 51-59, 1996.
Ojala T., Pietikainen M., Maenpaa T., Multiresolution gray-scale and rotation invariant texture classification
with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell., 24 (7): 971-987, 2002.
Pass G., Zabih R., and Miller J., Comparing images using color coherence vectors, Proc. 4th ACM Multimedia
Conf., Boston, Massachusetts, US, (1997) 65–73.
Pietikainen M., T. Ojala, T. Scruggs, K. W. Bowyer, C. Jin, K. Hoffman, J. Marques, M. Jacsik, W. Worek,
Overview of the face recognition using feature distributions, Elsevier J. Pattern Recognition, 33 (1): 43-52,
2000.
Rui Y. and Huang T. S., Image retrieval: Current techniques, promising directions and open issues, J. Vis.
Commun. Image Represent., 10 (1999) 39-62.
Saadatmand T. M. and Moghaddam H. A., Enhanced Wavelet Correlogram Methods for Image Indexing and
Retrieval, IEEE Int. Conf. Image Processing, K.N. Toosi Univ. of Technol., Tehran, Iran, (2005) 541-544.
Saadatmand T. M. and Moghaddam H. A., A Novel Evolutionary Approach for Optimizing Content Based
Image Retrieval, IEEE Trans. Systems, Man, and Cybernetics, 37 (1) (2007) 139-153.
Smeulders A. W.M., Worring M., Santini S., Gupta A., and Jain R., Content-based image retrieval at the end of
the early years, IEEE Trans. Pattern Anal. Mach. Intell., 22 (12) 1349–1380, 2000.
Smith J. R. and Chang S. F., Automated binary texture feature sets for image retrieval, Proc. IEEE Int. Conf.
Acoustics, Speech and Signal Processing, Columbia Univ., New York, (1996) 2239–2242.
Stricker M. and Oreng M., Similarity of color images, Proc. SPIE, Storage and Retrieval for Image and Video
Databases, (1995) 381–392.
Subrahmanyam M., Gonde A. B. and Maheshwari R. P., Color and Texture Features for Image Indexing and
Retrieval, IEEE Int. Advance Computing Conf., Patiala, India, (2009) 1411-1416.
Subrahmanyam Murala, Maheshwari R. P., Balasubramanian R., A Correlogram Algorithm for Image Indexing
and Retrieval Using Wavelet and Rotated Wavelet Filters, Int. J. Signal and Imaging Systems Engineering.
Swain M. J. and Ballard D. H., Indexing via color histograms, Proc. 3rd Int. Conf. Computer Vision, Rochester
Univ., NY, (1991) 11-32.
Tan X. and Triggs B., Enhanced local texture feature sets for face recognition under difficult lighting conditions,
IEEE Trans. Image Proc., 19 (6): 1635-1650, 2010.
University of Southern California, Signal and Image Processing Institute, Rotated Textures. [Online]. Available:
http://sipi.usc.edu/database/.
Zhao G., Pietikainen M., Dynamic texture recognition using local binary patterns with an application to facial
expressions, IEEE Trans. Pattern Anal. Mach. Intell., 29 (6): 915-928, 2007.