International Journal of Electrical, Electronics and Computers (EEC Journal) [Vol-2, Issue-3, May-Jun 2017]
https://dx.doi.org/10.24001/eec.2.3.1 ISSN: 2456-2319
www.eecjournal.com
Content-Based Image Retrieval by Multi-
Features Extraction and K-Means Clustering
Mostafa G. Saeed1, Fahad Layth Malallah2, Zaid Ahmed Aljawaryy3
1,2 Department of Computer Science, Cihan University / Sulaimaniya, Iraq
3 Faculty of Science and Technology, University of Human Development / Sulaimaniya, Kurdistan Region, Iraq
Abstract— Nowadays, Content-Based Image Retrieval has received massive attention in the image information retrieval literature, and accordingly a broad range of techniques have been proposed. However, these techniques are not free of defects in terms of recognition. In this paper, content-based image retrieval is proposed with a new method of building a feature vector to represent an image for clustering. The vector consists of 140 elements taken from several feature types: color histogram, color moments, Gabor filters, the GLCM matrix, wavelet transformation, Tamura features, and moment invariants. After preparing the feature vector, the K-Means clustering operation is exploited to give the centroid of each image's features. Finally, the Minkowski-form distance and the Euclidean distance are applied as similarity measurements for clustering groups of images having the same characteristics, shapes, and colors. The experiment is run on the SIMPLIcity database, which contains 1000 colored images. The proposed algorithm was evaluated by selecting five random images as query images; a fruitful result was obtained, clustering sets of images as illustrated in the result section of this paper.
Keywords—Image processing, Pattern Recognition,
Machine learning.
I. INTRODUCTION
A picture is worth a thousand words, as human beings are able to tell a story from a picture based on what they see [1]. Recent years have seen a rapid increase in the size of digital image collections. Every day, both military and civilian equipment generate gigabytes of images, so a huge amount of information is out there. However, to access and make use of this information, it must be organized to allow efficient browsing, searching, and retrieval. Image retrieval has been a very active research area since the 1970s. In various computer vision applications, the process of retrieving desired images from a large collection on the basis of features that can be automatically extracted from the images themselves is widely used. These systems, called CBIR (Content-Based Image Retrieval) systems, have received intensive attention in the literature of image information retrieval since this area started years ago, and consequently a broad range of
techniques have been proposed [2]. More and more
images are being readily available to professional and
amateur users because of astonishing advancements in
color imaging technologies. The large numbers of image
collections, available from a variety of sources (digital
camera, digital video, scanner, the internet etc.) have
posed increasing technical challenges to computer
systems to store/transmit and index/manage image data
effectively to make such collections easily available [1]
[2] [3]. In 1991, Swain and Ballard worked on CBIR and proposed histogram intersection, an L1 metric, as the similarity measure for color histograms [4]. In 1994, Niblack and his colleagues introduced an L2-related metric for comparing histograms [5].
Furthermore, considering that most color histograms are
very sparse and thus sensitive to noise, in 1995 Stricker
and Orengo proposed using the cumulated color
histogram. Their research results demonstrated the
advantages of the proposed approach over the
conventional color histogram approach [6]. In the same year, Stricker and Orengo also worked on other color features and proposed using color moments to overcome the quantization effects of the color histogram. Most of the information is concentrated in the low-order moments, so only the first moment (mean) and the second and third central moments (variance and skewness) are used [6]. Also in 1995, to facilitate fast search over large-scale image collections, Smith and Chang proposed color sets as an approximation to the color histogram. A binary search tree was constructed to allow a fast search [7] [8].
In 1990, Gotlieb and Kreyszig studied the statistics of the co-occurrence matrix, originally proposed in 1973 to explore the gray-level spatial dependence of texture, and experimentally found that contrast, inverse difference moment, and entropy had the greatest discriminatory power [9]. In 1993, Chang and Kuo used a tree-structured wavelet transform to explore the middle-band characteristics and further improve classification accuracy [10]. In 1994 and 1996, Smith and Chang used statistics (mean and variance) extracted from the wavelet subbands as the texture representation. This approach achieved over 90% accuracy on the 112 Brodatz
texture images [11]. In 1994, Gross and his colleagues combined the wavelet transform with other techniques to achieve better performance: they used the wavelet transform together with the KL expansion and Kohonen maps to perform texture analysis [12]. In 1992 and 1994, Thyagarajan et al. and Kundu et al. combined the wavelet transform with a co-occurrence matrix to take advantage of both statistics-based and transform-based texture analyses [13]. In 1994, based on the discrete version of Green's theorem, Yang and Albregtsen proposed a fast method of computing moments in binary images, motivated by the fact that most useful invariants had been found through extensive experience and trial-and-error [14].
In terms of feature extraction, feature (content) extraction is the basis of content-based image retrieval. In a broad sense, features may include both text-based features (keywords, annotations) and visual features (color, texture, shape, faces). However, since there already exists a rich literature on text-based feature extraction in the DBMS and information retrieval research communities, we will confine ourselves to techniques of visual feature extraction. Within the visual feature scope, features can be further classified into general features and domain-specific features. The former include color, texture, and shape features, while the latter are application-dependent and may include, for example, human faces and fingerprints.
Regarding applications of CBIR, the technology has been used in several areas such as fingerprint identification, biodiversity information systems, digital libraries, crime prevention, medicine, and historical research, among others. Some of these applications are presented in this section [15]. In terms of medical applications, the use of CBIR can result in
powerful services that can benefit biomedical information
systems. Three large domains can instantly take
advantage of CBIR techniques: teaching, research, and
diagnostics. From the teaching perspective, searching
tools can be used to find important cases to present to
students. Research also can be enhanced by using services
combining image content information with different kinds
of data. For example, scientists can use mining tools to
discover unusual patterns among textual (e.g., treatments
reports, and patient records) and image content
information. Similarity queries based on image content
descriptors can also help the diagnostic process.
Clinicians usually use similar cases for case-based
reasoning in their clinical decision-making process. In this
sense, while textual data can be used to find images of
interest, visual features can be used to retrieve relevant
information for a clinical case (e.g., comments, related literature, HTML pages, etc.) [15].
The objective of this paper is to combine several types of feature extraction operations and then to build a strong feature vector to be input into the K-Means classifier. In other words, the objective is to perform a search based on an image and to output a set or group of images that have the same characteristics as the queried image.
This paper is organized as follows. Section II is dedicated to the literature review related to Content-Based Image Retrieval. In Section III, the methodology is proposed. In Section IV, the experiment and results of this paper are presented. In Section V, the conclusion is presented with future work.
II. LITERATURE REVIEW
Content-based image retrieval (CBIR), also known as
query by image content (QBIC) and content-based visual
information retrieval (CBVIR) is the application of
computer vision techniques to the image retrieval
problem, that is, the problem of searching for digital
images in large databases. "Content-based" means that the
search will analyze the actual contents of the image rather
than the metadata such as keywords, tags, and/or
descriptions associated with the image [16]. CBIR is a
new but widely adopted method for finding images from
vast and unannotated image databases. In CBIR images
are indexed on the basis of low-level features, such as
color, texture, and shape that can automatically be derived
from the visual content of the images [15].
2.1. Image Feature Extraction
Feature extraction is the basis of content-based image
retrieval. In a broad sense, features may include both text-
based features (keywords, annotations) and visual features
such as color, texture, shape, faces. CBIR system is
performed based on a comparison of low level features
such as color, shape, and texture etc. extracted from the
images [17]. In general Image Features can be divided in
to three sub-classes; color features, texture features and
shape features.
In terms of color feature, which is one of the most
important features, makes the recognition of images.
Color is a property that depends on the reflection of light
to the processing of that information in the brain. The
color is used every day to tell the difference between
objects, places, and the time of day. Typically, the color
of an image is represented through some color model.
There are various color models to describe color
information. A color model is specified in terms of 3-D
coordinate system and a subspace within that system
where each color is represented by a single point. The most commonly used color spaces are RGB (red, green, blue), HSV (hue, saturation, value), and YCbCr (luminance and chrominance); thus the color content is characterized by three channels from some color model. One of the color
content representations of an image is a color histogram. Statistically, it denotes the joint probability of the intensities of the three color channels [17].
RGB colors are called primary colors and are additive; by varying their combinations, other colors can be obtained. The HSV space is derived from the RGB color cube, with the main diagonal of the RGB model as the vertical axis in HSV [18]. As saturation varies from 0 to 1, the colors vary from unsaturated (gray) to saturated (no white component). Hue ranges from 0 to 360 degrees, with variation beginning with red, going through yellow, green, cyan, blue, and magenta, and back to red. The RGB and HSV color spaces are represented in Figure 1.
Fig.1: The RGB color space and the HSV color space
These color spaces intuitively correspond to the RGB model, from which they can be derived through linear or non-linear transformations. The YCbCr color space is used in the JPEG and MPEG international coding standards as well as in MPEG-7; the YCbCr color space is demonstrated in Figure 1.
Regarding the color histogram feature, a color histogram for a given image is defined as a vector H = {h[1], h[2], . . . , h[i], . . . , h[N]}, where i represents a color bin, h[i] is the number of pixels of color i in the image, and N is the number of bins in the color histogram, i.e., the number of colors in the adopted color model. In order to compare images of different sizes, color histograms should be normalized [19].
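As a concrete illustration, a normalized joint color histogram can be computed with a few lines of NumPy. The bin count below (4 per channel, 64 joint bins) is an illustrative assumption; the paper does not state its quantization.

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """Normalized joint color histogram of an H x W x 3 uint8 image.

    bins_per_channel is a hypothetical choice (the paper does not state
    its bin count); it yields bins_per_channel**3 joint bins.
    """
    # Quantize each channel value 0..255 into a bin index.
    q = (image.astype(np.int64) * bins_per_channel) // 256
    # Fuse the three per-channel indices into one joint-bin index.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    h = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    # Normalize by the pixel count so differently sized images are comparable.
    return h / idx.size
```

Dividing by the pixel count implements the size-independence requirement just mentioned.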
In terms of color moments, color moments have been successfully used in many retrieval systems. They are measures that can be used to differentiate images based on their color features, relying on statistical methods. Once calculated, these moments
provide a measurement for color similarity between
images. These values of similarity can then be compared
to the values of images indexed in a database for tasks
like image retrieval. Color moments have been proved to
be efficient and effective in representing color
distributions of images [19]. Regarding texture features, texture has been used to classify and recognize objects and to find similarities between images in multimedia databases [19]. Texture is a very useful characterization for a wide range of images; it is generally believed that
human visual systems use texture for recognition and
interpretation. In general, color is usually a pixel property
while texture can only be measured from a group of
pixels. A large number of techniques have been proposed
to extract texture features. Based on the domain from
which the texture feature is extracted, they can be broadly
classified into; spatial texture feature and spectral texture
feature extraction methods. For the former approach,
texture features are extracted by computing the pixel
statistics or finding the local pixel structures in original
image domain, whereas the latter transforms an image
into frequency domain and then calculates feature from
the transformed image [19]. A variety of techniques have
been used for measuring texture such as co-occurrence
matrix, Fractals, Gabor filters, variations of wavelet
transform [20].
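As an example of the co-occurrence approach, the GLCM statistics named earlier (contrast, inverse difference moment, and entropy) can be sketched directly; the horizontal pixel offset and the number of gray levels below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def glcm_features(gray, levels=8, dx=1, dy=0):
    """Contrast, inverse difference moment and entropy from a gray-level
    co-occurrence matrix. The offset (dx=1, dy=0) and the number of
    gray levels are illustrative assumptions."""
    g = np.asarray(gray, dtype=int)
    h, w = g.shape
    glcm = np.zeros((levels, levels))
    # Count pairs of gray levels at pixel p and p + (dx, dy).
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[g[y, x], g[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                         # joint probability matrix
    i, j = np.indices((levels, levels))
    contrast = np.sum((i - j) ** 2 * glcm)
    idm = np.sum(glcm / (1.0 + (i - j) ** 2))  # inverse difference moment
    nz = glcm[glcm > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return contrast, idm, entropy
```

A perfectly uniform image gives zero contrast and zero entropy, as expected from the definitions.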
Gabor filters are the most common method for texture feature extraction and have been widely used for image texture. A Gabor filter is specifically designed to sample the entire frequency domain of an image by characterizing the center
frequency and orientation parameters. The image is
filtered with a bank of Gabor filters or Gabor wavelets of
different preferred spatial frequencies and orientations.
Each wavelet captures energy at a specific frequency and
direction which provide a localized frequency as a feature
vector. Thus, texture features can be extracted from this
group of energy distributions. Given an input image
I(x,y), Gabor wavelet transform convolves I(x,y) with a
set of Gabor filters of different spatial frequencies and
orientations [21].
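A minimal sketch of this idea follows: a bank of real-valued Gabor kernels at a few spatial frequencies and orientations is correlated with a gray-scale image, and the mean absolute response and variance of each response map are kept as features. The kernel size, sigma, frequencies, and orientation count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=2.0, size=9):
    """Real-valued Gabor kernel with spatial frequency `freq`
    (cycles/pixel) and orientation `theta`; sigma and size are
    illustrative defaults."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid runs along orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return gauss * np.cos(2.0 * np.pi * freq * xr)

def gabor_features(gray, freqs=(0.1, 0.25), n_thetas=4):
    """Mean absolute response and variance for each (frequency,
    orientation) pair -- one common summary of the energy distribution."""
    gray = np.asarray(gray, dtype=float)
    feats = []
    for f in freqs:
        for k in range(n_thetas):
            kern = gabor_kernel(f, np.pi * k / n_thetas)
            kh, kw = kern.shape
            # Plain 'valid' cross-correlation, numpy-only for clarity.
            out = np.array([[np.sum(gray[y:y + kh, x:x + kw] * kern)
                             for x in range(gray.shape[1] - kw + 1)]
                            for y in range(gray.shape[0] - kh + 1)])
            feats.extend([np.abs(out).mean(), out.var()])
    return np.array(feats)
```

With two frequencies and four orientations this yields a 16-element texture descriptor; a production implementation would use a library convolution routine rather than the explicit loops here.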
Wavelet transformation gives information about the
variations in the image at different scales. Discrete
Wavelet Transform (DWT) represents an image as a sum
of wavelet functions with different locations (shift) and
scales. Any decomposition of a 1D signal into wavelets involves a pair of waveforms: the high-frequency components correspond to the detailed parts of an image, while the low-frequency components correspond to the smooth parts. The DWT of an image, as a 2D signal, can be derived from a 1D DWT by applying the 1D DWT to every row and then to every column. Any decomposition of a 2D image into wavelets involves four sub-bands: LL (approximation), HL (vertical detail), LH (horizontal detail), and HH (diagonal detail).
The wavelet transform allows for the decomposition of a signal using a series of elemental functions called wavelets and scaling functions, which are created by scalings and translations of a base function known as the mother wavelet [22].
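The row-then-column decomposition described above can be sketched for the Haar wavelet as follows; this simplifying sketch assumes even image dimensions and uses plain averages and differences rather than the orthonormal scaling factor.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT, returning the LL, HL, LH and HH
    sub-bands. Assumes even height and width (a full implementation
    would pad odd sizes)."""
    a = np.asarray(img, dtype=float)
    # 1-D step along rows: pairwise average (low) and difference (high).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Same 1-D step along the columns of each half-band.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, hl, lh, hh
```

For a perfectly smooth (constant) image, all detail sub-bands are zero and the LL band reproduces the constant, matching the low/high frequency interpretation above.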
The Tamura features are designed in accordance with psychological studies on the human perception of texture: coarseness, contrast, directionality, line-likeness, regularity, and roughness. Experiments testing the significance of these features found the first three, coarseness, contrast, and directionality, to be the most important, correlating strongly with human perception [23].
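As one example, Tamura's contrast is commonly computed from the gray-level standard deviation and kurtosis; the sketch below follows that standard definition, which the paper does not spell out.

```python
import numpy as np

def tamura_contrast(gray):
    """Tamura's contrast: sigma / kurtosis**0.25, where kurtosis is the
    fourth standardized moment of the gray-level distribution."""
    g = np.asarray(gray, dtype=float).ravel()
    sigma = g.std()
    if sigma == 0.0:
        return 0.0                      # flat image: no contrast
    kurtosis = np.mean((g - g.mean()) ** 4) / sigma ** 4
    return sigma / kurtosis ** 0.25
```

A flat image scores zero, while a high-variance two-level image scores high, matching the perceptual notion of contrast.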
In terms of shape features, shape is known as an important cue for human beings to identify and recognize real-world objects; its purpose is to encode simple geometrical forms such as straight lines in different directions [16].
Shape descriptors can be divided into two main
categories: region based and contour-based methods.
Region-based methods use the whole area of an object for
shape description, while contour-based methods use only
the information present in the contour of an object. In
retrieval applications, a small set of lower order moments
is used to discriminate among different images. The most
common moments are the geometrical moments, the central moments and the normalized central moments, the moment invariants, the Zernike moments, the Legendre moments (which are based on the theory of orthogonal polynomials), and the complex moments.
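As a sketch of moment invariants, the first two Hu invariants can be computed from normalized central moments as follows (a minimal illustration; libraries such as OpenCV return all seven via `cv2.HuMoments`).

```python
import numpy as np

def hu_first_two(gray):
    """First two Hu moment invariants from normalized central moments."""
    a = np.asarray(gray, dtype=float)
    y, x = np.indices(a.shape)
    m00 = a.sum()
    xc, yc = (x * a).sum() / m00, (y * a).sum() / m00

    def eta(p, q):
        # Normalized central moment eta_pq (translation/scale invariant).
        mu = ((x - xc) ** p * (y - yc) ** q * a).sum()
        return mu / m00 ** (1.0 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return phi1, phi2
```

Because the invariants are symmetric in the x and y axes, transposing (or rotating) the image leaves them unchanged, which is exactly what makes them useful shape descriptors.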
2.2. Classification
After feature extraction is done, a classification stage is applied to the prepared feature vector. In this paper, the retrieval system's classification combines these feature vectors, calculates the similarity between the combined feature vector of the query image and those of the target images, and retrieves a given number of the most similar target images [24]. Different similarity (or so-called distance) measures significantly affect the retrieval performance of an image retrieval system [25]. One of the most popular distance measurements is the standardized Euclidean distance, which is calculated on standardized data, in this case standardized by the standard deviations; it can be calculated as in equation (1).
$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2 / s_i^2}$ (1)
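Equation (1) can be implemented in a few lines; here `s` is the vector of per-feature standard deviations used for standardization.

```python
import numpy as np

def standardized_euclidean(x, y, s):
    """Standardized Euclidean distance of equation (1): each squared
    coordinate difference is divided by that feature's variance
    (standard deviation squared) before summing."""
    x, y, s = (np.asarray(v, dtype=float) for v in (x, y, s))
    return float(np.sqrt(np.sum(((x - y) / s) ** 2)))
```

With all standard deviations equal to 1 this reduces to the plain Euclidean distance.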
In terms of clustering algorithms, K-Means clustering [26] is a commonly used, partitioning-based algorithm. K-Means clustering is used to group n objects into k clusters so as to guarantee the resemblance among objects in the same cluster and the dissimilarity among samples in different clusters.
III. METHODOLOGY
The methodology consists of three steps: collecting an image database, feature extraction, and the K-Means clustering algorithm with a similarity measurement, as shown in Figure 2.
Fig.2: Block diagram for image retrieval system
3.1. Database and data pre-processing
Fig.3: Image categorization
In this paper, the dataset used is the database named SIMPLIcity, as referenced in [27], which contains 1000 coloured images. The database was downloaded from the website given in [27]. The sizes of the images in this database are either 256 × 384 or 384 × 256 pixels. The images are of several types and various kinds, as shown in Figure 3, in order to generalize this study and obtain more accurate results. For instance, the database has various image categories covering people, animals, different colors, landscape groups, structure groups, flower groups, and shape groups.
3.2. Feature Extraction
The feature extraction process aims to describe each
image in the database in terms of low level features.
Feature extraction is a fundamental component in a CBIR
system. This module actually runs both in the pre-processing stage and at the time when a user queries the system with an image. The objective of feature
extraction is to automatically determine a set of features
to describe each image. In this step, the features of images
data are extracted from images. These low level features,
known as descriptors, are used to provide similarity
measures between different images. Descriptors are
typically smaller in size compared to the original image.
The feature extraction flowchart is illustrated in Figure 2.
It is worth mentioning that each image is represented by a feature vector to be input to the K-Means clustering. Overall, each image's feature vector has 140 elements, generated as a combination of the following features: color histogram, color moments, Gabor filters, the GLCM matrix, wavelet transformation, Tamura features, and moment invariants.
In detail: the color histogram feature values are obtained after converting the image to HSV; the color moments feature values are obtained by reading an image, dividing it into four segments, calculating the color variance of each segment, and then combining the segment variances into one single variance; the Gabor filter values are taken from the RGB image after converting it to a gray-scale image; for the GLCM matrix, the RGB image is converted to 2D gray-scale format, and the means and variances of all the parameters are taken; the wavelet transformation values are found by reading the RGB image, resizing it to M×N with M = N = 256 (where M is the number of rows and N the number of columns) without information loss, converting it to 2D gray-scale format, decomposing the image into sub-images, and then extracting the feature vector; finally, the Tamura features and moment invariants are obtained by converting the RGB image to grayscale and then applying a Haar filter to it.
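As an illustration of the color-moments step just described, the sketch below splits an image into four quadrants and combines the per-quadrant color variances; averaging them is an assumption, since the paper does not specify the exact combination rule.

```python
import numpy as np

def color_moment_features(image):
    """Quadrant-variance color moment: split the image into four
    quadrants, compute the color variance of each, then combine them
    (here by averaging -- an assumed combination rule)."""
    h, w = image.shape[:2]
    quadrants = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
                 image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
    variances = [q.astype(float).var() for q in quadrants]
    return float(np.mean(variances))
```

A uniformly colored image yields zero, while images with strong local color variation score high, so the value summarizes color spread per region.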
3.3. Image Clustering
In clustering operation by K-means, the 1000 images in
the database are grouped into 100 separated groups. Each
group represents a center in the clustering algorithm. In
other words, similar sets of pictures are put together in each group. The center of a group is the average of the features of the images belonging to that group; for instance, cluster #21 contains images #701, #707, #714, and #717, which are similar to the query image, having similar features, as will be illustrated in the result section. The purpose of image clustering is to decrease the number of image (or feature) vectors compared with the query image. The query is compared to the centroids only; the best clusters are then selected, and the images that belong to those clusters are retrieved. It is worth noting that K-Means clustering optimizes only intra-cluster similarity. The steps of the k-means algorithm are as follows: initialize the number of clusters; then randomly choose centroids from the database; after that, compute the Euclidean distance between data points and cluster centroids using equation (2).
$d(x, c) = \sqrt{\sum_{i=1}^{n} (x_i - c_i)^2}$ (2)
The clusters are then created based on minimum distance; after that, the cluster centroids are updated by computing the mean of each cluster. Finally, the procedure continues until the mean values are the same for several consecutive iterations [28].
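The steps above can be sketched as a minimal K-Means loop: random initial centroids, assignment by the Euclidean distance of equation (2), mean update, repeated until the assignments stabilize.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal K-Means: random initial centroids drawn from the data,
    nearest-centroid assignment (equation 2), mean update, repeat."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.full(len(X), -1)
    for _ in range(iters):
        # Euclidean distance of every point to every centroid (equation 2).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                       # assignments stabilized
        labels = new_labels
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels, centroids
```

In the paper's setting, X would hold the 1000 feature vectors of 140 elements each and k would be 100; only the k centroids are then compared against a query vector.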
IV. RESULT AND DISCUSSION
To test the proposed image retrieval algorithm, five input images have been used as query images, and each of these five images yields a set of similar images as a group. In Figure 4, the query image, described as a face, has sequence number #7 in the database; the result of its k-means clustering is 13 images clustered into one group, as shown in Figure 4. These 13 images have the following sequence numbers in the database: 9, 10, 24, 32, 39, 40, 41, 43, 57, 75, 81, 95 and 933.
For the second test image, shown in Figure 5, the query image, described as a flower, has sequence number #600 in the database; the result of its k-means clustering is 14 images clustered into one group, as shown in Figure 5. These images have the following sequence numbers in the database: 600, 605, 609, 614, 615, 623, 628, 631, 633, 641, 644, 647, 653, 666 and 679.
For the third test image, shown in Figure 6, the query image, described as a mountain, has sequence number #844 in the database; the result of its k-means clustering is 4 images clustered into one group, as shown in Figure 6. These 4 images have the following sequence numbers in the database: 804, 881, 892 and 943.
For the fourth test image, shown in Figure 7, the query image, described as a dinosaur, has sequence number #488 in the database; the result of its k-means clustering is 9 images clustered into one group, as shown in Figure 7. These 9 images have the following sequence numbers in the database: 412, 429, 446, 457, 470, 473, 482, 490 and 495.
Finally, for the fifth test image, shown in Figure 8, the query image, described as a bus, has sequence number #332 in the database; the result of its k-means clustering is the group of images shown in Figure 8, with the following sequence numbers in the database: 309, 330, 334, 357, 374 and 382.
[Figure: query image and the target (result) images as one cluster for faces]
Fig.4: Clustering Output for query image as face #7.
[Figure: query image and the target (result) images as one cluster for flowers]
Fig.5: Clustering Output for query image as flower #600.
[Figure: query image and the target (result) images as one cluster for mountains]
Fig.6: Clustering Output for query image as mountains #844.
[Figure: query image and the target (result) images as one cluster for dinosaurs]
Fig.7: Clustering Output for Query Image as Dinosaur #488.
[Figure: query image and the target (result) images as one cluster for buses]
Fig.8: Clustering Output for Query Image as Bus #332.
The previous five query images were chosen randomly for the test, and as is clear, each cluster group contains images of the same kind as the query image. However, the database is challenging, and it is not a trivial task to achieve 100% accuracy. Therefore, this proposed method promises fruitful results in terms of clustering images.
V. CONCLUSION
In this paper, content-based image retrieval (CBIR) has been achieved by using a proposed collection of features, clustered by the k-means clustering algorithm; after that, the Euclidean distance has been used as the distance measurement to decide the clustering group. The new idea in this paper is how to collect and build the
feature vector to represent the image. In this paper, 140 feature elements have been utilized as a combination of several feature types: color histogram, color moments, Gabor filters, the GLCM matrix, wavelet transformation, Tamura features, and moment invariants. The results of this research are promising; the experiment was conducted on the SIMPLIcity database, which contains 1000 images. In the testing, five randomly selected images from this database were used as query images, and the observed result was the clustering of several images as a group for each query image.
REFERENCES
[1] D. ping Tian, "A review on image feature extraction
and representation techniques," International Journal
of Multimedia and Ubiquitous Engineering, vol. 8,
pp. 385-396, 2013.
[2] R. S. Choras, "Image feature extraction techniques
and their applications for CBIR and biometrics
systems," International journal of biology and
biomedical engineering, vol. 1, pp. 6-16, 2007.
[3] H. Kekre, et al., "Image Retrieval using Texture
Features extracted from GLCM, LBG and KPE,"
International Journal of computer theory and
Engineering, vol. 2, p. 695, 2010.
[4] Y. Rui, et al., "Image retrieval: Current techniques,
promising directions, and open issues," Journal of
visual communication and image representation, vol.
10, pp. 39-62, 1999.
[5] M. S. Bhadange and Y. R. Kalshetty, "Querying
Images by Content Using Color, Texture and
Shape."
[6] M. A. Stricker and M. Orengo, "Similarity of color
images," in IS&T/SPIE's Symposium on Electronic
Imaging: Science & Technology, 1995, pp. 381-392.
[7] J. R. Smith and S.-F. Chang, "Single color extraction
and image query," in Image processing, 1995.
Proceedings., International conference on, 1995, pp.
528-531.
[8] J. R. Smith and S.-F. Chang, "Tools and techniques
for color image retrieval," in Electronic Imaging:
Science & Technology, 1996, pp. 426-437.
[9] C. C. Gotlieb and H. E. Kreyszig, "Texture
descriptors based on co-occurrence matrices,"
Computer Vision, Graphics, and Image Processing,
vol. 51, pp. 70-86, 1990.
[10] T. Chang and C.-C. Kuo, "Texture analysis and
classification with tree-structured wavelet
transform," IEEE transactions on image processing,
vol. 2, pp. 429-441, 1993.
[11] J. R. Smith and S.-F. Chang, "Automated binary
texture feature sets for image retrieval," in
Acoustics, Speech, and Signal Processing, 1996.
ICASSP-96. Conference Proceedings., 1996 IEEE
International Conference on, 1996, pp. 2239-2242.
[12] M. H. Gross, et al., "Multiscale image texture
analysis in wavelet spaces," in Image Processing,
1994. Proceedings. ICIP-94., IEEE International
Conference, 1994, pp. 412-416.
[13] K. Thyagarajan, et al., "A maximum likelihood
approach to texture classification using wavelet
transform," in Image Processing, 1994. Proceedings.
ICIP-94., IEEE International Conference, 1994, pp.
640-644.
[14] L. Yang and F. Albregtsen, "Fast computation of
invariant geometric moments: A new method giving
correct results," in Pattern Recognition, 1994. Vol.
1-Conference A: Computer Vision & Image
Processing., Proceedings of the 12th IAPR
International Conference on, 1994, pp. 201-204.
[15] R. da Silva Torres and A. X. Falcao, "Content-based
image retrieval: theory and applications," RITA, vol.
13, pp. 161-185, 2006.
[16] H. Müller, et al., "A review of content-based image
retrieval systems in medical applications—clinical
benefits and future directions," International journal
of medical informatics, vol. 73, pp. 1-23, 2004.
[17] A. Sandhu and A. Kochhar, "Content Based Image
Retrieval using Texture, Color and Shape for Image
Analysis," International Journal of Computers &
Technology, vol. 3, pp. 149-152, 2012.
[18] J. Kim, et al., "Salient region detection via high-
dimensional color transform and local spatial
support," IEEE transactions on image processing,
vol. 25, pp. 9-23, 2016.
[19] M. H. Memon, et al., "GEO matching regions:
multiple regions of interests using content based
image retrieval based on relative locations,"
Multimedia Tools and Applications, pp. 1-35, 2016.
[20] R. Ashraf, et al., "Content-based image retrieval by
exploring bandletized regions through support vector
machines," Journal of Information Science and
Engineering, vol. 32, pp. 245-269, 2016.
[21] W.-M. Pang, et al., "Fast Gabor texture feature
extraction with separable filters using GPU," Journal
of Real-Time Image Processing, vol. 12, pp. 5-13,
2016.
[22] D. R. Nayak, et al., "Brain MR image classification
using two-dimensional discrete wavelet transform
and AdaBoost with random forests,"
Neurocomputing, vol. 177, pp. 188-197, 2016.
[23] Z. Xia, et al., "A privacy-preserving and copy-
deterrence content-based image retrieval scheme in
cloud computing," IEEE transactions on information
forensics and security, vol. 11, pp. 2594-2608, 2016.
[24] X.-Y. Wang, et al., "An effective image retrieval
scheme using color, texture and shape features,"
Computer Standards & Interfaces, vol. 33, pp. 59-
68, 2011.
[25] F. Long, et al., "Fundamentals of content-based
image retrieval," in Multimedia Information
Retrieval and Management, ed: Springer, 2003, pp.
1-26.
[26] M. B. Cohen, et al., "Dimensionality reduction for k-
means clustering and low rank approximation," in
Proceedings of the Forty-Seventh Annual ACM on
Symposium on Theory of Computing, 2015, pp.
163-172.
[27] J. Z. Wang, et al., "SIMPLIcity: Semantics-sensitive
integrated matching for picture libraries," IEEE
transactions on pattern analysis and machine
intelligence, vol. 23, pp. 947-963, 2001.
[28] R. Kalpana and K. Shanthi, "Image Retrieval Using
Partitioning Based Clustering Methods,"
International Journal of Science Engineering and
Technology Research (IJSETR), vol. 3, 2014.