IJRREST
INTERNATIONAL JOURNAL OF RESEARCH REVIEW IN ENGINEERING SCIENCE & TECHNOLOGY
(ISSN 2278–6643)
VOLUME-2, ISSUE-2, JUNE-2013
IJRREST, ijrrest.org
Texture Local Tetra Pattern with Gabor
Transform
*Dinesh Verma, **Ashish Gupta
Abstract- In this research, an innovative image indexing and retrieval algorithm using the Texture Local Tetra Pattern (LTrP) with the Gabor Transform for content-based image retrieval (CBIR) has been proposed. The implemented algorithm describes the spatial structure of the local texture using the direction of the centre gray pixel. The objective of the proposed work is to retrieve from the stored database the images that best resemble the query image. To implement this system, we first find the horizontal and vertical derivatives of each pixel and then divide the patterns into four parts based on the direction of the centre pixel. After that, the tetra patterns are calculated, histograms are computed for the constructed binary patterns, and finally the feature vector is constructed. In the field of texture classification and retrieval, various algorithms and techniques are used, such as the color histogram, the color correlogram, LBP (Local Binary Pattern), LDP (Local Derivative Pattern) and LTP (Local Ternary Pattern). The database consists of a large number of images of various contents. The performance resulting from the combination of the Gabor Transform and the LTrP has also been analyzed.
Index Terms Used- Content based Image Retrieval (CBIR), Gabor Transform (GT), Local Binary Pattern (LBP), Local Derivative Pattern
(LDP), Local Ternary Pattern (LTP), Local Tetra Pattern (LTrP), Texture.
—————————— ——————————
1 PROPOSED METHODOLOGIES
1.1 LTrP (Local Tetra Pattern) with Gabor
Transformation
The idea of local patterns (the LBP, the LDP and the LTP) has been adopted to define LTrPs. The LTrP describes the spatial structure of the local texture using the direction of the centre gray pixel. The first-order derivatives along the 0° and 90° directions are denoted as I¹_0°(g_c) and I¹_90°(g_c). Let g_c denote the centre pixel in I, and let g_h and g_v denote the horizontal and vertical neighbourhoods of g_c, respectively. Then the first-order derivatives at the centre pixel can be calculated as:

I¹_0°(g_c) = I(g_h) − I(g_c)   Eq. (1)
I¹_90°(g_c) = I(g_v) − I(g_c)   Eq. (2)

And the centre pixel's direction can be calculated as:

I¹_Dir(g_c) = 1, if I¹_0°(g_c) ≥ 0 and I¹_90°(g_c) ≥ 0
              2, if I¹_0°(g_c) < 0 and I¹_90°(g_c) ≥ 0
              3, if I¹_0°(g_c) < 0 and I¹_90°(g_c) < 0
              4, if I¹_0°(g_c) ≥ 0 and I¹_90°(g_c) < 0   Eq. (3)
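As an illustration, the derivatives of equations (1)–(2) and the direction rule of equation (3) can be sketched in Python with NumPy. This is a minimal sketch, not the paper's exact implementation: the choice of array layout and of which neighbours play the roles of g_h and g_v are assumptions.

```python
import numpy as np

def pixel_directions(img):
    """Quadrant direction (1-4) of each interior pixel from its
    horizontal and vertical first-order derivatives (Eq. 1-3 sketch)."""
    img = img.astype(np.float64)
    # I1_0(g_c)  = I(g_h) - I(g_c): horizontal neighbour minus centre
    dh = img[1:-1, 2:] - img[1:-1, 1:-1]
    # I1_90(g_c) = I(g_v) - I(g_c): vertical neighbour minus centre
    dv = img[2:, 1:-1] - img[1:-1, 1:-1]
    direction = np.empty(dh.shape, dtype=np.uint8)
    # The four sign combinations partition the plane into quadrants 1-4
    direction[(dh >= 0) & (dv >= 0)] = 1
    direction[(dh < 0) & (dv >= 0)] = 2
    direction[(dh < 0) & (dv < 0)] = 3
    direction[(dh >= 0) & (dv < 0)] = 4
    return direction
```

Each interior pixel of the image is thus mapped to one of four direction labels, which is the quantity the tetra pattern is built from.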
From equation (3), it is evident that the possible direction for each centre pixel can be 1, 2, 3 or 4, and eventually, the image is converted into four values, i.e., directions. The second-order tetra pattern is defined as:

LTrP²(g_c) = { f(I¹_Dir(g_c), I¹_Dir(g_1)), f(I¹_Dir(g_c), I¹_Dir(g_2)), …, f(I¹_Dir(g_c), I¹_Dir(g_8)) }   Eq. (4)

f(I¹_Dir(g_c), I¹_Dir(g_p)) = 0, if I¹_Dir(g_c) = I¹_Dir(g_p); I¹_Dir(g_p), otherwise   Eq. (5)

From equations (4) and (5), we get an 8-bit tetra pattern for each centre pixel.
Then, on the basis of the direction of the centre pixel, we separate all the patterns into four parts. Finally, the tetra patterns for each part (direction) are converted to three binary patterns.
Let the direction of the centre pixel I¹_Dir(g_c) obtained using equation (3) be “1”; then LTrP²(g_c) can be defined by segregating it into three binary patterns as follows:

LTrP²|_φ = Σ_{p=1..8} 2^(p−1) × f(LTrP²(g_c)_p)|_φ   Eq. (6)

f(LTrP²(g_c)_p)|_φ = 1, if LTrP²(g_c)_p = φ; 0, otherwise   Eq. (7)

where φ = 2, 3, 4.
Similarly, the other three tetra patterns for the remaining three directions (parts) of the centre pixels are converted to binary patterns. Thus, we get 12 (4 x 3) binary patterns.
____________________________________
**Assistant Professor, RPIIT Technical Campus, Karnal, Haryana, India
E-Mail: mtech.ashish@outlook.com
*Scholar, RPIIT Technical Campus, Karnal, Haryana, India
E-Mail: dinesh.verma.knl@gmail.com
Figure1. Calculation of tetra pattern bits
In figure1 above, the tetra pattern bits have been calculated for centre-pixel direction “1” using the directions of the neighbours. The direction of the centre pixel is shown in red and the directions of its neighbourhood pixels in cyan.
Figure2. Example to obtain the tetra and magnitude patterns
For generating a tetra pattern, the bit is coded with the direction of the neighbour when the directions of the centre pixel and its neighbour differ, and with “0” otherwise. For calculating the magnitude pattern, the bit is coded with “1” when the magnitude of the centre pixel is less than the magnitude of its neighbour, and with “0” otherwise.
The possible transitions of the local pattern resulting in an LTrP for direction “1” of the centre pixel are illustrated in figure1. An example of the second-order LTrP computation resulting in direction “1” for a centre pixel marked in red is illustrated in figure2.
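The magnitude-pattern rule described above can be sketched as two small Python helpers. These are hypothetical helper functions for illustration; the magnitude is assumed to be derived from the two first-order derivatives at each pixel.

```python
import math

def gradient_magnitude(dh, dv):
    """Magnitude of the first-order derivative pair at a pixel
    (assumed Euclidean combination of the 0-degree and 90-degree derivatives)."""
    return math.sqrt(dh ** 2 + dv ** 2)

def magnitude_pattern(center_mag, neighbor_mags):
    """8-bit magnitude pattern: bit '1' where the centre magnitude is
    strictly less than the neighbour's, '0' otherwise."""
    return sum(2 ** p for p, m in enumerate(neighbor_mags) if center_mag < m)
```

The resulting integer is one more binary pattern per pixel, used alongside the 12 direction-wise patterns.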
1.2 Gabor Transformation
The Gabor transform, named after Dennis Gabor, is a
special case of the short-time Fourier transform. It is used
to determine the sinusoidal frequency and phase content
of local sections of a signal as it changes over time. The
function to be transformed is first multiplied by a Gaussian
function, which can be regarded as a window function,
and the resulting function is then transformed with a
Fourier transform to derive the time-frequency analysis.
The window function means that the signal near the time
being analyzed will have higher weight. The Gabor
transform of a signal x(t) is defined by this formula:

G_x(τ, ω) = ∫_{−∞}^{+∞} x(t) e^{−π(t−τ)²} e^{−jωt} dt
The Gaussian function has infinite range, which is impractical for implementation. However, a level of significance can be chosen (for instance, 0.00001) for the distribution of the Gaussian function:

e^{−πa²} ≥ 0.00001, which gives |a| ≤ 1.9143

Outside these limits of integration (|t − τ| > 1.9143), the Gaussian function is small enough to be ignored. Thus, the Gabor transform can be satisfactorily approximated as:

G_x(τ, ω) ≈ ∫_{τ−1.9143}^{τ+1.9143} x(t) e^{−π(t−τ)²} e^{−jωt} dt
This simplification makes the Gabor transform practical and realizable. The width of the window function can also be varied to optimize the time-frequency resolution trade-off for a particular application by replacing −π(t−τ)² with −πα(t−τ)² for some chosen α.
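The truncated Gabor transform can be approximated numerically. This Python sketch uses a plain Riemann sum over a uniformly sampled signal and the ±1.9143 window discussed above; the sampling grid and the choice of sum (rather than a higher-order quadrature) are assumptions of the sketch.

```python
import numpy as np

def gabor_transform(x, t, tau, omega):
    """Approximate G_x(tau, omega), integrating only where the
    Gaussian window exp(-pi (t - tau)^2) is non-negligible."""
    dt = t[1] - t[0]                       # uniform sample spacing assumed
    mask = np.abs(t - tau) <= 1.9143       # truncation limits from the text
    window = np.exp(-np.pi * (t[mask] - tau) ** 2)
    return np.sum(x[mask] * window * np.exp(-1j * omega * t[mask])) * dt
```

For a pure cosine, the magnitude of the transform is largest near the cosine's own angular frequency, which is exactly the local frequency content the transform is meant to expose.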
1.3 Algorithm implementation
Input: Query image;
Output: Retrieval result
1. Initialize and load the image, and then convert it into
greyscale.
2. In horizontal and vertical axis, apply the first-order
derivatives.
3. For each and every pixel, calculate the direction.
4. Based on the direction of the centre pixel, divide the
obtained patterns into four parts.
5. After that, calculate the tetra patterns and then separate
them into three binary patterns.
6. Calculate the histograms of the binary patterns.
7. Calculate the magnitudes of the centre pixels.
8. Construct the binary patterns from the magnitudes and
calculate their histogram.
9. Combine the histograms calculated from steps 6 & 8.
10. Construct the feature vector using Gabor
Transformation.
11. Do the comparison of the query image with the images
stored in the database.
12. Retrieve the images based on the best matches which
are similar to the query image.
This algorithm is also applied to the Gabor wavelet sub-bands (with three scales and two directions) for the GLTrP.
1.4 Proposed System Framework
Figure3 illustrates the flowchart of the proposed
image retrieval system and algorithm as given below.
Figure3. Proposed image retrieval system framework
The objective of the proposed work is to retrieve the
best images from the stored database that resemble the
query image. Firstly, we find the directions (horizontal and
vertical) of the each pixel and then divide the patterns into
four parts based on the direction of the centre pixel. After
that, calculation of the tetra patterns will be done, and
separate them into three binary patterns. The next step is
to construct the binary patterns and calculate their
histogram and finally we construct the feature vector.
The database consists of a large number of images of
various contents. All the images will be of same standard
size (64 by 64). For each query, the system collects n
database images X = (X1, X2… Xn) with the shortest image
matching distance computed.
4. IJRREST
INTERNATIONAL JOURNAL OF RESEARCH REVIEW IN ENGINEERING SCIENCE & TECHNOLOGY
(ISSN 2278–6643)
VOLUME-2, ISSUE-2, JUNE-2013
IJRREST, ijrrest.org 59 | P a g e
2 RESULTS
In the figure shown below, there are two parts: First
part (a) shows the query image, on the basis of which the
search will take place and the Second part (b) shows the
images which are retrieved from the database after
matching the query image. It shows the images which are
found similar to the given query image.
Query Image
Retrieved Images
Figure4. An example of Query Image and Retrieved Images basis on implemented algorithm
2.1 Medical Images Database Collection
In the figure5 mentioned below, the collections of all
images which are stored in the database have been shown.
There are 26 images stored in the current database which
are collected through the internet.
Figure5. Medical images database
5. IJRREST
INTERNATIONAL JOURNAL OF RESEARCH REVIEW IN ENGINEERING SCIENCE & TECHNOLOGY
(ISSN 2278–6643)
VOLUME-2, ISSUE-2, JUNE-2013
IJRREST, ijrrest.org 60 | P a g e
2.2 Example
The example shows the implementation process of
research by taking an example of any image of heart as the
query image and then the search is being processed against
that query image. After that the system fetched some
similar images from the database and the results are
shown by displaying images which matched the given
query image.
2.3 Inputting the Query Image for Heart
In the figure6 given below, the process of selection of
query image has been shown. In this example, the heart
image has been selected for the value of query image.
Figure6. Selecting Query Image of Heart
2.4 Resulting Images matching with the Query
Image
The figure7 and 8 given below shows the retrieved
similar images against the given value of query image, In
this example, 7 similar images of heart have been shown
which matches the given query image value. And also the
matched position has been specified which tells the user
that which one is the exactly matched image out of total
retrieved images.
Figure7. Retrieved similar images of Heart
6. IJRREST
INTERNATIONAL JOURNAL OF RESEARCH REVIEW IN ENGINEERING SCIENCE & TECHNOLOGY
(ISSN 2278–6643)
VOLUME-2, ISSUE-2, JUNE-2013
IJRREST, ijrrest.org 61 | P a g e
Figure8. Retrieved images of Heart along with matched position
3 CONCLUSION & FUTURE SCOPE
The particular technology success and completion is
often based on the confluence of supporting, available
technologies at the time of critical need. CBIR for medical
images has achieved a degree of completion, admitting at
a research level, at a time of significant need. However,
the field has yet to make noticeable intrusion into
mainstream medical research, clinical practice or training.
4 CONCLUSION
In this paper, the implementation algorithm has been
applied on the 26 medical images in our medical image
database. Now in the system we have applied the query
image of heart and have retrieved 7 similar images by
comparing the query image and the images in the
database. We have got good results in the retrieval
accuracy for the retrieved images. We have maintained
an image database for the medical images consisting of
total 26 images having size of 64 by 64. In this paper, we
have presented an innovative approach referred as LTrPs
for CBIR using Gabor Transformation.
5 FUTURE SCOPE
The important role has been played by medical
images in surgical planning, medical training, and disease
diagnoses. Because there has been lot of work done in this
region, a comprehensive survey of CBMIR has been
conducted and there is increasing interest in the use of
CBIR techniques to aid diagnosis by identifying similar
past cases. In future, the other descriptors like color
descriptors can also be used to enhance the capability of
this system.
REFERENCES
[1] Subrahmanyam Murala, R. P. Maheshwari, Member,
IEEE, and R. Balasubramanian, Member, IEEE,”
Local Tetra Patterns: A New Feature Descriptor for
Content-Based Image Retrieval,” IEEE Trans. on
Image Processing, vol. 21, no. 5, May 2012.
[2] Ashish Gupta, “Content Based Medical Image
Retrieval Using Texture Descriptor, IJREAS” Volume
2, Issue 2 (February 2012), ISSN: 2249- 3905.
[3] B. Zhang, Y. Gao, S. Zhao, and J. Liu, “Local
derivative pattern versus local binary pattern: Face
recognition with higher-order local pattern
descriptor,” IEEE Trans. Image Process., vol. 19, no.
2, pp. 533–544, Feb. 2010.
[4] X. Tan and B. Triggs, “Enhanced local texture feature
sets for face recognition under difficult lighting
conditions,” IEEE Trans. Image Process., vol. 19, no.
6, pp. 1635–1650, Jun. 2010.
[5] MurthyV.S, Vamsidhar E,. Swarup Kumar J.N.V.R,
P.Sankara Rao, “Content Based Image Retrieval using
Hierarchical and K-Means Clustering Techniques”,
International Journal of Engineering Science and
Technology. 209-212, 2010.
8. IJRREST
INTERNATIONAL JOURNAL OF RESEARCH REVIEW IN ENGINEERING SCIENCE & TECHNOLOGY
(ISSN 2278–6643)
VOLUME-2, ISSUE-2, JUNE-2013
IJRREST, ijrrest.org 63 | P a g e
on Systems, Man, and Cybernetcs, VOL.SMC-8,
NO.6, pp.460–472, June 1978.
[31]H. Tamura, S. Mori, and T. Yamawaki, “Texture
features corresponding to visual perception,” IEEE
Transactions on Systems, Man and Cybernetics, 6(4),
pp. 460-473