Prathusha Perugu, Krishna Veni / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 5, September-October 2012, pp. 246-249

An Innovative Method for Image Retrieval Using Perceptual Textural Features

Prathusha Perugu 1, Krishna Veni 2
1 M.Tech Student, Kottam College of Engineering, Kurnool, A.P.
2 Asst. Prof., Dept. of Computer Science and Engineering, Kottam College of Engineering, Tekur, Kurnool, A.P.


Abstract:
Texture is a very important image feature, used extensively in various image processing problems such as computer vision, image representation and retrieval. It has been shown that humans use some perceptual textural features to distinguish between textured images or regions. Some of the most important features are coarseness, contrast, direction and busyness. In this paper a study is made of these texture features. The proposed computational measures can be based upon two representations: the original image representation and the autocorrelation function (associated with the original image) representation.

Keywords: Texture features, Content Based Image Retrieval, Computational measures

I. Introduction
Texture has been extensively studied and used in the literature since it plays a very important role in human visual perception. Texture refers to the spatial distribution of grey-levels and can be defined as the deterministic or random repetition of one or several primitives in an image.
Texture analysis techniques can be divided into two main categories: spatial techniques and frequency-based techniques. Frequency-based methods include the Fourier transform and wavelet-based methods such as the Gabor model. Spatial texture analysis methods can be categorized as statistical methods, structural methods, or hybrid methods.
It is widely admitted in the computer vision community that there is a set of textural features that human beings use to recognize and categorize textures. Such features include coarseness, contrast and directionality.

II. Autocorrelation function
The autocorrelation function [1], denoted f(\delta_i, \delta_j), for an image I of n x m dimensions is defined as

    f(\delta_i, \delta_j) = \frac{1}{(n - \delta_i)(m - \delta_j)} \sum_{i=0}^{n-\delta_i-1} \sum_{j=0}^{m-\delta_j-1} I(i, j)\, I(i + \delta_i, j + \delta_j)    (1)

For images containing repetitive texture patterns, the autocorrelation function exhibits periodic behavior with the same period as in the original image. For coarse textures, the autocorrelation function decreases slowly, whereas for fine textures it decreases rapidly. For images containing orientation(s), the autocorrelation function preserves the same orientation(s).

In this paper the test images are taken from the Brodatz database [2] and the autocorrelation function is applied on the original images shown in figure 1. The original images are scaled into nine parts and the autocorrelation function is applied on one of the scaled images chosen at random. The scaled images are shown in figure 2.

Fig. 1: Original images from the Brodatz database

Any one of the scaled images can be selected and the autocorrelation function is applied on it. Figure 3 shows the randomly selected scaled image and its histogram.




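As an illustration of equation (1), the following Python sketch computes the autocorrelation for a given shift; it is not the authors' implementation, and the function names, the synthetic test image and the shift range are our own choices.

```python
import numpy as np

def autocorrelation(img, di, dj):
    """Autocorrelation f(di, dj) of a grey-level image, following
    equation (1): the product I(i, j) * I(i + di, j + dj) is summed over
    every position whose shifted pixel stays inside the image, then
    normalized by the size of that overlap region."""
    I = np.asarray(img, dtype=np.float64)
    n, m = I.shape
    if di >= n or dj >= m:
        return 0.0
    a = I[: n - di, : m - dj]   # original pixels
    b = I[di:, dj:]             # pixels shifted by (di, dj)
    return float((a * b).sum() / ((n - di) * (m - dj)))

def autocorrelation_surface(img, max_di, max_dj):
    """Evaluate f over a grid of shifts 0..max_di and 0..max_dj."""
    return np.array([[autocorrelation(img, di, dj)
                      for dj in range(max_dj + 1)]
                     for di in range(max_di + 1)])

if __name__ == "__main__":
    # Synthetic striped texture standing in for a Brodatz tile.
    img = (np.indices((64, 64)).sum(axis=0) % 8 < 4) * 200.0
    f = autocorrelation_surface(img, 16, 16)
    print(f.shape)   # (17, 17)
```

For a periodic stripe pattern such as the synthetic image above, the resulting surface shows the periodic behavior described in this section.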
Fig. 3: Randomly selected scaled image and its histogram

After applying the autocorrelation function, the resulting image and its equivalent histogram are shown in figure 4.

Fig. 4: Autocorrelation applied on the scaled image and its histogram

III. Computational measures for Textural features
Perceptual features have been largely used both in texture retrieval and in content-based image retrieval (CBIR) [4] systems such as:
• the IBM QBIC system, which implements the Tamura features;
• CANDID, developed at the Los Alamos National Laboratory by Kelly et al., which uses Laws filters;
• Photobook, by Pentland et al. at MIT, which refers to the Wold texture decomposition model.

The general estimation process of the computational measures simulating human visual perception is as follows [3]:
Step 1: The autocorrelation f(\delta_i, \delta_j) is computed on the image I(i, j).
Step 2: The convolution of the autocorrelation function with the gradient of the Gaussian function is computed in a separable way (a sketch of this step follows the list).
Step 3: Based on these two functions, computational measures for each perceptual feature are computed.
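As a sketch of Step 2 only, the separable convolution with the gradient of a Gaussian can be expressed with SciPy's 1-D Gaussian derivative filter; the value of sigma below is an illustrative assumption, not a parameter given in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_gradient_separable(f, sigma=1.0):
    """Convolve the autocorrelation surface f with the gradient of a
    Gaussian in a separable way: once along rows (axis 0) and once
    along columns (axis 1). order=1 selects the first derivative of
    the Gaussian kernel."""
    f = np.asarray(f, dtype=np.float64)
    gx = gaussian_filter1d(f, sigma=sigma, axis=0, order=1)  # row-wise gradient
    gy = gaussian_filter1d(f, sigma=sigma, axis=1, order=1)  # column-wise gradient
    return gx, gy
```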
A. Coarseness
Coarseness is the most important feature; it is coarseness that determines the existence of texture in an image. Coarseness measures the size of the primitives that constitute the texture. A coarse texture is composed of large primitives and is characterized by a high degree of local uniformity of grey-levels. A fine texture is constituted by small primitives and is characterized by a high degree of local variation of grey-levels.
First, we compute the first derivative of the autocorrelation function in a separable way according to rows and columns, respectively. Two functions C_x(i, j) and C_y(i, j) are then obtained:

    C_x(i, j) = f(i, j) - f(i + 1, j)
    C_y(i, j) = f(i, j) - f(i, j + 1)    (2)

Second, we compute the first derivatives of the obtained functions C_x(i, j) and C_y(i, j) in a separable way according to rows and columns. Two functions C_{xx}(i, j) and C_{yy}(i, j) are then obtained:

    C_{xx}(i, j) = C_x(i, j) - C_x(i + 1, j)
    C_{yy}(i, j) = C_y(i, j) - C_y(i, j + 1)    (3)

To detect maxima, we use the following conditions (according to rows and columns, respectively):

    C_x(i, j) = 0 and C_{xx}(i, j) < 0    (4)
    C_y(i, j) = 0 and C_{yy}(i, j) < 0    (5)

Coarseness, denoted C_s, is estimated from the average number of maxima in the autocorrelation function: a coarse texture will have a small number of maxima and a fine texture will have a large number of maxima. Let Max_x(i, j) = 1 if pixel (i, j) is a maximum on rows and Max_x(i, j) = 0 if it is not; similarly, let Max_y(i, j) = 1 if pixel (i, j) is a maximum on columns and Max_y(i, j) = 0 if it is not. Coarseness can be expressed by the following equation:

    C_s = \frac{1}{\frac{1}{2} \left( \frac{1}{n} \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} Max_x(i, j) + \frac{1}{m} \sum_{j=0}^{m-1} \sum_{i=0}^{n-1} Max_y(i, j) \right)}    (6)

The denominator gives the number of maxima according to rows and columns: the higher the number of maxima, the lower the coarseness, and vice versa.
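A minimal numerical sketch of equations (2)-(6) is given below. Since the first difference rarely equals zero exactly on a discrete grid, the condition C_x = 0 is approximated here by a sign change of C_x; that approximation, like the function names, is our assumption rather than a detail of the paper.

```python
import numpy as np

def coarseness(f):
    """Coarseness Cs of an autocorrelation surface f, equations (2)-(6)."""
    f = np.asarray(f, dtype=np.float64)
    n, m = f.shape

    # Equation (2): first differences along rows and columns.
    cx = f[:-1, :] - f[1:, :]
    cy = f[:, :-1] - f[:, 1:]

    # Equation (3): second differences.
    cxx = cx[:-1, :] - cx[1:, :]
    cyy = cy[:, :-1] - cy[:, 1:]

    # Equations (4)-(5): a maximum is approximated by a sign change of
    # the first difference together with a negative second difference.
    max_x = (np.sign(cx[:-1, :]) != np.sign(cx[1:, :])) & (cxx < 0)
    max_y = (np.sign(cy[:, :-1]) != np.sign(cy[:, 1:])) & (cyy < 0)

    # Equation (6): the fewer maxima per row/column, the coarser the texture.
    denom = 0.5 * (max_x.sum() / n + max_y.sum() / m)
    return float("inf") if denom == 0 else 1.0 / denom
```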



B. Contrast
Contrast measures the degree of clarity with which one can distinguish between different primitives in a texture. A well contrasted image is an image in which primitives are clearly visible and separable. Local contrast is commonly defined for each pixel as an estimate of the local variation in a neighborhood. More precisely, given a pixel p = (i, j) and a W x W neighborhood mask around the pixel, local contrast is computed as

    L_{Ct}(i, j) = \frac{\max_{p \in W \times W}(P) - \min_{p \in W \times W}(P)}{\max_{p \in W \times W}(P) + \min_{p \in W \times W}(P)}    (7)

We propose to measure the global contrast as the global arithmetic mean of all the local contrast values over the image:

    G_{Ct} = \frac{1}{m \cdot n} \sum_{i=1}^{n} \sum_{j=1}^{m} L_{Ct}(i, j)    (8)

where n, m are the dimensions of the image.

Fig. 5: The least and the most contrasted textures in the VisTex database

Figure 5 shows the most and the least contrasted textures in the VisTex texture database [5] with respect to the proposed measure.
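The sketch below implements equations (7) and (8) with sliding maximum/minimum filters; the window size and the handling of flat neighborhoods (where max + min = 0) are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def global_contrast(img, w=3):
    """Global contrast G_Ct, equations (7)-(8): mean over the image of
    the local (max - min) / (max + min) contrast in W x W neighborhoods."""
    I = np.asarray(img, dtype=np.float64)
    local_max = maximum_filter(I, size=w)
    local_min = minimum_filter(I, size=w)
    den = local_max + local_min
    local_ct = np.where(den > 0,
                        (local_max - local_min) / np.maximum(den, 1e-12),
                        0.0)                      # equation (7)
    return float(local_ct.mean())                 # equation (8)
```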

C. Directionality
Regarding directionality, we want to estimate two parameters: the dominant orientation(s) and the degree of directionality. Orientation refers to the global orientation of the primitives that constitute the texture. The degree of directionality is related to the visibility of the dominant orientation(s) in an image, and refers to the number of pixels having the dominant orientation(s).

Orientation Estimation: When considering the autocorrelation function, one can notice two phenomena concerning the orientation:
1. an orientation existing in the original image is preserved in the corresponding autocorrelation function;
2. using the autocorrelation function instead of the original image keeps the global orientation rather than the local orientation obtained when one uses the original image.

It follows that the global orientation of the image can be estimated by applying the gradient on the autocorrelation function of the original image according to the rows (C_x) and according to the columns (C_y). The orientation \theta is given by

    \theta = \arctan\left( \frac{C_y}{C_x} \right)    (9)

We consider only pixels with a significant orientation: a pixel is considered as oriented if its gradient amplitude is greater than a certain threshold t.

Directionality Estimation: For directionality, we consider the number of pixels having the dominant orientation(s) \theta_d. Let \theta_d(i, j) = 1 if pixel (i, j) has a dominant orientation \theta_d and \theta_d(i, j) = 0 if pixel (i, j) does not have a dominant orientation. We consider only dominant orientations \theta_d that are present in a sufficient number of pixels, that is, in more than a threshold t of pixels, so that the orientation becomes visible. Let us denote the number of non-oriented pixels N_{\theta nd}. The degree of directionality N_{\theta d} of an image can be expressed by the following equation:

    N_{\theta d} = \frac{\sum_{i=0}^{n-1} \sum_{j=0}^{m-1} \theta_d(i, j)}{n \times m - N_{\theta nd}}    (10)

The larger N_{\theta d} is, the more directional the image; the smaller N_{\theta d} is, the less directional the image.
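A sketch of equations (9) and (10) follows. The amplitude threshold, the number of orientation bins and the fraction used to declare an orientation dominant are all assumptions made for illustration; arctan2 is used in place of a plain arctan so that the quadrant is handled safely.

```python
import numpy as np

def directionality(f, amp_thresh=None, dom_frac=0.05, n_bins=36):
    """Degree of directionality N_theta_d, equations (9)-(10)."""
    f = np.asarray(f, dtype=np.float64)
    n, m = f.shape

    # Row and column gradients of the autocorrelation surface (as in eq. (2)).
    cx = np.zeros_like(f); cx[:-1, :] = f[:-1, :] - f[1:, :]
    cy = np.zeros_like(f); cy[:, :-1] = f[:, :-1] - f[:, 1:]

    amplitude = np.hypot(cx, cy)
    if amp_thresh is None:
        amp_thresh = 0.1 * amplitude.max()        # assumed threshold t
    oriented = amplitude > amp_thresh
    n_not_oriented = int((~oriented).sum())       # N_theta_nd

    theta = np.arctan2(cy, cx)                    # equation (9)

    # Dominant orientations: histogram bins that hold a sufficient
    # share of the oriented pixels.
    hist, edges = np.histogram(theta[oriented], bins=n_bins, range=(-np.pi, np.pi))
    dominant = hist >= dom_frac * max(int(oriented.sum()), 1)
    bin_idx = np.digitize(theta, edges[1:-1])
    theta_d = oriented & dominant[bin_idx]        # theta_d(i, j)

    denom = n * m - n_not_oriented                # equation (10)
    return 0.0 if denom == 0 else float(theta_d.sum() / denom)
```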



IV. Experimental Results
The computational perceptual features are used to retrieve images. Figure 6 shows the window used to select the query image; among the available images, those similar to the query are retrieved.

Figure 6: Window to select the query image

In figure 7 we can see that the total number of retrieved images is nine. Precision of the retrieval is calculated as the number of relevant images divided by the total number of retrieved images, i.e. Precision = 6/9 = 0.667 in this example.

Figure 7: Retrieved images
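For completeness, the precision figure quoted above can be reproduced with the trivial helper below (ours, not code from the paper).

```python
def precision(num_relevant, num_retrieved):
    """Precision = relevant retrieved images / total retrieved images."""
    return num_relevant / num_retrieved if num_retrieved else 0.0

print(round(precision(6, 9), 3))   # 0.667, as in the example above
```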

V. Conclusion
An innovative method for content based image retrieval using perceptual textural features is presented. The precision is calculated and shown experimentally to lie between 0.6 and 0.8, and this method of image retrieval is effective compared to existing methods.

References:
1. N. Abbadeni, D. Ziou, and S. Wang, "Autocovariance-based perceptual textural features corresponding to human visual perception," in Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, September 3-8, 2000.
2. P. Brodatz, Textures: A Photographic Album for Artists and Designers. New York: Dover, 1966.
3. Noureddine Abbadeni, "Computational perceptual features for texture representation and retrieval," IEEE Transactions on Image Processing, Vol. 20, No. 1, January 2011.
4. Sara Sanam, Sudha Madhuri M., "Perception based texture classification, representation and retrieval," International Journal of Computer Trends and Technology, Volume 3, Issue 1, 2012.
5. VISTEX texture database: http://www.white.media.mit.edu/vismod/imagery/VisionTexture/vistex.html



