International Journal of Electronics and Communication Engineering & Technology (IJECET), ISSN 0976 – 6464 (Print), ISSN 0976 – 6472 (Online), Volume 4, Issue 2, March – April 2013, pp. 412-425, © IAEME

ZERNIKE MOMENT OF INVARIANTS FOR EFFECTIVE IMAGE RETRIEVAL USING GAUSSIAN FILTERS

1 Anirudha R. Deshpande, 2 Dr. Sudhir S. Kanade
1 (Department of ECE, TPCT's C.O.E. Osmanabad, India)
2 (Head of Department of ECE, TPCT's C.O.E. Osmanabad, Maharashtra, India)

ABSTRACT

This paper proposes a steerable Gaussian filter and moment invariants for effective image retrieval based on color, texture, and shape features. An image is represented by a small number of dominant colors and their percentages, obtained with a fast color quantization algorithm with cluster merging. Steerable filters provide an efficient and flexible approximation of early processing in the human visual system. Pseudo-Zernike moments are applied in the shape descriptor; they are more robust to noise and give better feature representation than other moment representations. Finally, a robust feature set for image retrieval is obtained by combining the color, shape, and texture features. Experimental results show that the proposed method performs better than other methods and retrieves the user-interested images more accurately.

Keywords: Image retrieval, Moment invariants, Steerable Gaussian filters, Feature extraction.

1. INTRODUCTION

Efficient indexing and searching have become essential for large image archives due to the growth of digital images on the internet. Because of the diversity and ambiguity of image content, describing images with manually assigned keywords is laborious. Hence content-based image retrieval (CBIR) [1] has drawn substantial attention. CBIR indexes images by low-level visual features such as texture, color, and shape. Visual features cannot completely characterize semantic content, but they are easy to integrate into mathematical formulations [2]. Extracting good visual features that compactly represent a query image is one of the important tasks in CBIR.

Color is the most widely used low-level feature and is invariant to image size and orientation [1]. The color correlogram, color histogram, and dominant color descriptor (DCD) are
the features used in CBIR. The color histogram is widely used for color representation, but it does not contain any spatial information. Li et al. [3] present a novel algorithm based on running sub-blocks with different similarity weights for object-based retrieval; the sub-blocks capture color region information, and images are retrieved under a query for a specific object using matrix analysis. The color correlogram describes the probability of finding color pairs at a fixed pixel distance and thus provides spatial information; it achieves better retrieval accuracy than the color histogram. The color autocorrelogram captures the spatial correlation between identical colors only; since it provides a significant computational benefit over the full correlogram, it is more suitable for image retrieval. The DCD is an MPEG-7 color descriptor [4]. It describes the salient color distribution in an image and provides a compact, effective, and intuitive representation of color. However, DCD matching does not fit human perception very well and can cause incorrect ranks for images with similar color distributions [5, 6]. Yang et al. [7] presented a fast color quantization and similarity computation for the DCD. Lu et al. [8] used the color distribution, standard deviation, and mean value to represent the global characteristics of the image and to increase the accuracy of the system.

Texture is an important visual feature and an essential surface property of an object. Many objects in an image can be distinguished solely by their textures without any other information. Texture consists of basic primitives that describe the structural arrangement of a region and its relationship to the surrounding regions [9].
Statistical texture features include the gray-level co-occurrence matrix (GLCM), the Markov random field (MRF) model, the simultaneous autoregressive (SAR) model, the edge histogram descriptor (EHD), the Wold decomposition model, and wavelet moments [2]. BVLC (block variation of local correlation coefficients) and BDIP (block difference of inverse probabilities) have been shown to effectively measure local brightness variation and local texture smoothness [10]. In [11], texture is modeled by a fusion of the marginal densities of the sub-band DCT coefficients; in this method one can extract samples from the texture distribution by utilizing a small neighborhood of scale-to-scale coefficients. In [12], Kokare et al. improve image retrieval accuracy with a new set of 2D rotated wavelets using Daubechies' eight-tap coefficients. Han et al. [13] proposed a rotation-invariant and scale-invariant Gabor representation, where each representation needs only a few summations on the conventional Gabor filter impulse responses, and the texture features are then computed from the new representation for scale- and rotation-invariant image retrieval.

Shapes carry semantic information, and shape features differ from other elementary visual features such as texture or color. Many shape representation methods exist, such as Fourier descriptors, B-splines, deformable templates, polygonal approximation, and curvature scale space (CSS) [2]. The CSS shape representation method was selected for the Moving Picture Experts Group (MPEG)-7 standard [14]. A review of shape representation and description techniques [15] found the Fourier descriptor more efficient than CSS. Xu et al. [16] described an innovative partial shape matching (PSM) technique using dynamic programming (DP) to retrieve spine X-ray images. In [17], Wei presented a novel content-based trademark retrieval system with a feasible set of feature descriptors.
In [18], a one-dimensional or two-dimensional histogram of the CIELab chromaticity coordinates is selected as the color feature, and variances extracted by discrete wavelet frame analysis are selected as texture features. In [19], Haar or Daubechies wavelet moments are used as texture features and the color histogram is used as the color feature. In these methods, keeping the feature vector dimension low when combining multiple features matters, and a larger combined vector does not always provide better retrieval accuracy [20].
Chun et al. [9] proposed a CBIR method that combines the color autocorrelograms of the hue and saturation component images with the BDIP and BVLC moments of the value component image in the wavelet transform domain. In [21, 22], novel retrieval frameworks were presented that combine color, texture, and shape information so as to improve retrieval performance without increasing the feature vector dimension.

This paper gives a detailed description of an effective color image retrieval scheme combining a dynamic dominant color descriptor, pseudo-Zernike moments, and a steerable filter descriptor. The rest of the paper is organized as follows. Section 2 details the dynamic dominant color extraction. Section 3 describes the steerable filter decomposition and texture representation. Section 4 describes the pseudo-Zernike moments based shape descriptor. Section 5 presents the similarity measure. Section 6 describes the experimental results. Section 7 provides the conclusion.

2. COLOR FEATURE REPRESENTATION

Color is one of the most distinguishable and dominant low-level features for describing images, and CBIR systems such as QBIC and VisualSEEK use color to retrieve images. In a given color image, the actual colors occupy only a small proportion of the total number of colors in the whole color space, and the dominant colors cover the majority of the pixels. In the MPEG-7 final committee draft, the DCD contains two main components: the representative colors and the percentage of each color. It provides a compact, effective, and intuitive representation of the salient colors, and describes the color distribution in an image or region of interest.
The major part of the color distribution can be captured coarsely; this is consistent with human perception because human eyes cannot differentiate colors at close distance. Experiments show that the selection of the color space is not a critical issue for DCD extraction, so without loss of generality the RGB color space is used for simplicity. As shown in Figure 1, the RGB color space is coarsely divided into eight partitions; several colors falling within the same partitioned block are considered the same.

After this coarse partition, the centroid of each partition (a "color bin" in MPEG-7) is selected as its quantized color. Let $X = (X_R, X_G, X_B)$ denote the red, green, and blue color components, and let $C_i$ be quantized color partition $i$. The average value of the color distribution for each partition center is determined by

$$\bar{x}_i = \frac{\sum_{X \in C_i} X}{\sum_{X \in C_i} 1} \qquad (1)$$

Fig 1: The coarse division of RGB color space
After averaging, each quantized color $C_i = (\bar{x}_{iR}, \bar{x}_{iG}, \bar{x}_{iB})$, $1 \le i \le 8$, can be estimated. We calculate the actual distance between two adjacent colors $C_i$ and then merge similar color bins using the weighted-average agglomerative procedure of Equation (2):

$$
\begin{cases}
X_R = X_{1R} \times \dfrac{P_{R1}}{P_{R1} + P_{R2}} + X_{2R} \times \dfrac{P_{R2}}{P_{R1} + P_{R2}} \\[4pt]
X_G = X_{1G} \times \dfrac{P_{G1}}{P_{G1} + P_{G2}} + X_{2G} \times \dfrac{P_{G2}}{P_{G1} + P_{G2}} \\[4pt]
X_B = X_{1B} \times \dfrac{P_{B1}}{P_{B1} + P_{B2}} + X_{2B} \times \dfrac{P_{B2}}{P_{B1} + P_{B2}}
\end{cases} \qquad (2)
$$

In Equation (2), $P_R$, $P_G$, $P_B$ represent the percentages of the R, G, B components. The merging process is carried out until the minimum Euclidean distance between adjacent color cluster centers is larger than a threshold $T_d$. Not every remaining color is significant enough, so insignificant colors are merged into their neighboring colors: if the percentage of a color is less than a threshold $T_m$, it is merged into the nearest color. In the proposed method we set $T_d = 25$ and $T_m = 6\%$. As a result we obtain a set of dominant colors, and the final number of dominant colors is constrained to 4-5 on average. The dominant color descriptor is defined as

$$F_c = \{(C_i, P_i),\; i = 1, \ldots, N\}$$

where $N$ is the total number of dominant colors in the image, $C_i$ is a 3D dominant color vector, $P_i$ is the percentage of each dominant color, and the $P_i$ sum to 1. The final number of dominant colors after color quantization of the image in Figure 2 is five; the dominant colors and their percentages are shown in Table 1.

TABLE 1: Dominant color feature of Figure 2(a)

N   Ci (R, G, B)       Pi
1   92, 94, 69         0.2642
2   115, 134, 119      0.1041
3   142, 118, 66       0.1035
4   173, 146, 94       0.3179
5   152, 158, 168      0.2103
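The dominant-color extraction just described (coarse partition, Eq. (1) centroids, Eq. (2) weighted merging, and the $T_d$/$T_m$ thresholds) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the octant partition and all function names are assumptions.

```python
# Sketch of dominant-colour extraction: coarse 8-octant RGB partition,
# Eq. (1) centroids, Eq. (2) percentage-weighted merging with threshold
# t_d, and absorption of colours below t_m into the nearest cluster.
import math

def coarse_bin(rgb):
    """Map an (R, G, B) pixel to one of 8 coarse partitions (octants)."""
    r, g, b = rgb
    return (r >= 128) << 2 | (g >= 128) << 1 | (b >= 128)

def dominant_colors(pixels, t_d=25.0, t_m=0.06):
    # Eq. (1): centroid (mean colour) of each occupied partition.
    bins = {}
    for p in pixels:
        bins.setdefault(coarse_bin(p), []).append(p)
    n = len(pixels)
    clusters = [([sum(c[k] for c in v) / len(v) for k in range(3)],
                 len(v) / n) for v in bins.values()]
    # Eq. (2): repeatedly merge the two closest clusters while their
    # centroid distance stays below t_d (percentage-weighted average).
    while len(clusters) > 1:
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: math.dist(clusters[ab[0]][0],
                                            clusters[ab[1]][0]))
        if math.dist(clusters[i][0], clusters[j][0]) > t_d:
            break
        (c1, p1), (c2, p2) = clusters[i], clusters[j]
        merged = ([(x1 * p1 + x2 * p2) / (p1 + p2)
                   for x1, x2 in zip(c1, c2)], p1 + p2)
        clusters = [c for k, c in enumerate(clusters)
                    if k not in (i, j)] + [merged]
    # Colours with percentage below t_m are absorbed by the nearest
    # remaining cluster (assumes at least one significant colour).
    small = [c for c in clusters if c[1] < t_m]
    clusters = [c for c in clusters if c[1] >= t_m]
    for c, p in small:
        k = min(range(len(clusters)),
                key=lambda k: math.dist(clusters[k][0], c))
        clusters[k] = (clusters[k][0], clusters[k][1] + p)
    return clusters  # list of (centroid, percentage); percentages sum to 1
```

With the paper's defaults, two well-separated colors survive as two clusters, while two clusters closer than $T_d$ collapse into their percentage-weighted average, as in Eq. (2).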
Fig 2: The original image and its quantized image. (a) The original image; (b) the corresponding quantized image

3. STEERABLE FILTER DECOMPOSITION AND TEXTURE REPRESENTATION

Surfaces exhibit texture, an important low-level visual feature, and texture recognition is a natural part of many computer vision systems. In this paper we present a new rotation-invariant and scale-invariant texture representation for image retrieval based on the steerable filter decomposition.

3.1 Steerable filter

Oriented filters are used in many image processing tasks, such as texture analysis, edge detection, and image data compression. A filter is applied at an arbitrary orientation under adaptive control, and the filter output is examined as a function of both orientation and phase. The response of the filter at many orientations is obtained by applying many versions of the same filter, each differing by some rotation angle. A steerable filter is a class of filter in which a filter of arbitrary orientation is synthesized as a linear combination of a set of "basis filters" [23]. The edges of different orientations in an image can be detected by splitting the image into orientation sub-bands obtained by basis filters having these orientations. This allows one to adaptively "steer" a filter to any orientation.
The filter output can then be calculated analytically as a function of orientation. The steering equation is

$$F^{\theta}(m, n) = \sum_{k=1}^{N} b_k(\theta)\, A_k(m, n) \qquad (3)$$

where $b_k(\theta)$ is the interpolation function based on the arbitrary orientation $\theta$, which controls the filter orientation, and the basis filters $A_k(m, n)$ are rotated versions of the impulse response. Because convolution is a linear operation, we can synthesize an image filtered at an arbitrary orientation by taking a linear combination of the images filtered with the basis filters, where $*$ denotes convolution with the image $I(m, n)$:

$$I(m, n) * F^{\theta}(m, n) = \sum_{k=1}^{N} b_k(\theta)\,\bigl(I(m, n) * A_k(m, n)\bigr) \qquad (4)$$
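Equations (3)-(4) can be illustrated numerically with the classic first-derivative-of-Gaussian pair as basis filters, where the interpolation functions are simply $b_1(\theta) = \cos\theta$, $b_2(\theta) = \sin\theta$. The kernel size and sigma below are illustrative choices, not values from the paper.

```python
# Eq. (3)/(4) with a two-filter basis: an oriented derivative filter
# F^theta = cos(theta) * G_x + sin(theta) * G_y, and a check that
# steering commutes with filtering (linearity of convolution).
import math

def gaussian_deriv_kernels(size=7, sigma=1.0):
    """Return the G_x, G_y basis kernels (derivatives of a Gaussian)."""
    c = size // 2
    gx, gy = [], []
    for y in range(-c, c + 1):
        rx, ry = [], []
        for x in range(-c, c + 1):
            g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            rx.append(-x * g)   # d/dx of a Gaussian (up to a constant)
            ry.append(-y * g)   # d/dy
        gx.append(rx); gy.append(ry)
    return gx, gy

def steer(gx, gy, theta):
    """Eq. (3): synthesize the filter at orientation theta from the basis."""
    b1, b2 = math.cos(theta), math.sin(theta)
    return [[b1 * a + b2 * b for a, b in zip(ra, rb)]
            for ra, rb in zip(gx, gy)]

def convolve_at(img, ker, y, x):
    """Response of `ker` centred at pixel (y, x): one term of Eq. (4)."""
    c = len(ker) // 2
    return sum(ker[j + c][i + c] * img[y + j][x + i]
               for j in range(-c, c + 1) for i in range(-c, c + 1))
```

Filtering with the steered kernel gives the same response as steering the two basis responses, which is exactly what Eq. (4) states.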
The structure of the steerable filter is given in Figure 3. Generally, 4-6 orientation sub-bands are adopted to extract image texture information when applying steerable filters to a digital image. In this paper we extract texture features from four orientation sub-bands, as shown in Figure 4: Figure 4(a) shows the original image (Zoneplate), and Figures 4(b)-(e) are the filtered images at different orientations.

A statistical measure is then used to capture the relevant image content into feature vectors: the mean $\mu$ and standard deviation $\sigma$ of the energy distribution of the filtered images $S_i(x, y)$, $i = 1, 2, 3, 4$ (horizontal orientation, rotation of 45°, vertical orientation, and rotation of -45°), taking into account the presence of homogeneous regions in texture images. Given an image $I(x, y)$, its steerable filter decomposition is

$$S_i(x, y) = \sum_{x_1} \sum_{y_1} I(x_1, y_1)\, B_i(x - x_1,\, y - y_1) \qquad (5)$$

where $B_i$ denotes the directional band-pass filter at orientation $i = 1, 2, 3, 4$. The energy distribution $E_i$ of the filtered image $S_i(x, y)$ is defined as

$$E_i = \sum_{x} \sum_{y} |S_i(x, y)| \qquad (6)$$

Additionally, the mean $\mu_i$ and standard deviation $\sigma_i$ are found as

$$\mu_i = \frac{1}{MN} E_i \qquad (7)$$

$$\sigma_i = \sqrt{\frac{1}{MN} \sum_{x} \sum_{y} \bigl(S_i(x, y) - \mu_i\bigr)^2} \qquad (8)$$

where $M$ and $N$ are the width and height of the image $I(x, y)$. The corresponding texture feature vector of the original image $I(x, y)$ is then defined as

$$F_T = (\mu_1, \sigma_1, \mu_2, \sigma_2, \mu_3, \sigma_3, \mu_4, \sigma_4) \qquad (9)$$

4. THE PSEUDO-ZERNIKE MOMENTS BASED SHAPE DESCRIPTOR

Shape is an important factor in human recognition and perception, and shape features give a powerful clue from which a human can identify an object.
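As a sketch (not the authors' Matlab code), the texture signature of Eqs. (5)-(9), the mean and standard deviation per orientation sub-band, can be computed as:

```python
# Texture features of Eqs. (6)-(9): for each of the four orientation
# sub-bands S_i (given here as nested lists; the steerable filtering
# itself is assumed done already), compute the mean of |S_i| (Eqs. (6)-(7))
# and the standard deviation of S_i around that mean (Eq. (8)).
import math

def texture_features(subbands):
    feats = []
    for s in subbands:
        m, n = len(s), len(s[0])
        e = sum(abs(v) for row in s for v in row)         # Eq. (6)
        mu = e / (m * n)                                  # Eq. (7)
        var = sum((v - mu) ** 2 for row in s for v in row) / (m * n)
        feats += [mu, math.sqrt(var)]                     # Eq. (8)
    return feats          # Eq. (9): (mu1, sigma1, ..., mu4, sigma4)
```

For four sub-bands this yields the 8-dimensional vector $F_T$ of Eq. (9).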
Shape descriptors used in existing CBIR systems are broadly classified into two groups, namely contour-based and region-based descriptors [2]. Among the commonly used approaches for region-based shape description, moments and functions of moments have been utilized as pattern features in various applications [1, 2]. Moment theory, including Hu moments, wavelet moments, Zernike moments, Krawtchouk moments, and Legendre moments, provides useful series expansions for the representation of object shapes. Pseudo-Zernike moments form a set of complex, orthogonal moments [24] with important properties. The
pseudo-Zernike moments are invariant under image rotation, and the second-order moments are less sensitive to image noise. In this paper, the pseudo-Zernike moments of images are used as the shape descriptor, since they provide better feature representation capability and more robustness to noise than other moment representations.

4.1 Pseudo-Zernike moments

The pseudo-Zernike moments are based on a set of polynomials [24] that form a complete orthogonal set over the interior of the unit circle, $x^2 + y^2 \le 1$. If the set of polynomials is denoted by $\{V_{nm}(x, y)\}$, then these polynomials have the form

$$V_{nm}(x, y) = V_{nm}(\rho, \theta) = R_{nm}(\rho)\, \exp(jm\theta) \qquad (10)$$

where $\rho = \sqrt{x^2 + y^2}$ and $\theta = \tan^{-1}(y/x)$, $n$ is a non-negative integer, $m$ is restricted to $|m| \le n$, and the radial pseudo-Zernike polynomial $R_{nm}(\rho)$ is defined as

$$R_{nm}(\rho) = \sum_{s=0}^{n-|m|} \frac{(-1)^s\, (2n + 1 - s)!\; \rho^{n-s}}{s!\,(n + |m| + 1 - s)!\,(n - |m| - s)!} \qquad (11)$$

Like any other orthogonal and complete basis, the pseudo-Zernike polynomials can be used to decompose an analog image function $f(x, y)$:

$$f(x, y) = \sum_{n=0}^{\infty} \sum_{m:\, |m| \le n} A_{nm}\, V_{nm}(x, y) \qquad (12)$$

Fig 3: The structure of the steerable filter (the input image passes through basis filters 1..N weighted by interpolation functions 1..N for an arbitrary orientation; a summing junction produces the adaptively filtered image)
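Equations (10)-(11) transcribe directly into code; this is a minimal sketch of the basis functions, with illustrative function names:

```python
# Radial pseudo-Zernike polynomial R_nm (Eq. (11)) and basis function
# V_nm(x, y) = R_nm(rho) * exp(j m theta) (Eq. (10)).
import cmath, math

def radial_poly(n, m, rho):
    """Pseudo-Zernike radial polynomial R_nm(rho), Eq. (11)."""
    m = abs(m)
    return sum((-1) ** s * math.factorial(2 * n + 1 - s)
               / (math.factorial(s)
                  * math.factorial(n + m + 1 - s)
                  * math.factorial(n - m - s))
               * rho ** (n - s)
               for s in range(n - m + 1))

def v_nm(n, m, x, y):
    """Basis function V_nm of Eq. (10) at Cartesian point (x, y)."""
    rho, theta = math.hypot(x, y), math.atan2(y, x)
    return radial_poly(n, m, rho) * cmath.exp(1j * m * theta)
```

As a quick sanity check against Eq. (11): $R_{00}(\rho) = 1$, $R_{10}(\rho) = 3\rho - 2$, and $R_{11}(\rho) = \rho$.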
Fig 4: The filtered images using the steerable filter. (a) The original image; (b) the sub-band image in the horizontal orientation; (c) the sub-band image at a rotation of 45°; (d) the sub-band image in the vertical orientation; (e) the sub-band image at a rotation of -45°

Here $A_{nm}$ is the pseudo-Zernike moment of order $n$ with repetition $m$, defined as

$$A_{nm} = \frac{n + 1}{\pi} \iint_{x^2 + y^2 \le 1} f(x, y)\, V_{nm}^{*}(x, y)\, dx\, dy \qquad (13)$$

It should be pointed out that in the case of a digital image, Equation (13) cannot be applied directly; an approximate version is used instead. Given a digital image of size $M \times N$, its pseudo-Zernike moments are estimated as

$$A_{nm} = \frac{n + 1}{\pi} \sum_{i=1}^{M} \sum_{j=1}^{N} h_{nm}(x_i, y_j)\, f(x_i, y_j) \qquad (14)$$

where the values of $i$ and $j$ are taken such that $x_i^2 + y_j^2 \le 1$, and

$$h_{nm}(x_i, y_j) = \int_{x_i - \Delta x / 2}^{x_i + \Delta x / 2} \int_{y_j - \Delta y / 2}^{y_j + \Delta y / 2} V_{nm}^{*}(x, y)\, dx\, dy \qquad (15)$$
where $\Delta x = \frac{2}{M}$ and $\Delta y = \frac{2}{N}$. The integrals $h_{nm}(x_i, y_j)$ can be estimated to address the non-trivial issue of accuracy. Here we use the formula most commonly used in the literature to calculate the pseudo-Zernike moments of discrete images:

$$\hat{A}_{nm} = \frac{n + 1}{\pi} \sum_{i=1}^{M} \sum_{j=1}^{N} V_{nm}^{*}(x_i, y_j)\, f(x_i, y_j)\, \Delta x\, \Delta y \qquad (16)$$

4.2 The advantages of pseudo-Zernike moments

Pseudo-Zernike moments have the following advantages.

4.2.1 The invariance properties

We use pseudo-Zernike moments for the shape descriptor because their magnitudes are invariant under image rotation and image flipping. We now elaborate on these invariance properties. If an image $f(r, \theta)$ is rotated $\alpha$ degrees counterclockwise, the rotated image is $f'(r, \theta) = f(r, \theta - \alpha)$, and the pseudo-Zernike moments of $f'(r, \theta)$ are

$$A'_{nm} = \frac{n + 1}{\pi} \int_{-\pi}^{\pi} \int_{0}^{1} f(r, \theta - \alpha)\, R_{nm}(r)\, e^{-jm\theta}\, r\, dr\, d\theta \qquad (17)$$

Substituting $\theta' = \theta - \alpha$, we get

$$A'_{nm} = \left[ \frac{n + 1}{\pi} \int_{-\pi}^{\pi} \int_{0}^{1} f(r, \theta')\, R_{nm}(r)\, e^{-jm\theta'}\, r\, dr\, d\theta' \right] e^{-jm\alpha} = A_{nm}\, e^{-jm\alpha} \qquad (18)$$

Thus the pseudo-Zernike moments of the rotated image are $A'_{nm} = A_{nm} \exp(-jm\alpha)$, which leads to $|A'_{nm}| = |A_{nm}|$: the magnitudes are robust to rotation.

Similarly, if an image $f(r, \theta)$ is flipped horizontally, the pseudo-Zernike moments of the resulting image are $A_{nm}^{(hf)} = A_{nm}^{*}$ if $m$ is even, and $A_{nm}^{(hf)} = -A_{nm}^{*}$ if $m$ is odd. If the image is flipped vertically, the pseudo-Zernike moments of the resulting image are $A_{nm}^{(vf)} = A_{nm}^{*}$ for $m$ even or odd. Hence the magnitudes of the pseudo-Zernike moments are robust to these image transformations.
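The discrete moment of Eq. (16) and the rotation invariance $|A'_{nm}| = |A_{nm}|$ derived above can be checked numerically. A 90° rotation of a square pixel grid needs no interpolation, so the check is exact up to rounding. Eqs. (10)-(11) are repeated inline so the snippet is self-contained, and the grid-to-unit-disc mapping is an illustrative choice.

```python
# Discrete pseudo-Zernike moment (Eq. (16)) and a check that a
# 90-degree image rotation leaves |A_nm| unchanged.
import cmath, math

def v_nm(n, m, x, y):
    """Basis function V_nm of Eqs. (10)-(11)."""
    rho, theta = math.hypot(x, y), math.atan2(y, x)
    ma = abs(m)
    r = sum((-1) ** s * math.factorial(2 * n + 1 - s)
            / (math.factorial(s) * math.factorial(n + ma + 1 - s)
               * math.factorial(n - ma - s)) * rho ** (n - s)
            for s in range(n - ma + 1))
    return r * cmath.exp(1j * m * theta)

def a_nm(img, n, m):
    """Discrete pseudo-Zernike moment of Eq. (16) for a nested-list image."""
    rows, cols = len(img), len(img[0])
    dx, dy = 2.0 / cols, 2.0 / rows
    acc = 0j
    for j in range(rows):
        for i in range(cols):
            # pixel centres mapped onto [-1, 1] x [-1, 1]
            x = (2 * i + 1 - cols) / cols
            y = (2 * j + 1 - rows) / rows
            if x * x + y * y <= 1.0:         # samples inside the unit disc
                acc += v_nm(n, m, x, y).conjugate() * img[j][i] * dx * dy
    return (n + 1) / math.pi * acc
```

Rotating the image array by 90° (`list(zip(*img[::-1]))`) multiplies $\hat{A}_{nm}$ by a pure phase, so the magnitudes agree to floating-point precision, matching Eq. (18).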
4.2.2 Multilayer expression

For an image, the low-order pseudo-Zernike moments capture the outline of the image, and the higher-order moments capture its detail. Figure 5 shows the binary image "H" and its reconstructed images; from left to right the orders are 5, 10, 15, 20, and 25. It is not difficult to see that the low-order pseudo-Zernike moments of a digital image capture the contour, while the high-order moments provide the detail.

Fig 5: The binary image H and its reconstructed images. (a) Original image; (b) reconstructed image (order 5); (c) reconstructed image (order 10); (d) reconstructed image (order 15); (e) reconstructed image (order 20); (f) reconstructed image (order 25)

4.3 Shape feature representation

Pseudo-Zernike moments are not scale or translation invariant. In our proposed method, scaling and translation invariance are first obtained by normalizing the images, and the magnitudes $|\hat{A}_{nm}|$ are then selected as the shape feature set for image retrieval. The pseudo-Zernike moments based shape feature vector is given by

$$F_s = \left( |\hat{A}_{00}|, |\hat{A}_{10}|, |\hat{A}_{11}|, |\hat{A}_{20}|, \ldots, |\hat{A}_{54}|, |\hat{A}_{55}| \right) \qquad (19)$$

5. SIMILARITY MEASURE

After the color, texture, and shape feature vectors are extracted, the retrieval system combines these feature vectors, determines the similarity between the combined feature vector of the query image and that of each target image in the database (DB), and retrieves a given number of the most similar target images.
Fig 6: Block diagram of the proposed method (the query image passes through color quantization, steerable filter decomposition, and pseudo-Zernike moment computation; the extracted color, texture, and shape features are combined and matched against the feature DB to retrieve similar images from the image DB)

5.1 Color feature similarity measure

MPEG-7 DCD matching does not fit human perception very well; it can cause incorrect ranks for images with similar color distributions [7]. We therefore use a modified distance function to improve the robustness of the color feature. Let the color feature of the query image $Q$ be $F_c^Q = \{(C_i, P_i),\; i = 1, \ldots, N_Q\}$ and the color feature of each target image $I$ in the image DB be $F_c^I = \{(C_i, P_i),\; i = 1, \ldots, N_I\}$. We define a new color feature similarity as

$$S_{Color}(Q, I) = \sum_{i=1}^{N_Q} \sum_{j=1}^{N_I} d_{ij}\, s_{ij} \qquad (20)$$

where $N_Q$ and $N_I$ denote the numbers of dominant colors in the query image $Q$ and the target image $I$; $d_{ij} = \| C_i^Q - C_j^I \|$ is the Euclidean distance between the dominant color $C_i^Q$ of the query image and the dominant color $C_j^I$ of the target image; and $s_{ij} = \bigl[ 1 - |P_i^Q - P_j^I| \bigr] \times \min(P_i^Q, P_j^I)$ is the similarity score between dominant colors. Here $P_i^Q$ and $P_j^I$ are the percentages of the $i$th and $j$th dominant colors in the query and target images. The term $\min(P_i^Q, P_j^I)$ is the intersection of $P_i^Q$ and $P_j^I$ and represents the similarity between two colors in percentage, while the bracketed term $1 - |P_i^Q - P_j^I|$ measures the difference of the two colors in percentage. If $P_i^Q$ equals $P_j^I$, their percentages are the same and the color similarity is given by $\min(P_i^Q, P_j^I)$; otherwise, a large difference between $P_i^Q$ and $P_j^I$ decreases the similarity measure.
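Equation (20) can be transcribed directly; the sketch below takes dominant-color features as lists of (color, percentage) pairs and follows the formula exactly as printed, with illustrative names:

```python
# Direct transcription of the modified dominant-colour similarity of
# Eq. (20): sum over all colour pairs of d_ij * s_ij, with
# d_ij the Euclidean colour distance and
# s_ij = (1 - |P_i^Q - P_j^I|) * min(P_i^Q, P_j^I).
import math

def color_similarity(fq, fi):
    total = 0.0
    for cq, pq in fq:
        for ci, pi in fi:
            d = math.dist(cq, ci)                 # d_ij
            s = (1 - abs(pq - pi)) * min(pq, pi)  # s_ij
            total += d * s
    return total
```

For example, with a feature holding black and white at 50% each, comparing the feature against itself leaves only the two cross-colour terms, each contributing half of the black-white distance.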
5.2 Texture feature similarity measure

The texture similarity is given by

$$S_{Texture}(Q, I) = \sum_{i=1}^{4} \left[ \left( \mu_i^Q - \mu_i^I \right)^2 + \left( \sigma_i^Q - \sigma_i^I \right)^2 \right] \qquad (21)$$

where $\mu_i^Q$, $\sigma_i^Q$ denote the texture features of the query image $Q$, and $\mu_i^I$, $\sigma_i^I$ denote the texture features of the target image $I$.

5.3 Shape feature similarity measure

The shape feature similarity is given by

$$S_{Shape}(Q, I) = \left( \sum_{i=0}^{5} \sum_{j} \left( |\hat{A}_{ij}^Q| - |\hat{A}_{ij}^I| \right)^2 \right)^{1/2} \qquad (22)$$

where $|\hat{A}_{ij}^Q|$ and $|\hat{A}_{ij}^I|$ denote the shape features of the query image $Q$ and the target image $I$. The overall distance used to compute the similarity between the query feature vector and the target feature vector is

$$S(Q, I) = w_C\, S_{Color}(Q, I) + w_T\, S_{Texture}(Q, I) + w_S\, S_{Shape}(Q, I)$$

where $w_C$, $w_T$, $w_S$ are the weights of the color, texture, and shape features. It should be pointed out that $S(Q, I)$ is not symmetric according to this formula, so we define the final similarity as

$$S'(Q, I) = \frac{S(I, Q) + S(Q, I)}{2} \qquad (23)$$

When retrieving images, we first estimate the similarity between the query image and each target image in the image DB and then rank the retrieval results according to the similarity value.

6. EXPERIMENTAL RESULTS

In this paper we propose a new and effective color image retrieval scheme combining color, texture, and shape information, which achieves higher retrieval performance. The color image retrieval system has been implemented in a Matlab 2010b environment on a Pentium dual-core (3 GHz) PC. To verify the retrieval efficiency of the proposed method, its performance is tested on a general-purpose image database of 1,000 images in 10 categories from the Corel image gallery. Corel images are widely used by the image processing and CBIR research communities; they cover a variety of topics, such as "flower", "bus", "eagle", "gun", "sunset", "horse", etc.
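The distance computations of Section 5 above, Eqs. (21)-(23), can be sketched as follows; the scalar inputs stand in for the individual feature similarities, and the weights default to the paper's setting of 1/3 each:

```python
# Texture distance (Eq. (21)), shape distance (Eq. (22)), and the
# weighted combination symmetrised as in Eq. (23).
import math

def texture_distance(ftq, fti):
    """Eq. (21): squared differences over the (mu_i, sigma_i) entries."""
    return sum((a - b) ** 2 for a, b in zip(ftq, fti))

def shape_distance(fsq, fsi):
    """Eq. (22): Euclidean distance over pseudo-Zernike magnitudes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fsq, fsi)))

def combined_similarity(sc, st, ss, wc=1/3, wt=1/3, ws=1/3):
    """Weighted sum S(Q, I); Eq. (23) averages S(Q, I) and S(I, Q).
    For pre-computed scalar terms the two directions coincide, so the
    average reduces to the weighted sum itself."""
    s_qi = wc * sc + wt * st + ws * ss
    s_iq = wc * sc + wt * st + ws * ss
    return (s_qi + s_iq) / 2
```

Ranking then simply sorts the target images of the DB by this final value.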
To evaluate the overall performance of the proposed image features in retrieval, a number of experiments were performed comparing the proposed method with the state of the art. The compared methods include the traditional color histogram and the color histogram of sub-blocks [3]. The parameter settings are: $T_d = 25$, $T_m = 0.06$, $w_C = w_T = w_S = 1/3$.

Figure 7 shows the image retrieval results using the traditional color histogram, the color histogram of sub-blocks [3], and the proposed method. To confirm the validity of the proposed algorithm, we randomly select 50 images as query images from the above image database (the 10 tested semantic classes include bus, flower, horse, dinosaur, building,
elephant, people, beach, scenery, and dish); 5 images are extracted from each class, and each query returns the first 20 most similar images as the retrieval results. The average normalized precision and the average normalized recall of the 5 query results per class are calculated.

Fig 7: (a) Query image; (b), (c), (d) retrieved images

7. CONCLUSION

CBIR is an active research topic in image processing, pattern recognition, and computer vision. In our method, CBIR is performed using the combination of a dynamic dominant color descriptor, a steerable filter texture feature, and a pseudo-Zernike moments shape descriptor. Experimental results show that our method is more efficient than other conventional methods with no greater feature vector dimension. The proposed method can be evaluated on more varied databases and applied to video retrieval, and it achieves a favorable average retrieval time compared with the other methods.

REFERENCES

[1] Ritendra Datta, Dhiraj Joshi, Jia Li, James Z. Wang, Image retrieval: ideas, influences, and trends of the new age, ACM Computing Surveys 40 (2) (2008) 1-60.
[2] J. Vogel, B. Schiele, Performance evaluation and optimization for content-based image retrieval, Pattern Recognition 39 (5) (2006) 897-909.
[3] Xuelong Li, Image retrieval based on perceptive weighted color blocks, Pattern Recognition Letters 24 (12) (2003) 1935-1941.
[4] ISO/IEC 15938-3/FDIS Information Technology - Multimedia Content Description Interface - Part 3: Visual, Jul. 2001, ISO/IEC/JTC1/SC29/WG11 Doc. N4358.
[5] L.M. Po, K.M. Wong, A new palette histogram similarity measure for MPEG-7 dominant color descriptor, Proceedings of the 2004 IEEE International Conference on Image Processing (ICIP04), Singapore, October 2004, pp. 1533-1536.
[6] Rui Min, H.D. Cheng, Effective image retrieval using dominant color descriptor and fuzzy support vector machine, Pattern Recognition 42 (1) (2009) 147-157.
[7] Nai-Chung Yang, Wei-Han Chang, Chung-Ming Kuo, Tsia-Hsing Li, A fast MPEG-7 dominant color extraction with new similarity measure for image retrieval, Journal of Visual Communication and Image Representation 19 (2) (2008) 92-105.
[8] Tzu-Chuen Lu, Chin-Chen Chang, Color image retrieval technique based on color features and image bitmap, Information Processing and Management 43 (2) (2007) 461-472.
[9] Young Deok Chun, Nam Chul Kim, Ick Hoon Jang, Content-based image retrieval using multiresolution color and texture features, IEEE Transactions on Multimedia 10 (6) (2008) 1073-1084.
[10] Y.D. Chun, S.Y. Seo, N.C. Kim, Image retrieval using BDIP and BVLC moments, IEEE Transactions on Circuits and Systems for Video Technology 13 (9) (2003) 951-957.
[11] C. Theoharatos, V.K. Pothos, N.A. Laskaris, G. Economou, Multivariate image similarity in the compressed domain using statistical graph matching, Pattern Recognition 39 (10) (2006) 1892-1904.
[12] Manesh Kokare, P.K. Biswas, B.N. Chatterji, Texture image retrieval using rotated wavelet filters, Pattern Recognition Letters 28 (10) (2007) 1240-1249.
[13] Ju. Han, Kai-Kuang Ma, Rotation-invariant and scale-invariant Gabor features for texture image retrieval, Image and Vision Computing 25 (9) (2007) 1474-1481.
[14] B.S. Manjunath, P. Salembier, T. Sikora, Introduction to MPEG-7: Multimedia Content Description Interface, Wiley, Chichester, 2002.
[15] D. Zhang, G. Lu, Review of shape representation and description techniques, Pattern Recognition 37 (1) (2004) 1-19.
[16] Xiaoqian Xu, Dah-Jye Lee, Sameer Antani, L. Rodney Long, A spine X-ray image retrieval system using partial shape matching, IEEE Transactions on Information Technology in Biomedicine 12 (1) (2008) 100-108.
[17] Chia-Hung Wei, Yue Li, Wing-Yin Chau, Chang-Tsun Li, Trademark image retrieval using synthetic features for describing global shape and interior structure, Pattern Recognition 42 (3) (2009) 386-394.
[18] S. Liapis, G. Tziritas, Color and texture image retrieval using chromaticity histograms and wavelet frames, IEEE Transactions on Multimedia 6 (5) (2004) 676-686.
[19] A. Vadivel, A.K. Majumdar, S. Sural, Characteristics of weighted feature vector in content-based image retrieval applications, Proceedings of the IEEE International Conference on Intelligent Sensing and Information Processing, Chennai, India, Jan. 2004, pp. 127-132.
[20] Y.D. Chun, Content-based image retrieval using multiresolution color and texture features, Ph.D. thesis, Dept. Elect. Eng., Kyungpook National Univ., Daegu, Korea, Dec. 2005 [Online]. Available: http://vcl.knu.ac.kr/phd_thesis/ydchun.pdf.
[21] P.S. Hiremath, J. Pujari, Content based image retrieval using color, texture and shape features, Proceedings of the 15th International Conference on Advanced Computing and Communications, Guwahati, India, December 2007, pp. 780-784.
[22] Y. Deng, B.S. Manjunath, C. Kenney, An efficient color representation for image retrieval, IEEE Transactions on Image Processing 10 (1) (2001) 140-147.
[23] M. Jacob, M. Unser, Design of steerable filters for feature detection using Canny-like criteria, IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (8) (2004) 1007-1019.
[24] A. Khotanzad, Y.H. Hong, Invariant image recognition by Zernike moments, IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (5) (1990) 489-497.
[25] K. Ganapathi Babu, A. Komali, V. Satish Kumar, A.S.K. Ratnam, An overview of content based image retrieval software systems, International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 2, 2012, pp. 424-432, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[26] Tarun Dhar Diwan, Upasana Sinha, Performance analysis based on color based image retrieval technique, International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, 2013, pp. 131-140, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[27] Abhishek Choubey, Omprakash Firke, Bahgwan Swaroop Sharma, Rotation and illumination invariant image retrieval using texture features, International Journal of Electronics and Communication Engineering & Technology (IJECET), Volume 3, Issue 2, 2012, pp. 48-55, ISSN Print: 0976-6464, ISSN Online: 0976-6472.