ISSN: 2277 – 9043, International Journal of Advanced Research in Computer Science and Electronics Engineering (IJARCSEE), Volume 1, Issue 6, August 2012

Image Segmentation in Satellite Image using Optimal Texture Measures

G. Viji (Assistant Professor, P.S.R. Rengasamy College of Engineering for Women, Sivakasi), N. Nimitha (Lecturer, M. Kumarasamy College of Engineering, Karur), A. Kalarani (Assistant Professor, P.S.R. Rengasamy College of Engineering for Women, Sivakasi)

Abstract - Texture in high resolution satellite images requires a substantial amendment of the conventional segmentation algorithms. In this paper, a satellite image is segmented using optimal texture measures. The satellite image used in this paper is high resolution data that provides more detail of urban areas, but it also creates additional problems for information extraction using automatic classification. This work improves the classification accuracy of intra-urban land cover types. Four texture measures are evaluated using the grey-level co-occurrence matrix (GLCM). Four texture indices with six window sizes are obtained from the satellite image. Principal Component Analysis (PCA) is applied to these texture measures. The resultant image is then compared with the homogeneity texture feature image obtained using a 7×7 window. The per-pixel classification accuracy is improved in this work by varying the window size.

Keywords - Gray Level Co-occurrence Matrix (GLCM), Principal Component Analysis (PCA), Remote Sensing, Satellite Image, Segmentation.

I. INTRODUCTION

Image segmentation plays an important role in human vision, computer vision and pattern recognition. Segmentation refers to the process of partitioning a digital image into multiple segments. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. In order to better explain the structure of this work, preliminary information about satellite images and remote sensing is discussed first [1].

Remote sensing is the science of obtaining information about an object, area or phenomenon through the analysis of data acquired by a device that is not in contact with the object [1]. Commonly, remote sensing refers to the collection and analysis of data regarding the earth using electromagnetic sensors operated from a space-borne platform. A satellite image is a remotely sensed image, defined as a picture of the earth taken from an earth-orbiting satellite. Such an image contains buildings, roads, vegetation, water bodies and other open areas. Satellite images are an important information source and provide current information on a periodic basis at low cost.

A satellite image contains both micro textures and macro textures. For micro textures the statistical approach seems to work well; statistical approaches have included autocorrelation functions, digital transforms, and gray level tone co-occurrence. For macro textures the approach has moved towards histograms of primitive properties and the co-occurrence of primitive properties, in both structural and statistical forms. These techniques are not sufficient to segment high resolution images because of the variability of spectral and structural information in such images [2].

Thus spatial pattern or texture analysis becomes necessary to segment high resolution images. The proposed method is based on feature extraction from the gray level co-occurrence matrix, which is a well known method for analysing texture features. Segmentation based on these texture features can improve the accuracy of the interpretation. A problem that frequently arises when segmenting an image is that the number of feature variables, or dimensionality, is often quite large. It becomes necessary to reduce the number of variables to a manageable size while retaining as much discriminating information as possible. In this paper, principal component analysis is introduced to solve this problem.

The paper is organized as follows. Section II presents the proposed methodology, Section III describes Principal Component Analysis (PCA), and Section IV reports the results and discussion. Finally, conclusions are given in Section V.
II. PROPOSED METHODOLOGY

Fig. 1 shows the structure of the proposed methodology. It consists of two steps: Step 1, selection of the optimal window size, and Step 2, selection of the optimal texture measure. The features used in this experiment are derived from the gray level co-occurrence matrix. The details of this texture analysis are given in the following subsections.

A. Gray level co-occurrence matrix

The gray level co-occurrence matrix is the two-dimensional matrix of joint probabilities Pd,r(i, j) between pairs of pixels separated by a distance d in a given direction r. It can be obtained by counting how often a pixel with gray level value i occurs horizontally adjacent to a pixel with the value j: each element (i, j) of the GLCM specifies the number of times that a pixel with value i occurs horizontally adjacent to a pixel with value j. It is used to detect objects of different sizes and directions. The co-occurrence matrix values are calculated for six window sizes (3×3, 5×5, 7×7, 9×9, 11×11, 13×13) [3]. The GLCM is popular in texture description and is based on the repeated occurrence of some gray level configuration in the texture; this configuration varies rapidly with distance in fine textures and slowly in coarse textures.
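As a concrete illustration of the construction described in this subsection, the following is a minimal NumPy sketch, not the authors' implementation, that accumulates a horizontal co-occurrence matrix over a single window and normalises it to joint probabilities. The function name, the 7×7 example window and the choice of 8 gray levels are illustrative assumptions.

```python
import numpy as np

def glcm_horizontal(window, levels, d=1):
    """Co-occurrence matrix for pixel pairs at horizontal distance d.

    window: 2-D array of integer gray values in the range [0, levels).
    Returns P with P[i, j] = joint probability of a pixel with value i
    having a pixel with value j at distance d to its right.
    """
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = window.shape
    for r in range(rows):
        for c in range(cols - d):
            i, j = window[r, c], window[r, c + d]
            P[i, j] += 1                      # count the (i, j) co-occurrence
    total = P.sum()
    return P / total if total > 0 else P      # normalise to probabilities

# Example: one 7x7 window quantised to 8 gray levels (hypothetical data).
rng = np.random.default_rng(0)
window = rng.integers(0, 8, size=(7, 7))
P = glcm_horizontal(window, levels=8)
print(P.shape, round(P.sum(), 6))             # (8, 8) 1.0
```

In practice the same counting would be repeated for each of the six window sizes mentioned above; only the single-window step is sketched here.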
B. Feature extraction

In order to estimate the similarity between different gray level co-occurrence matrices, [4] proposed 14 statistical features extracted from them. To reduce the computational complexity, only some of these features were selected. The four most relevant features, widely used in the literature [5, 6, 7], are described in Table 1. These four features are calculated from the gray level co-occurrence matrices of the different window sizes (3×3, 5×5, 7×7, 9×9, 11×11, 13×13).

TABLE 1
TEXTURE MEASURES

Homogeneity: $\sum_{i=0}^{n-1}\sum_{j=0}^{n-1} \frac{P_d(i,j)}{1 + |i - j|}$

Dissimilarity: $\sum_{i=0}^{n-1}\sum_{j=0}^{n-1} P_d(i,j)\,|i - j|$

Entropy: $-\sum_{i=0}^{n-1}\sum_{j=0}^{n-1} P_d(i,j)\log P_d(i,j)$

Angular Second Moment: $\sum_{i=0}^{n-1}\sum_{j=0}^{n-1} P_d(i,j)^2$

where i, j are the coordinates in the co-occurrence matrix, Pd(i, j) is the co-occurrence matrix value at the coordinates (i, j), and n is the dimension of the co-occurrence matrix.

Homogeneity is a measure of the overall smoothness of an image. It is high for GLCMs whose elements are localized near the diagonal: when the range of gray levels is small, Pd(i, j) tends to be clustered around the main diagonal [4]. Dissimilarity can be used to quantify the differences between two images.

Entropy is a statistical measure of randomness that can be used to characterize the texture of the input image. It is high when the elements of the GLCM have relatively equal values [6], and low when the elements are close to either 0 or 1 (when the image is uniform within the window). Entropy is inversely proportional to GLCM energy.

Angular Second Moment [6] is a measure of homogeneity of the image. It is high when the GLCM has few entries of large magnitude and low when all entries are almost equal; it is the opposite of entropy. This information is specified by the matrix of relative frequencies Pd(i, j) with which two neighbouring pixels occur on the image, one with gray value i and the other with gray value j.
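To make Table 1 concrete, the sketch below evaluates the four measures from a normalised GLCM with NumPy. It is a plain transcription of the formulas above rather than the authors' code; treating 0·log 0 as 0 for empty GLCM cells is an assumption, and the small example matrix is hypothetical.

```python
import numpy as np

def texture_measures(P):
    """Four Table 1 measures from a normalised GLCM P of size n x n."""
    n = P.shape[0]
    i, j = np.indices((n, n))
    diff = np.abs(i - j)
    homogeneity = np.sum(P / (1.0 + diff))
    dissimilarity = np.sum(P * diff)
    nonzero = P > 0                            # skip empty cells: 0 * log 0 := 0
    entropy = -np.sum(P[nonzero] * np.log(P[nonzero]))
    asm = np.sum(P ** 2)                       # angular second moment
    return homogeneity, dissimilarity, entropy, asm

# Example with a small hand-made GLCM concentrated near the diagonal,
# i.e. a fairly homogeneous window.
P = np.array([[0.4, 0.1, 0.0],
              [0.1, 0.3, 0.0],
              [0.0, 0.0, 0.1]])
print(texture_measures(P))
```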
In Step 1, the classification procedure using textural measures depends largely on the selected window size. The optimal window size chosen in our implementation is 7×7, since it gives superior performance [3]. If the window size is too small, insufficient spatial information is extracted to characterise a specific land cover; if it is too large, the window can overlap two types of ground cover and thus introduce erroneous spatial information.

In Step 2, the analysis of the correlation matrix among all the texture measures with the six window sizes highlights high correlations [3] between the same texture measure with different window sizes and between different texture measures with different window sizes. The four texture measures are calculated for each window size and principal component analysis (PCA) is applied to the 24 texture measures [3]. Then, on the one hand, the first three components are extracted, while on the other hand, only the first component is extracted. Next, a texture measure is calculated for the six window sizes and PCA is applied for each type of texture measure.

III. PRINCIPAL COMPONENT ANALYSIS

The steps involved in the implementation of PCA using the covariance method are listed below; a short sketch in code follows the list.
• Organize the data set
• Calculate the mean
• Calculate the deviations from the mean
• Find the covariance matrix
• Find the eigenvectors and eigenvalues of the covariance matrix
• Rearrange the eigenvectors and eigenvalues
• Transform the data into the eigenspace to obtain the PCA parameters
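The following is a minimal NumPy sketch of the covariance-method steps listed above, applied to a feature matrix with one row per pixel and one column per texture measure. The function name, the synthetic data and the choice of three retained components are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pca_covariance(X, n_components):
    """PCA by the covariance method on X of shape (n_samples, n_features)."""
    mean = X.mean(axis=0)                 # calculate the mean
    D = X - mean                          # deviations from the mean
    C = np.cov(D, rowvar=False)           # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues/vectors of symmetric C
    order = np.argsort(eigvals)[::-1]     # rearrange in decreasing variance
    eigvecs = eigvecs[:, order]
    return D @ eigvecs[:, :n_components]  # transform into the PCA space

# Example: 24 texture measures per pixel reduced to the first 3 components,
# mirroring the 4 measures x 6 window sizes described in Step 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 24))            # hypothetical pixel-by-feature data
components = pca_covariance(X, n_components=3)
print(components.shape)                   # (500, 3)
```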
IV. RESULTS AND DISCUSSION

In this paper, two types of images are considered to improve the global accuracy. In the first type, 10 texture feature images are integrated and classified using a threshold method. In the second type, individual texture images are taken and classified using the threshold method. Both results are compared with the homogeneity (7×7) textural measure. The visualization of the textural images shows a similarity between the dissimilarity and the angular second moment, because these two textural indices measure the homogeneity of images, as shown in Fig. 2(b) and 2(d). The high value areas (white) refer to homogeneous areas such as water. The low values (black) characterize heterogeneous areas such as the built-up classes.

Fig. 1 Strategy of the textural analysis

Fig. 3 shows the classification results of the textural images. The classification result obtained using the integration of all texture images is shown in Fig. 3(a); it gives a higher global accuracy than the other textural images because the regions are more homogeneous there. Nevertheless, the homogeneity measure with a 7×7 window size seems to be optimal regarding the rate of correct classification, and hence the homogeneity feature image is used for comparison. In this homogeneity texture feature image, the four regions 1, 2, 3 and 4 correspond to buildings, roads, water and vegetation areas respectively. The numbers of pixels in these regions are 486311, 24357, 1728 and 132 respectively.

The success of the proposed image segmentation is shown in the form of a confusion matrix in Table 2. In this table, the numbers of pixels correctly and incorrectly classified in the various regions are reported for the individual feature images and for the integrated texture feature images. Note that the homogeneity texture feature images (i.e. 1 and 7) are not considered, since the classification accuracy is compared against the homogeneity feature image only.

From Table 2 it is observed that the accuracy of the integration of the 10 texture feature images is high compared to the other texture feature images. In Table 2, if the region is the same for the row and the column, the region is correctly classified; otherwise it is incorrectly classified. For example, in the integration of the 10 texture feature images, if the region is 1 for both row and column, it represents the correct classification of buildings; if the region is 1 for the row and 2 for the column, it represents the incorrect classification of buildings as roads. The number of pixels correctly classified in region 1 is 483802, in region 2 is 10651, in region 3 is 884 and in region 4 is 74. The other numbers in each row correspond to the incorrectly classified pixels.
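Using only the figures reported in the text and in Table 2, the overall accuracy of a classification is the sum of the diagonal of the confusion matrix divided by the total number of pixels. The short check below, a sketch with illustrative variable names rather than the authors' code, reproduces the 96.66% reported for the integration of the 10 texture feature images.

```python
import numpy as np

# Confusion matrix for the integration of 10 texture feature images (Table 2);
# rows are the actual regions and columns the assigned regions,
# ordered 1-Buildings, 2-Roads, 3-Water, 4-Vegetation.
cm = np.array([
    [483802,  2509,   0,  0],
    [ 13661, 10651,  45,  0],
    [     0,   819, 884, 25],
    [     0,     0,  58, 74],
])

overall_accuracy = np.trace(cm) / cm.sum()   # correctly classified / all pixels
print(f"{100 * overall_accuracy:.2f}%")      # 96.66%
```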
Fig. 2 Extracts of the different co-occurrence-based texture measures: (a) original image; (b) angular second moment; (c) homogeneity; (d) dissimilarity; (e) entropy
Fig. 3 Classification results of the textural images with the texture measure (Hom 7×7): (a) integration of 10 texture feature images; (b) 3rd texture feature image; (c) 4th texture feature image; (d) 5th texture feature image; (e) 6th texture feature image; (f) 7th texture feature image; (g) 8th texture feature image; (h) 9th texture feature image

TABLE 2
CONFUSION MATRIX OF VARIOUS TEXTURAL IMAGES
(rows: actual region; columns: classified region)

Texture image (accuracy %)               Region        1        2       3      4
Integration of 10 texture
feature images (96.66)                      1      483802     2509       0      0
                                            2       13661    10651      45      0
                                            3           0      819     884     25
                                            4           0        0      58     74
2nd texture image (94.92)                   1      486311        0       0      0
                                            2       24282       75       0      0
                                            3        1334      307      73     14
                                            4          40       41      32     19
3rd texture image (94.92)                   1      486311        0       0      0
                                            2       24208      149       0      0
                                            3        1108      602      17      1
                                            4           3       81      40      8
4th texture image (94.92)                   1      486311        0       0      0
                                            2       24208      149       0      0
                                            3        1108      602      17      1
                                            4           3       81      40      8
5th texture image (85.7)                    1      423591    62095     617      8
                                            2        8044    14762    1504     47
                                            3          16      857     767     88
                                            4           8       30      55     39
6th texture image (96)                      1      485437      874       0      0
                                            2       17310     7045       2      0
                                            3           0     1215     507      6
                                            4           0        0      83     49
8th texture image (94.96)                   1      486309        2       0      0
                                            2       24075      282       0      0
                                            3        1081      549      83     15
                                            4          18       55      37     22
9th texture image (93.9)                    1      472032    14235      44      6
                                            2       15114     8883     356      4
                                            3           1     1314     394     19
                                            4           0       54      58     20
10th texture image (93.9)                   1      472032    14235      44      6
                                            2       15114     8883     356      4
                                            3           1     1314     394     19
                                            4           0       54      58     20

Region: 1-Buildings, 2-Roads, 3-Water, 4-Vegetation
V. CONCLUSIONS

This paper confirms the utility of textural analysis for enhancing the per-pixel classification accuracy of high resolution images, especially in urban areas where the images are spectrally more heterogeneous. For the texture analysis, it is noted that the best co-occurrence based texture measure is the homogeneity with a 7×7 window size. A satellite image consists of both micro textures and macro textures: for micro textures a small window size is sufficient, while macro textures require a large window size. For this reason, the per-pixel classification can be improved by varying the window size. The co-occurrence based principal components (the integration of all textural images) give a higher accuracy than the other textural images. Moreover, as the window size for texture analysis is related to the image resolution and the contents of the image, it would be interesting to choose different window sizes according to the size of the features to be extracted.

REFERENCES

[1] Manimala Singha et al., "Color Image Segmentation for Satellite Images," International Journal on Computer Science and Engineering, 2011.
[2] A. P. Carleer, O. Debeir, and E. Wolff, "Assessment of Very High Spatial Resolution Satellite Image Segmentations," Photogrammetric Engineering and Remote Sensing, vol. 71, no. 11, pp. 1285-1294, 2005.
[3] A. Puissant, J. Hirsch, and C. Weber, "The utility of texture analysis to improve per-pixel classification for high to very high spatial resolution imagery," International Journal of Remote Sensing, vol. 26, no. 4, pp. 733-745, 2005.
[4] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural Features for Image Classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610-621, Nov. 1973.
[5] S. Arivazhagan and L. Ganesan, "Texture Classification using Wavelet Transform," Pattern Recognition Letters, vol. 24, pp. 1513-1521, 2003.
[6] A. Baraldi and F. Parmiggiani, "An Investigation of the Textural Characteristics Associated with Gray Level Co-occurrence Matrix Statistical Parameters," IEEE Transactions on Geoscience and Remote Sensing, vol. 33, no. 2, pp. 293-304, 1995.
[7] R. M. Haralick, "Statistical and Structural Approaches to Texture," Proceedings of the IEEE, vol. 67, no. 5, pp. 786-804, May 1979.
[8] H. Anys, A. Bannari, D. C. He, and D. Morin, "Texture Analysis for the Mapping of Urban Areas using Airborne MEIS-II Images," in Proceedings of the First International Airborne Remote Sensing Conference and Exhibition, vol. III, pp. 231-245, Sep. 1994.
[9] P. Dulyakarn, Y. Rangsanseri, and P. Thitimajshima, "Textural Classification of Urban Environment using Gray Level Co-occurrence Matrix Approach," 2nd International Conference on Earth Observation and Environmental Information, 2000.
[10] J. S. Weszka, C. R. Dyer, and A. Rosenfeld, "A Comparative Study of Texture Measures for Terrain Classification," IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-6, no. 4, 1976.
[11] J. Gu, J. Chen, Q. M. Zhou, and H. W. Zhang, "Quantitative Textural Parameter Selection for Residential Extraction from High Resolution Remotely Sensed Imagery," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. B4, no. 37, 2008.
[12] G. Meinel and M. Neubert, "A Comparison of Segmentation Programs for High Resolution Remote Sensing Data," International Archives of Photogrammetry and Remote Sensing, vol. 35, pp. 1097-1105, 2004.
[13] O. O. Yashon, J. Tetuko, and R. Tateishi, "Analysis of Co-occurrence and Discrete Wavelet Transform Textures for Differentiation of Forest and Non-forest Vegetation in Very High Resolution Optical-Sensor Imagery," International Journal of Remote Sensing, vol. 29, no. 12, pp. 3417-3456, 2008.
[14] W. K. Pratt, Digital Image Processing, 2nd edition. New York: Wiley.
[15] D. J. Marceau, P. J. Howarth, J. M. M. Dubois, and D. J. Gratton, "Evaluation of the Grey Level Co-occurrence Matrix Method for Land Cover Classification using SPOT Imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 28, pp. 513-519, 1990.
[16] N. Haala and C. Brenner, "Extraction of Buildings and Trees in Urban Environments," Photogrammetric Engineering and Remote Sensing, vol. 54, pp. 130-137, 1999.

Viji Gurusamy received the B.E. degree in Electronics and Communication Engineering from Anna University, Chennai, in 2008 and the M.E. degree from Anna University, Tirunelveli, in 2010. From June 2010 to May 2012 she worked at M. Kumarasamy College of Engineering, Karur. She is currently working at P.S.R. Rengasamy College of Engineering for Women, Sivakasi. She has attended four international conferences and one national conference at various colleges. Her research areas include digital signal processing, digital image processing and digital communication.

Kalarani Athilingam completed her B.E. degree in Electronics and Communication Engineering from Anna University, Chennai, in 2008 and the M.E. degree from Anna University, Tirunelveli, in 2010. Since June 2010 she has been working at P.S.R. Rengasamy College of Engineering for Women, Sivakasi. Her research areas include digital electronics, digital image processing, antennas and communication. She has attended several workshops and conferences at various engineering colleges.

Nimitha N. received the B.E. degree in Electronics and Communication Engineering from Anna University, Chennai, in 2006 and is pursuing the M.E. degree at Anna University, Coimbatore. Since June 2008 she has been working at M. Kumarasamy College of Engineering, Karur. Her research areas include wireless networks, digital communication, digital image processing and optical communication.
