Effect of sub classes on the accuracy of the classified image

International Journal of Electronics and Communication Engineering & Technology (IJECET), ISSN 0976-6464 (Print), ISSN 0976-6472 (Online), Volume 3, Issue 3, October-December (2012), pp. 60-67, © IAEME: www.iaeme.com/ijecet.asp, Journal Impact Factor (2012): 3.5930 (Calculated by GISI), www.jifactor.com

EFFECT OF SUB-CLASSES ON THE ACCURACY OF THE CLASSIFIED IMAGE

Satyanarayana Chanagala1, Appala Raju Uppala2, Yedukondalu Kamatham3 and M. Ravi Kumar4
Dept. of ECE, ACE Engineering College, Ankushapur, Ghatkesar Mandal, Ranga Reddy, Hyderabad, India

ABSTRACT

Remotely sensed raw image data gathered by a satellite or aircraft must be corrected: even images of seemingly flat areas are distorted by both the curvature of the Earth and the sensor being used. An image therefore needs to be geometrically corrected so that it can be represented on a planar surface, conform to other images, and have the integrity of a map. A map projection system is any system designed to represent the surface of a sphere or spheroid, such as the Earth, on a plane. In this work, satellite imagery from the IRS-P6 LISS-III sensor at 23.5 m resolution is used. The image covers the Ramtek region of Nagpur (Maharashtra state, India). The raw FCC image is preprocessed (importing, subsetting and georeferencing), useful information is extracted by supervised classification, and an accuracy analysis is then performed. After the accuracy analysis, an attempt is made to further improve the accuracy of the classified image by increasing the number of classes. Surprisingly, the accuracy decreased; the reason for this is investigated.
Keywords: Remote Sensing, Geometric Corrections, Ground Control Points, Classification, Classes, Accuracy, Thematic Classes.

I. INTRODUCTION

The raw data received from the satellite contains the following flaws and deficiencies [1], [2]:
1. Errors due to the Earth's surface: covering a large extent of the Earth's surface (the swath width) introduces errors in pixel size, which leads to distortion.
2. Uncertainty in shape or pattern: the shape of the Earth's surface is irregular, with peaks and troughs.
3. Atmospheric conditions: clouds, shadows and water vapour cause loss of information in the image.
4. Errors may also arise from changes in platform attitude (roll, pitch and yaw), altitude, Earth rotation, Earth curvature, panoramic distortion and detector delay.
To overcome these flaws and deficiencies and recover the original content of the data, the image must undergo several processing steps. These vary from image to image, depending on the initial condition of the image and the information of interest.

II. IMAGE RECTIFICATION AND RESTORATION

Geometric distortions manifest themselves as errors in the position of a pixel relative to other pixels in the scene and with respect to its absolute position within some defined map projection. If left uncorrected, these geometric distortions render any data extracted from the image useless, particularly if the information is to be compared to other data sets, be it another image or a GIS data set. Distortions occur for many reasons: for instance, changes in platform attitude (roll, pitch and yaw), altitude, Earth rotation, Earth curvature, panoramic distortion and detector delay. Most of these distortions can be modelled mathematically and are removed before an image is purchased. Changes in attitude, however, can be difficult to account for mathematically, so a procedure called image rectification is performed. Satellite systems are nevertheless geometrically quite stable, and geometric rectification is a simple procedure based on a mapping transformation relating real ground coordinates, say easting and northing, to image line and pixel coordinates. Rectification is the process of geometrically correcting [4], [6] an image so that it can be represented on a planar surface and conform to other images or to a map, as shown in Fig. 1; that is, it is the process by which the geometry of an image is made planimetric. It is necessary whenever accurate area, distance and direction measurements must be made from the imagery.
This is achieved by transforming the data from one grid system into another using a geometric transformation. Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is already in the desired projection system, the image is planar and does not require rectification unless there is some skew or rotation. Scanning and digitizing produce planar images, but these do not contain any map coordinate information; they need only be georeferenced, a much simpler process than rectification. In many cases the image header can simply be updated with new map coordinate information, by redefining the map coordinate of the upper-left corner of the image and the cell size (the area represented by each pixel).

Ground Control Points (GCPs) [5] are specific pixels in the input image for which the output map coordinates are known. By using more points than are strictly necessary to solve the transformation equations, a least-squares solution can be found that minimises the sum of the squares of the errors. Care should be exercised when selecting ground control points, as their number, quality and distribution affect the result of the rectification. Once the mapping transformation has been determined, a procedure called resampling is employed. Resampling [5] matches the coordinates of image pixels to their real-world coordinates and writes a new image pixel by pixel. Since the grid of pixels in the source image rarely matches the grid of the reference image, the pixels are resampled so that new data file values for the output file can be calculated.
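The least-squares fit of a mapping transformation from GCPs can be sketched as follows. This is a minimal first-order (affine) example in Python with NumPy; the GCP pixel coordinates and easting/northing values are hypothetical, not taken from the Ramtek scene:

```python
import numpy as np

# Hypothetical GCPs: (column, row) pixel coordinates and the known map
# coordinates (easting, northing) of the same points.
pixel = np.array([[10, 10], [400, 20], [30, 380], [420, 400], [200, 200]], float)
ground = np.array([[500010.0, 899990.0], [500400.0, 899980.0],
                   [500030.0, 899620.0], [500420.0, 899600.0],
                   [500200.0, 899800.0]])

# First-order transform: E = a0 + a1*c + a2*r and N = b0 + b1*c + b2*r.
# With more GCPs than unknowns, least squares minimises the sum of
# squared residuals, exactly as described in the text.
A = np.column_stack([np.ones(len(pixel)), pixel[:, 0], pixel[:, 1]])
coef_E, *_ = np.linalg.lstsq(A, ground[:, 0], rcond=None)
coef_N, *_ = np.linalg.lstsq(A, ground[:, 1], rcond=None)

def to_map(col, row):
    """Map a pixel (col, row) into ground coordinates with the fitted transform."""
    return (coef_E[0] + coef_E[1] * col + coef_E[2] * row,
            coef_N[0] + coef_N[1] * col + coef_N[2] * row)

# Residuals at the GCPs give the RMS error of the rectification fit.
pred = A @ np.column_stack([coef_E, coef_N])
rmse = np.sqrt(np.mean(np.sum((pred - ground) ** 2, axis=1)))
```

A second-order polynomial transform simply adds c*c, r*r and c*r columns to A; resampling then walks the output grid, applies the inverse mapping and, for nearest-neighbour resampling, copies the closest input pixel value.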
Fig. 1. Image rectification: (A & B) input and reference image with GCP locations; (C) using polynomial equations, the grids are fitted together; (D) using a resampling method, the output grid pixel values are assigned.

III. INFORMATION EXTRACTION

Image Classification

The overall objective of image classification is to automatically categorize all pixels in an image into land cover classes or themes. Normally, multispectral data are used to perform the classification, and the spectral pattern present within the data for each pixel is used as the numerical basis for categorization; that is, different feature types manifest different combinations of DNs based on their inherent spectral reflectance and emittance properties. The term classifier refers loosely to a computer program that implements a particular classification procedure. These procedures vary greatly, so it is essential that the analyst understands the alternative strategies for image classification.

The traditional methods of classification follow two main approaches: unsupervised and supervised. The unsupervised approach attempts a spectral grouping that may have an unclear meaning from the user's point of view. Having established these groups, the analyst then tries to associate an information class with each one. The unsupervised approach is often referred to as clustering and results in statistics for spectral, statistical clusters. In the supervised approach, the image analyst supervises the pixel categorization process by supplying the computer algorithm with numerical descriptors of the various land cover types present in the scene. To do this, representative sample sites of known cover type, called training areas or training sites, are used to compile a numerical interpretation key that describes the spectral attributes of each feature type of interest.
Each pixel in the data set is then compared numerically to each category in the interpretation key and labelled with the name of the category it most resembles. In the supervised approach the user defines useful information categories and then examines their spectral separability, whereas in the unsupervised approach spectrally separable classes are determined first and their informational utility is defined afterwards.

It has been found that in areas of complex terrain, the unsupervised approach is preferable to the supervised one. Under such conditions, a user of the supervised approach will have difficulty selecting training sites because of the variability of spectral response within each class, and prior ground data collection can consequently be very time consuming. Also, the
supervised approach is subjective in the sense that the analyst tries to classify information categories that are often composed of several spectral classes, whereas spectrally distinguishable classes will be revealed by the unsupervised approach, so ground data collection requirements may be reduced. Additionally, the unsupervised approach has the potential advantage of revealing discriminable classes unknown from previous work. However, when the definition of representative training areas is possible and the statistical information classes show a close correspondence, the results of supervised classification will be superior to unsupervised classification.

Unsupervised Classification

Unsupervised classifiers do not utilize training data as the basis for classification. Rather, this family of classifiers involves algorithms that examine the unknown pixels in an image and aggregate them into a number of classes based on the natural groupings or clusters present in the image values. It performs very well when the values within a given cover type are close together in the measurement space and the data in different classes are comparatively well separated. The classes that result from unsupervised classification are spectral classes: because they are based solely on the natural groupings in the image values, the identity of the spectral classes is not initially known. The analyst must compare the classified data with some form of reference data (such as larger-scale imagery or maps) to determine the identity and informational value of the spectral classes.
In the supervised approach we define useful information categories and then examine their spectral separability; in the unsupervised approach we determine spectrally separable classes and then define their informational utility. There are numerous clustering algorithms that can be used to determine the natural spectral groupings present in a data set. One common form of clustering, called the K-means approach, also known as ISODATA (Iterative Self-Organizing Data Analysis Technique), accepts from the analyst the number of clusters to be located in the data. The algorithm then arbitrarily "seeds", or locates, that number of cluster centres in the multidimensional measurement space. Each pixel in the image is assigned to the cluster whose arbitrary mean vector is closest. After all pixels have been classified in this manner, revised mean vectors for each of the clusters are computed, and the revised means are used as the basis for reclassifying the image data. The procedure continues until there is no significant change in the location of the class mean vectors between successive iterations. Once this point is reached, the analyst determines the land cover identity of each spectral class. Because the K-means approach is iterative, it is computationally intensive and is therefore often applied only to image sub-areas rather than to full scenes.

Fig. 2. Validating the classified image against the rectified image
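The iterative K-means procedure described above can be sketched as follows. The two-band pixel values and cluster count are synthetic, and this minimal version omits the cluster splitting and merging refinements of full ISODATA:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "pixels" in two spectral bands, forming two natural clusters.
pixels = np.vstack([rng.normal([40, 60], 4, (100, 2)),
                    rng.normal([120, 30], 4, (100, 2))])

def kmeans(data, k, iters=50):
    # Arbitrarily "seed" k cluster centres in the measurement space.
    centres = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to the cluster whose mean vector is closest.
        dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute the mean vector of each cluster (keep empty ones in place).
        new = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        # Stop when the class mean vectors no longer move between iterations.
        if np.allclose(new, centres):
            break
        centres = new
    return centres, labels

centres, labels = kmeans(pixels, k=2)
```

After convergence the analyst, not the algorithm, decides which land cover each spectral cluster represents.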
Supervised Classification

Supervised classification can be defined as the process of using samples of known identity to classify pixels of unknown identity. Samples of known identity are pixels located within training areas; these form the training samples used to guide the classification algorithm in assigning specific spectral values to the appropriate informational class. The basic steps involved in a typical supervised classification procedure, illustrated in Fig. 3, are:
a. The training stage
b. Feature selection
c. Selection of an appropriate classification algorithm
d. Post-classification smoothing
e. Accuracy assessment

Fig. 3. Steps involved in a typical supervised classification procedure

Fig. 4. Classified image with different classes

Maximum Likelihood Classification Algorithm

The maximum likelihood decision rule assigns each pixel having pattern measurements or features x to the class c whose units are most probable, or most likely, to have given rise to feature vector x. It assumes that the training data statistics for each class in each band are normally
distributed, that is, Gaussian. In other words, training data with bi- or trimodal histograms in a single band are not ideal. In such cases, the individual modes probably represent individual classes that should be trained upon individually and labelled as separate classes; this would then produce unimodal, Gaussian training class statistics that fulfil the normal distribution requirement. The Bayes decision rule is identical to the maximum likelihood decision rule except that it does not assume that each class has equal prior probability. A priori probabilities have been used successfully as a way of incorporating the effects of relief and other terrain characteristics to improve classification accuracy. Maximum likelihood and Bayes classification require many more computations per pixel than either the parallelepiped or minimum-distance classification algorithms, and they do not always produce superior results.

Classification Accuracy Assessment

Quantitatively assessing classification accuracy requires the collection of some in situ data, or a priori knowledge about some parts of the terrain, which can then be compared with the remote-sensing-derived classification map. To assess classification accuracy it is thus necessary to compare two maps: 1) the remote-sensing-derived map, and 2) an assumed-true map (which may in fact contain some error). The assumed-true map may be derived from in situ investigation or, quite often, from the interpretation of remotely sensed data obtained at a larger scale or higher resolution.

Classification Error Matrix

One of the most common means of expressing classification accuracy is the preparation of a classification error matrix, sometimes called a confusion or contingency table.
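A minimal sketch of the maximum likelihood decision rule follows, assuming hypothetical per-class training statistics (mean vector and covariance matrix) in two bands; passing prior probabilities turns it into the Bayes rule:

```python
import numpy as np

# Hypothetical training statistics per class: mean vector and covariance
# matrix, as would be estimated from training areas in two spectral bands.
classes = {
    "water":  (np.array([20.0, 10.0]), np.array([[4.0, 1.0], [1.0, 3.0]])),
    "forest": (np.array([45.0, 80.0]), np.array([[6.0, 2.0], [2.0, 5.0]])),
    "urban":  (np.array([90.0, 60.0]), np.array([[9.0, 0.0], [0.0, 9.0]])),
}

def ml_classify(x, priors=None):
    """Assign feature vector x to the class with the largest Gaussian
    log-likelihood; supplying priors gives the Bayes decision rule."""
    best, best_score = None, -np.inf
    for name, (mu, cov) in classes.items():
        d = x - mu
        # Log-likelihood up to a constant: -0.5*ln|cov| - 0.5*d' cov^-1 d
        score = -0.5 * np.log(np.linalg.det(cov)) - 0.5 * d @ np.linalg.solve(cov, d)
        if priors is not None:
            score += np.log(priors[name])  # a priori class probability
        if score > best_score:
            best, best_score = name, score
    return best
```

For example, `ml_classify(np.array([21.0, 11.0]))` assigns the pixel to "water", the class whose training statistics make that feature vector most likely.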
Error matrices compare, on a category-by-category basis, the relationship between known reference data (ground truth) and the corresponding results of an automated classification. Such matrices are square, with the number of rows and columns equal to the number of categories whose classification accuracy is being assessed. Table 1 is an error matrix that an image analyst has prepared to determine how well a classification has categorized a representative subset of pixels used in the training process of a supervised classification. The matrix stems from classifying the sampled training-set pixels and listing the known cover types used for training (columns) against the pixels actually classified into each land cover category by the classifier (rows).

An error matrix expresses several characteristics of classification performance. For example, one can study the various classification errors of omission (exclusion) and commission (inclusion). Note in Table 1 that the training-set pixels classified into the proper land cover categories lie along the major diagonal of the error matrix (running from upper left to lower right). All non-diagonal elements of the matrix represent errors of omission or commission. Omission errors correspond to non-diagonal column elements (e.g. 16 pixels that should have been classified as "sand" were omitted from that category). Commission errors are represented by non-diagonal row elements (e.g. 38 urban pixels plus 79 hay pixels were improperly included in the corn category).

Several other measures, e.g. the overall accuracy of classification, can be computed from the error matrix. Overall accuracy is determined by dividing the total number of correctly classified pixels (the sum of the elements along the major diagonal) by the total number of reference pixels. Likewise, the accuracies of individual categories can be calculated by dividing the number of correctly classified pixels in each category by either the total number of pixels in the corresponding row or column.
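These error-matrix measures are straightforward to compute. The 3-class matrix below is hypothetical (it is not Table 1); "producer's" and "user's" accuracy are the column-wise and row-wise per-category variants:

```python
import numpy as np

# Hypothetical 3-class error matrix: rows = classified image (map),
# columns = reference data (ground truth).
labels = ["water", "forest", "urban"]
cm = np.array([[50,  2,  0],
               [ 3, 40,  5],
               [ 0,  4, 46]])

# Overall accuracy: diagonal sum (correct pixels) over all reference pixels.
overall = np.trace(cm) / cm.sum()

# Producer's accuracy: correct pixels over the column total (omission error).
producers = np.diag(cm) / cm.sum(axis=0)

# User's accuracy: correct pixels over the row total (commission error).
users = np.diag(cm) / cm.sum(axis=1)
```

Off-diagonal column entries are omission errors and off-diagonal row entries are commission errors, exactly as in the sand/urban/hay examples above.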
Producer's accuracy, which indicates how well the training-set pixels of a given cover type are classified, can be determined by dividing the number of correctly
classified pixels in each category by the number of training-set pixels used for that category (the column total). User's accuracy is computed by dividing the number of correctly classified pixels in each category by the total number of pixels classified into that category (the row total). This figure is a measure of commission error and indicates the probability that a pixel classified into a given category actually represents that category on the ground.

Note that the error matrix in the table indicates an overall accuracy of 84%. However, producer's accuracy ranges from just 51% (urban) to 100% (water), and user's accuracy ranges from 72% (sand) to 99% (water). This error matrix is based on training data. If the results are good, it indicates that the training samples are spectrally separable and the classification works well in the training areas. This aids in the training-set refinement process, but indicates little about classifier performance elsewhere in the scene.

TABLE 1. ANALYSIS OF ERROR MATRIX

IV. RESULTS AND DISCUSSION

After extraction of information by pattern recognition with the help of supervised classification, an accuracy analysis was performed. Next, in an attempt to further improve the overall accuracy, the number of thematic classes for the same FCC image was increased. The increased numbers of classes versus the overall accuracies are given in Fig. 5. Consider why this happened. Suppose that for 6 thematic classes, forests are considered a single class, while for 10 thematic classes forests are divided into dense and open forests. Accuracy is defined as the number of correctly classified pixels divided by the total number of pixels.
So for 6 classes the forests have a single histogram, while for 10 classes there are two different histograms for the two types of forest, and these histograms overlap each other. The common area between the two histograms corresponds to error pixels. This overlapping of histograms results in a decrease in the number of correctly classified pixels (Fig. 6), and hence the accuracy decreases.
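The accuracy loss from splitting one class into two overlapping sub-classes can be illustrated numerically. The DN distributions below are hypothetical stand-ins for the dense- and open-forest histograms of Fig. 6:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical single-band DN values for the two forest sub-classes.
dense_forest = rng.normal(60, 8, 5000)
open_forest = rng.normal(75, 8, 5000)

# Build normalised histograms on a common set of bins.
bins = np.linspace(30, 110, 81)
h_dense, _ = np.histogram(dense_forest, bins=bins, density=True)
h_open, _ = np.histogram(open_forest, bins=bins, density=True)

# The shared area under the two histograms is the ambiguous region:
# pixels falling there are the likely error pixels of the split classes.
width = bins[1] - bins[0]
overlap = np.sum(np.minimum(h_dense, h_open)) * width
```

For these assumed distributions a sizeable fraction of the pixels falls in the overlap, so splitting the single forest class costs accuracy even though the class legend becomes richer.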
Fig. 5. Number of classes vs. accuracy

Fig. 6. Overlapping of histograms for 10 thematic classes

V. CONCLUSION

In this paper, emphasis is laid on increasing the overall accuracy of an image classified with the supervised classification technique by increasing the number of thematic classes. However, the results showed otherwise. Researchers and policy makers should therefore be cautious in selecting the number of thematic classes before taking up the overall accuracy assessment of images to be used for strategic applications.

REFERENCES

[1]. Ling Peng, Zhongming Zhao, Linli Cui and Lu Wang, "Remote sensing study based on IRSA Remote Sensing Image Processing System," Proceedings, IEEE International Geoscience and Remote Sensing Symposium (IGARSS '04), Vol. 7, 2004, pp. 4829-4832.
[2]. Satyanarayana Chanagala, Appala Raju Uppala, Yedukondalu Kamatham and M. Ravi Kumar, "Information Extraction through Multispectral Data Processing in Remote Sensing Image Evaluation," International Journal of Graphics & Image Processing, Vol. 2, Issue 3, August 2012, pp. 224-228, ISSN: 2249-5452.
[3]. Chien-Wen Chen, Tsung-Ching Lin, Shi-Huang Chen and Trieu-Kien Truong, "A Near Lossless Wavelet-Based Compression Scheme for Satellite Images," 2009 WRI World Congress on Computer Science and Information Engineering, Vol. 6, 2009, pp. 528-532.
[4]. N. G. Kardoulas, A. C. Bird and A. I. Lawan, "Geometric Correction of SPOT and Landsat Imagery: A Comparison of Map- and GPS-Derived Control Points," Photogrammetric Engineering & Remote Sensing, Vol. 62, No. 10, October 1996, pp. 1173-1177.
[5]. J. Anthony Parker, Robert V.
Kenyon and Donald E. Troxel, "Comparison of Interpolating Methods for Image Resampling," IEEE Transactions on Medical Imaging, Vol. MI-2, No. 1, March 1983, pp. 31-39.
[6]. Ardeshier Goshtasby, "Geometric Correction of Satellite Images using Composite Transformation Functions," Twenty-First International Symposium on Remote Sensing of Environment, Ann Arbor, Michigan, October 26-30, 1987.