Lecture for Landsat

  1. 1. Landsat properties
  2. 2. History: Landsat Players – NASA, USGS, USDA. (Images: Great Sandy Desert, Australia – NASA Earth Observatory; Garden City, Kansas – NASA Earth as Art)
  3. 3. Landsat Timeline http://landsat.gsfc.nasa.gov/about/timeline.html
  4. 4. Landsat Sensors and Platforms (image: Chesapeake Bay. Goddard, NASA. http://landsat.gsfc.nasa.gov/news/news-archive/soc_0017.html)
     - Landsat 1: RBV, MSS
     - Landsat 2: RBV, MSS
     - Landsat 3: RBV, MSS
     - Landsat 4: MSS, TM
     - Landsat 5: MSS, TM
     - Landsat 7: ETM+
  5. 5. Return Beam Vidicon (RBV)
     - Three-spectral-band (green, red, near infrared) camera – essentially a high-quality, calibrated television camera
     - 2-dimensional array form
     - Considered by some to be unsuccessful
  6. 6. Multispectral Scanner (MSS)
     - Landsats 1-5 (data collected from 1972-1992)
     - Whiskbroom sensor
     - 80 m resolution, 185 km swath
     - Landsats 1-3: altitude 920 km, 18-day repeat coverage cycle
     - Landsats 4 & 5: altitude 705 km, 16-day repeat coverage cycle
  7. 7. Multispectral Scanner (MSS) (image credits: UNC, http://www.cpc.unc.edu/projects/nangrong/data/spatial_data/remote_sensing/satellite_imagery/image_inventory/mss_images; EROS, USGS, http://edc.usgs.gov/products/satellite/mss.php)
  8. 8. Thematic Mapper (TM)
     - Whiskbroom sensor
     - Wavelength range: visible, NIR, MIR, TIR
     - 16 detectors each for visible, NIR, MIR; 4 detectors for TIR
     - 30 m resolution for visible, NIR, MIR; 120 m resolution for TIR
  9. 9. Enhanced Thematic Mapper Plus (ETM+) (EROS, USGS. http://landsat.gsfc.nasa.gov/about/etm+.html)
     - Whiskbroom scanner
     - 183 km swath, 705 km altitude
     - 16-day repeat cycle
     - Six 30 m reflective bands (blue, green, red, NIR, SWIR1, SWIR2)
     - 15 m panchromatic band
     - 60 m TIR band
  10. 10. Iraq. Earth Observatory, NASA
  11. 11. Landsat scan line corrector malfunction artifact
  12. 13. World Reference System (WRS) (Goddard, NASA. http://landsat.gsfc.nasa.gov/about/wrs.html)
     - Global notation system for Landsat data
     - Each scene center is designated by Path and Row numbers
       - Path: longitudinal aspect of the location; assigned from east to west
       - Row: latitudinal center line of a frame of imagery
     - WRS-1 used for Landsats 1-3
     - WRS-2 used for Landsats 4-7: corrected for differences in repeat cycles, coverage, swath patterns and path/row designators
  13. 15. Radiometric properties
     - Landsats 1, 2 and 3 all carried the Multispectral Scanner (MSS)
       - Radiometric precision of 6 bits (64 possible values)
       - Four spectral bands
     - Landsats 4 and 5 carried the Thematic Mapper (TM)
       - Radiometric precision of 8 bits (256 possible values)
       - Seven spectral bands
       - 30 m spatial resolution
       - Radiometric error correction within 1 quantum level
     - Landsat 7 carries the Enhanced Thematic Mapper Plus (ETM+)
       - Radiometric precision of 8 bits (256 possible values)
       - Bands 1-5 & 7 have 30 m resolution; band 6 has 60 m and band 8 has 15 m
       - Gain states allow imaging in a low gain state when the scene is bright and a high gain state when the scene is dark
       - Gain can be set for six surface categories (land, desert, ice, water, sea ice, volcano/night)
  14. 16. Sun Elevation and Gain States Source: http://landsathandbook.gsfc.nasa.gov/handbook/handbook_htmls/chapter6/chapter6.html
  15. 17. Data order: http://glovis.usgs.gov/
  16. 18. Data order: http://glovis.usgs.gov/
  17. 19. Data information: http://landsat.usgs.gov/
  18. 20. Data information: http://landsat.usgs.gov/
  19. 21. Image preprocessing
  20. 22. Cubic Convolution (cont.)
     - Potential use
       - Preferred for non-categorical data (continuous variables, e.g. temperature, % cover)
       - Conversion of values during the pre-processing stage
  21. 23. Problem of Varying Illumination USDA Forest Service, Remote Sensing Applications Center, http://fsweb.rsac.fs.fed.us and UAS ENVS403
  22. 24. Band B has the Same Problem USDA Forest Service, Remote Sensing Applications Center, http://fsweb.rsac.fs.fed.us and UAS ENVS403
  23. 25. Ratio of Band A to Band B USDA Forest Service, Remote Sensing Applications Center, http://fsweb.rsac.fs.fed.us and UAS ENVS403
  24. 26. Conversion to reflectance and COST atmospheric correction
  25. 27. Radiance conversion (Landsat 7 Science Data Users Handbook)
     - TM radiance: Lsat = bias + gain * DN
     - ETM+ radiance: Lsat = ((LMAXλ - LMINλ) / (QCALMAX - QCALMIN)) * (QCAL - QCALMIN) + LMINλ
     - Input data are contained in the metadata files of the Landsat TM (gain and bias for each band) or ETM+ (LMAXλ, LMINλ, QCALMAX, QCALMIN) images
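
A minimal sketch of these conversions in Python. The function names and the example gain/bias and LMAX/LMIN values are illustrative only; real values come from each scene's metadata file.

```python
import numpy as np

def dn_to_radiance_tm(dn, gain, bias):
    """TM: at-sensor radiance from digital numbers, Lsat = bias + gain * DN.
    gain and bias are read from the scene metadata for each band."""
    return bias + gain * dn.astype(np.float64)

def dn_to_radiance_etm(qcal, lmax, lmin, qcalmax=255.0, qcalmin=1.0):
    """ETM+: rescale calibrated DNs (QCAL) to radiance using the LMAX/LMIN
    and QCALMAX/QCALMIN values reported in the metadata file."""
    qcal = qcal.astype(np.float64)
    return (lmax - lmin) / (qcalmax - qcalmin) * (qcal - qcalmin) + lmin

# Illustrative use with placeholder calibration values:
dn = np.array([[10, 120], [200, 255]], dtype=np.uint8)
radiance = dn_to_radiance_etm(dn, lmax=191.6, lmin=-6.2)
```
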
  26. 29. Reflectance conversion without atmospheric correction (Landsat 7 Science Data Users Handbook)
     - ρ = (π * Lsatλ * d²) / (ESUNλ * cos θ)
     - ρ – planetary (top-of-atmosphere) reflectance
     - Lsatλ – radiance at the sensor
     - d – Earth-Sun distance in astronomical units
     - θ – solar zenith angle (90 – solar elevation)
     - ESUNλ – mean (by band) solar exoatmospheric irradiance
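
A short illustration of the formula above; all names and numbers here are hypothetical, with d and the solar elevation normally taken from the scene metadata and ESUN from the table on the next slide.

```python
import numpy as np

def toa_reflectance(l_sat, esun, d_au, sun_elev_deg):
    """Top-of-atmosphere reflectance: rho = pi * Lsat * d^2 / (ESUN * cos(theta))."""
    theta = np.deg2rad(90.0 - sun_elev_deg)   # solar zenith angle
    return (np.pi * l_sat * d_au ** 2) / (esun * np.cos(theta))

# Example for an ETM+ band 3 pixel (placeholder radiance, distance and elevation):
rho = toa_reflectance(l_sat=45.0, esun=1551.0, d_au=1.0098, sun_elev_deg=52.0)
```
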
  27. 31. ETM+ band ESUN values (Landsat 7 Science Data Users Handbook)
     - Band 1: 1969
     - Band 2: 1840
     - Band 3: 1551
     - Band 4: 1044
     - Band 5: 225.7
     - Band 7: 82.07
  28. 32. Reflectance conversion + atmospheric correction (COST)
     - REF = (π * (Lsat - Lhaze)) / (TAUv * (Eo * cos(TZ) * TAUz + Edown))
     - Lhaze: upwelling spectral radiance (path radiance), derived from the image using the dark-object criterion (lowest value at the base of the slope of the histogram from either the blue or green band)
     - TAUv: atmospheric transmittance along the path from ground to sensor, assumed to be 1 because of the nadir look angle
     - Eo: solar spectral irradiance
     - TZ: solar zenith angle (ThetaZ)
     - TAUz: atmospheric transmittance along the path from the sun to the ground surface, approximated by cos(TZ) = 1 - TZ²/2! + TZ⁴/4! - TZ⁶/6!
     - Edown: downwelling spectral irradiance due to atmospheric scattering
     - Chavez, P.S. Jr (1996). Image-based atmospheric corrections – revisited and improved. Photogrammetric Engineering and Remote Sensing 62, 1025-1036.
  29. 33. Reflectance conversion + atmospheric correction (COST), calculation steps
     - Calculate radiance (Lsat)
     - d = 1 + 0.0167 * sin[2π * (JD – 93.5) / 365]
     - Lλ1% = (0.01 * ESUNλ * cos²θ) / (π * d²)
     - Lλhaze = Lλmin - Lλ1%
     - ρ = (π * d² * (Lsat - Lλhaze)) / (ESUNλ * cos²θ)
     - where JD is the Julian date (day of the year, 1-365), θ is the solar zenith angle (90 – solar elevation angle), and ESUNλ is the incoming solar irradiance by wavelength (see table)
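
Putting the steps together, a minimal COST sketch for one band, assuming (as on the slides) TAUv = 1 and TAUz = cos(TZ); all names are illustrative.

```python
import numpy as np

def cost_reflectance(l_sat, l_min, esun, julian_day, sun_elev_deg):
    """COST surface reflectance for one band (after Chavez 1996), minimal sketch.
    l_min is the dark-object radiance picked from the band histogram."""
    theta = np.deg2rad(90.0 - sun_elev_deg)                          # solar zenith angle
    d = 1 + 0.0167 * np.sin(2 * np.pi * (julian_day - 93.5) / 365)   # Earth-Sun distance, AU
    l_1pct = 0.01 * esun * np.cos(theta) ** 2 / (np.pi * d ** 2)     # radiance of a 1% reflector
    l_haze = l_min - l_1pct                                          # path radiance estimate
    return (np.pi * d ** 2 * (l_sat - l_haze)) / (esun * np.cos(theta) ** 2)
```
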
  30. 35. Questions?
  31. 36. Spectral Data Transformation for Vegetation Mapping
  32. 37. Spectral reflectance curves (reflectance, %, vs. wavelength, 0.4-2.6 µm) for dry bare soil (gray-brown), green vegetation, and clear water. From Lillesand and Kiefer 1994. Water has a low reflectance because it absorbs EM radiation in the VIS/RIR region.
  33. 38. Vegetation and Surface Reflectance
     - Key aspects of reflectance from leaf surfaces: chlorophyll and PAR, water content, leaf structures
     - Multi-layer model of leaf/canopy reflectance
     - Temporal aspects of reflectance from vegetated surfaces
  34. 39. Internal Leaf Structure (diagram labels: chloroplasts, intercellular air labyrinth, CO₂ in & O₂ out)
  35. 40. Plant Pigments – So, what absorbs EM energy in functioning leaves? (Reflectance λ = 100 - Absorption λ)
  36. 41. Absorption by plant pigments carrying out photosynthesis leads to low plant reflectances in the 0.4 to 0.6 µm range
  37. 42. Broadleaf Trees Changing Color (images: green leaves from a broadleaf tree beginning to change color as nutrients withdraw into the tree core; a deciduous broadleaf tree with its colors changed and some leaves fallen on the ground)
  38. 43. In situ Spectra of Fall Leaves (reflectance, %, vs. wavelength, 0.35-2.60 µm). Note that reflectance from 0.4 to 0.6 µm drops, but from 0.6 to 0.7 µm it increases.
  39. 44. Maple & Pine reflectance (spectra for maple and pine). Pine trees have higher cellulose content than maple trees; cellulose absorbs NIR radiation and lowers reflectance.
  40. 45. Trees are complex structures with multiple layers of leaves, twigs and branches. Light interacts with individual leaves at a cellular level; light passing through a single leaf then interacts with the next canopy component it encounters.
  41. 46. Reflectance from a vegetation canopy decreases as water content increases. Water absorbs EM energy in the VIS/RIR region of the EM spectrum, so higher water content results in lower reflectance.
  42. 47. http://research.umbc.edu/~tbenja1/leblon/module9.html See also Figure 10.1 in Jensen
  43. 48. Reflectance curve for a leaf generated from data collected by a spectroradiometer. Most digital VIS/IR spaceborne sensors have radiometers with red and near-infrared channels; ratios of these two channels are used to create indices of vegetation cover, i.e., vegetation indices.
  44. 49. Simple Vegetation Index (VI)
     - VI = R NIR / R red
     - where R NIR is the reflectance in the NIR band and R red is the reflectance in the red band
  45. 50. Normalized Difference Vegetation Index (NDVI)
     - Let R = reflectance in the red channel, IR = reflectance in the near-IR channel
     - NDVI = (IR - R) / (IR + R)
     - NDVI ~ amount of green biomass present on the surface
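
A small sketch of both indices applied to reflectance arrays; the test values below are made up for illustration.

```python
import numpy as np

def simple_ratio(nir, red):
    """Simple vegetation index VI = R_NIR / R_red."""
    return nir / red

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

nir = np.array([0.45, 0.30, 0.05])   # e.g. dense vegetation, sparse cover, water
red = np.array([0.05, 0.10, 0.04])
print(ndvi(nir, red))                # higher values ~ more green biomass
```
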
  46. 54. (image panels labeled "moderate" and "severe")
  47. 55. 22 band data set (shown in 7:4:3) Siberia
  48. 56. MSS component coefficients (Bands 1-4)
     - Brightness:  0.433, 0.632, 0.586, 0.264
     - Greenness:  -0.290, -0.562, 0.600, 0.491
     - Yellowness: -0.829, 0.522, -0.039, 0.194
     - "Non-such":  0.223, 0.012, -0.543, 0.810
  49. 57. TM component coefficients (Bands 1-5, 7)
     - Brightness:  0.3037, 0.2793, 0.4343, 0.5585, 0.5082, 0.1863
     - Greenness:  -0.2848, -0.2435, -0.5436, 0.7243, 0.0840, -0.1800
     - Wetness:     0.1509, 0.1793, 0.3299, 0.3406, -0.7112, -0.4572
  50. 58. Surface reflectance component coefficients (Bands 1-5, 7)
     - Brightness:  0.2043, 0.4158, 0.5524, 0.5741, 0.3124, 0.2303
     - Greenness:  -0.1603, -0.2819, -0.4934, 0.7940, -0.0002, -0.1446
     - Wetness:     0.0315, 0.2021, 0.3102, 0.1594, -0.6806, -0.6109
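
A sketch of applying the surface-reflectance coefficients from the table above to a six-band reflectance stack; the array layout and function name are assumptions, not part of the original material.

```python
import numpy as np

# Rows: brightness, greenness, wetness; columns: bands 1-5 and 7 (reflectance-based).
TC_COEFFS = np.array([
    [ 0.2043,  0.4158,  0.5524, 0.5741,  0.3124,  0.2303],
    [-0.1603, -0.2819, -0.4934, 0.7940, -0.0002, -0.1446],
    [ 0.0315,  0.2021,  0.3102, 0.1594, -0.6806, -0.6109],
])

def tasseled_cap(bands):
    """bands: reflectance stack of shape (6, rows, cols) for bands 1-5 and 7.
    Returns an array of shape (3, rows, cols): brightness, greenness, wetness."""
    return np.tensordot(TC_COEFFS, bands, axes=([1], [0]))
```
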
  51. 59. Image classification
  52. 60. Image Classification
     - The process of automatically dividing all pixels within a digital remote sensing image into:
       - land or surface-cover categories
       - information themes or quantification of specific surface characteristics
  53. 61. Pre-classification masking
     - Masking out selected classes: cloud and shadow masks, water masks
  54. 63. Landsat Band 4 (NIR) – image labels: water, shadow, cloud
  55. 64. Landsat Band 4 (NIR) – image labels: shadow, cloud
  56. 65. Masking out water
     - NIR band threshold (see the sketch below)
     - Manual selection of objects
     - Image labels: shadow, water, wetlands, dark fields
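
A minimal sketch of the NIR threshold step; the threshold value is scene-dependent and purely illustrative.

```python
import numpy as np

def water_mask(nir_reflectance, threshold=0.05):
    """Flag pixels with NIR reflectance below a threshold as water.
    Shadows, wetlands and dark fields can also fall below the threshold,
    so the resulting mask usually needs manual editing of such objects."""
    return nir_reflectance < threshold
```
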
  57. 66. Final water mask
  58. 67. Supervised versus Unsupervised Classification
     - Supervised classification – a procedure where the analyst guides or supervises the classification process by specifying numerical descriptors of the land cover types of interest
     - Unsupervised classification – the computer is allowed to aggregate groups of pixels into like clusters based upon different classification algorithms
  59. 68. Lillesand and Kiefer Figure 7-39
  60. 69. Training Areas and Supervised Classification
     - Specified by the analyst to represent the land cover categories of interest
     - Used to compile a numerical "interpretation key" that describes the spectral attributes of the areas of interest
     - Each pixel in the scene is compared to the training areas, and then assigned to one of the categories
  61. 70. Training area selection: regions of interest
  62. 71. Training area selection: regions of interest
  63. 72. Decision Tree Classifier
     - Decision tree classifiers use a simple set of rules to divide pixels into different land cover types (binary splits along the lines of greatest separability)
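
For illustration only, a toy decision tree trained on a few hand-made training pixels with scikit-learn; the band values and class labels are invented, not from the lecture.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Rows: training pixels (six reflective band values); labels come from training areas.
X = np.array([[0.04, 0.06, 0.05, 0.40, 0.20, 0.10],   # forest
              [0.08, 0.10, 0.12, 0.30, 0.28, 0.20],   # grass
              [0.05, 0.05, 0.04, 0.03, 0.02, 0.01]])  # water
y = np.array(["forest", "grass", "water"])

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)  # binary splits on band values
print(tree.predict([[0.05, 0.06, 0.05, 0.35, 0.22, 0.12]]))
```
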
  64. 76. Hybrid Classification Approach
     - Perform an unsupervised classification to create a number of land cover categories within the area of interest
     - Carry out field surveys to identify the land cover type represented by different unsupervised clusters
     - Use a supervised approach to combine unsupervised clusters into similar land cover categories
  65. 77. Sources of Uncertainty in Image Classification
     - Non-representative training areas
     - High variability in the spectral signatures for a land cover class
     - Mixed land cover within the pixel area
  66. 78. Post-classification processing
  67. 79. Additional class detection
     - Land use-based classes
       - Croplands
     - Manual and automated mix for selection
  68. 80. Removing speckle
     - Use a sieving procedure to remove areas smaller than 5 pixels (a minimal sketch follows)
     - Reduces data noise
     - Defines a minimum mapping unit
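
One way to sieve small patches with SciPy, as a sketch; here removed pixels are simply set to 0 rather than filled from their neighbours, and the 5-pixel threshold follows the slide.

```python
import numpy as np
from scipy import ndimage

def sieve(class_map, min_pixels=5):
    """Remove connected patches smaller than min_pixels from a classified map."""
    out = class_map.copy()
    for value in np.unique(class_map):
        labeled, n = ndimage.label(class_map == value)            # connected components
        sizes = ndimage.sum(np.ones_like(labeled), labeled, range(1, n + 1))
        small_ids = np.nonzero(sizes < min_pixels)[0] + 1          # labels of small patches
        out[np.isin(labeled, small_ids)] = 0
    return out
```
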
  69. 81. Accuracy Assessment
  70. 82. Accuracy assessment
     - It is necessary to provide information about the accuracy of a given mapping approach
     - Different applications require different levels of accuracy
  71. 83. Types of accuracy assessment / validation
     - Inventory assessment – e.g. how many acres of forest we get through our classification vs. how many acres of forest are reported by the forest service
     - Confusion matrix – provides information on both the accuracy of the amount mapped and the accuracy of the geographic distribution
  72. 84. Inventory assessment
     - Advantages: fairly straightforward method; comparison against ground data
     - Disadvantages: strongly depends on the accuracy of the provided information (underreporting, out-of-date data, etc.); does not describe spatial accuracy
  73. 85. Confusion Matrix
     - Advantages: provides statistics for both inventory and geographic information
     - Disadvantages: limited availability of comparable ground truth data (very difficult and expensive to collect, not available in many areas)
  74. 86. Validation sample selection
     - NELDA example: a minimum of 300 randomly selected points across the image, with proportional representation of classes (adjusted to a minimum of 30 per class)
     - Class / Pixels / Proportional sample / Sample:
       - TNDC: 5%, 15, 30
       - TNO: 7%, 21, 30
       - TMC: 26%, 78, 78
       - TMO: 17%, 51, 51
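
A tiny helper illustrating the allocation rule implied by the table; the function name and dictionary input are hypothetical.

```python
def allocate_samples(class_proportions, total=300, minimum=30):
    """Proportional allocation of validation points with a per-class minimum."""
    return {name: max(minimum, round(prop * total))
            for name, prop in class_proportions.items()}

print(allocate_samples({"TNDC": 0.05, "TNO": 0.07, "TMC": 0.26, "TMO": 0.17}))
# -> {'TNDC': 30, 'TNO': 30, 'TMC': 78, 'TMO': 51}
```
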
  75. 87. Confusion matrix NELDA project example for aggregated classes
  76. 88. Confusion matrix NELDA project example for all classes
  77. 89. Statistics
     - Overall accuracy = correct / total, in %
     - Full legend accuracy = 541/647 = 0.8362 = 83.62%
     - Kappa = (observed agreement - chance agreement) / (1 - chance agreement)
     - Observed agreement = (210 + 37) / 252 = 0.98
     - Chance agreement = (0.845 * 0.841) + (0.155 * 0.159) = 0.735
     - Kappa = (0.98 - 0.735) / (1 - 0.735) = 0.925
     - Example confusion matrix:
       - Forest: Forest 210 (83.3%), Grass 3 (1.2%), Total 213 (84.5%)
       - Grass: Forest 2 (0.8%), Grass 37 (14.7%), Total 39 (15.5%)
       - Total: Forest 212 (84.1%), Grass 40 (15.9%), 252
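
The same statistics computed from the 2x2 matrix above, as a sketch; the function name is hypothetical and the input is assumed to be ordered like the table.

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Kappa from a square confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                               # overall accuracy
    chance = np.sum(cm.sum(axis=1) * cm.sum(axis=0)) / total**2   # chance agreement
    return observed, (observed - chance) / (1 - chance)

print(accuracy_and_kappa([[210, 3], [2, 37]]))   # roughly (0.98, 0.92)
```
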
  78. 90. Change Detection
  79. 91. Map overlay problems
     - Combines the errors of the individual datasets – error propagation is extremely difficult to estimate
  80. 92. General approaches to change detection
     - Image differencing
     - Select proper dates to account for phenology ("anniversary date" images)
     - Well preprocessed input images: georegistration within 0.5 pixel, image normalization
  81. 93. Change detection of tree-dominated landscapes using the Disturbance Index
     - Classify "mature forest" in the image
     - Normalize Tasseled Cap Brightness, Greenness, and Wetness by the mature forest parameters:
       - Br = (B - Bµ) / Bσ
       - Gr = (G - Gµ) / Gσ
       - Wr = (W - Wµ) / Wσ
     - where Br, Gr, Wr are the rescaled brightness, greenness and wetness; Bµ, Gµ, Wµ are the mean values for "mature forest"; and Bσ, Gσ, Wσ are the standard deviations of the respective parameters for "mature forest"
     - S.P. Healey, W.B. Cohen, Y. Zhiqiang, O. Krankina. 2005. Comparison of Tasseled Cap-Based Landsat Data Structures for Use in Forest Disturbance Detection. Remote Sensing of Environment 97: 301-310.
  82. 94. Change detection of tree-dominated landscapes using the Disturbance Index
     - DI = Br - (Gr + Wr)
     - Change detection: DI(date1) - DI(date2), or multi-date classification (maximum likelihood)
     - S.P. Healey, W.B. Cohen, Y. Zhiqiang, O. Krankina. 2005. Comparison of Tasseled Cap-Based Landsat Data Structures for Use in Forest Disturbance Detection. Remote Sensing of Environment 97: 301-310.
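
A minimal sketch of the rescaling and index computation; the array names are illustrative, and forest_mask is assumed to mark the "mature forest" pixels identified in the first step.

```python
import numpy as np

def disturbance_index(brightness, greenness, wetness, forest_mask):
    """DI = Br - (Gr + Wr), with each Tasseled Cap component rescaled by the
    mean and standard deviation of the 'mature forest' pixels (Healey et al. 2005)."""
    def rescale(x):
        return (x - x[forest_mask].mean()) / x[forest_mask].std()
    br, gr, wr = (rescale(c) for c in (brightness, greenness, wetness))
    return br - (gr + wr)

# Change detection between two dates (hypothetical inputs):
# di_change = disturbance_index(b1, g1, w1, mask) - disturbance_index(b2, g2, w2, mask)
```
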
  83. 95. NELDA project example
  84. 96. Emerging new methods (trajectory analysis). Huang, C., Goward, S.N., Schleeweis, K., Thomas, N., Masek, J.G., & Zhu, Z. (2009). Dynamics of National Forests Assessed Using the Landsat Record: Case Studies in Eastern U.S. Remote Sensing of Environment 113(7): 1430-1442.
  85. 102. Accuracy assessment: random pixel selection
     - 300 randomly distributed points: 150 points for "unchanged" and 150 points for "changed" forests
     - Within the categories, validation pixels were distributed proportionally by the total number of pixels within the category
     - The analyst visually assigned the validation pixels to specific categories
     - Accuracy assessment was performed using the confusion matrix and Kappa values
  86. 103. Summer Cottage Development
  87. 104. Forest Harvest
  88. 105. NELDA example: Change detection random sample accuracy assessment
  89. 106. Accuracy assessment: analyst-driven validation sample selection
     - The analyst visually selected a higher number of validation pixels per class from the images
     - Accuracy assessment was performed using the confusion matrix and Kappa values
  90. 107. NELDA example: Change detection analyst driven accuracy assessment
  91. 108. Questions?
