Digital image processing and interpretation
Digital image processing and interpretation for remote sensing study.

Digital image processing and interpretation Presentation Transcript

  • 1. Introduction to Digital Image Interpretation
  • 2. What is a Digital Image? Most remote sensing data can be represented in two interchangeable forms: photograph-like imagery, and arrays of digital brightness values.
  • 3. Colour Composite Displays We typically create multispectral image displays or colour composite images by showing different image bands in varying display combinations.
  • 4. True Colour Composites
  • 5. Standard False Colour Composites
  • 6. Colour Composite Images
  • 7. Colour Composite Images
  • 8. General Appearance of Surface Features on Colour Composite Images

        Feature             | True Colour              | False Colour
        --------------------|--------------------------|--------------
        trees and bushes    | olive green              | red
        crops               | medium to light green    | pink to red
        wetland vegetation  | dark green to black      | dark red
        water               | shades of blue and green | blue to black
        urban areas         | white to light blue      | blue to grey
        bare soil           | white to light grey      | blue to grey

        Source: U.S. Department of Defense, 1995. Multispectral Users Guide.
  • 9. Digital Image Processing Steps
        1. Preprocessing
        2. Enhancement
        3. Transformation
        4. Classification
  • 10. Image Preprocessing Preprocessing operations, often called "rectification and restoration", aim to correct distorted or degraded image data to create a more faithful representation of the original scene. They include spatial filtering, radiometric restoration (e.g. destriping), and geometric correction.
  • 11. Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and are generally grouped as radiometric corrections and geometric corrections. Radiometric corrections include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so they accurately represent the reflected or emitted radiation measured by the sensor. Geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations, and converting the data to real-world coordinates (e.g. latitude and longitude) on the Earth's surface.
  • 12. Various methods of atmospheric correction can be applied ranging from detailed modeling of the atmospheric conditions during data acquisition, to simple calculations based solely on the image data. An example of the latter method is to examine the observed brightness values (digital numbers), in an area of shadow or for a very dark object (such as a large clear lake - A) and determine the minimum value (B). The correction is applied by subtracting the minimum observed value, determined for each specific band, from all pixel values in each respective band.
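A minimal numpy sketch of this dark-object subtraction. The (bands, rows, cols) array layout and the use of the global per-band minimum are assumptions for illustration; in practice the dark value is often read from a specific shadow or clear-water area.

```python
import numpy as np

def dark_object_subtraction(image):
    """Simple haze correction: subtract each band's minimum DN.

    `image` is assumed to be a (bands, rows, cols) array of digital
    numbers. The per-band minimum stands in for the brightness that a
    'dark object' (e.g. a large clear lake) should have recorded as zero.
    """
    corrected = np.empty_like(image)
    for b in range(image.shape[0]):
        # Subtracting the band minimum can never produce negative DNs.
        corrected[b] = image[b] - image[b].min()
    return corrected
```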
  • 13. Noise in an image may be due to irregularities or errors that occur in the sensor response and/or data recording and transmission. Common forms of noise include systematic striping or banding and dropped lines. Both of these effects should be corrected before further enhancement or classification is performed.
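The slides do not specify a repair method, but one common simple fix for dropped lines is to replace each bad scan line with the average of the lines above and below it. A sketch under that assumption; the `bad_rows` input is hypothetical and would come from a separate line-detection step:

```python
import numpy as np

def repair_dropped_lines(band, bad_rows):
    """Replace dropped scan lines with the mean of their neighbours.

    `band` is a 2-D array of DNs; `bad_rows` lists the row indices of
    the dropped lines (assumed to be interior rows, not adjacent to
    one another).
    """
    repaired = band.astype(float)  # float copy avoids integer overflow
    for r in bad_rows:
        repaired[r] = (repaired[r - 1] + repaired[r + 1]) / 2.0
    return repaired
```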
  • 14. Image Registration (Geo-referencing) Registration is the process of superimposing an image on a map or on other already-registered data. Image registration or "geo-referencing" can be divided into two types: image-to-image registration and image-to-map registration. Selected image data of the Khorat area was rectified with reference to the 1:50 000 scale topographic maps (image-to-map registration); further imagery was then geo-referenced to this already-registered satellite image using image-to-image registration.
  • 15. The geometric registration process involves identifying the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (GCPs), in the distorted image (A: A1 to A4), and matching them to their true positions in ground coordinates (e.g. latitude, longitude). The true ground coordinates are typically measured from a map (B: B1 to B4), either in paper or digital format. This is image-to-map registration.
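As an illustration of what "matching" GCPs means computationally, here is a sketch that fits a first-order (affine) polynomial from image coordinates to map coordinates by least squares. Real packages often offer higher-order polynomials; the function name and sample GCPs below are illustrative, not from the slides.

```python
import numpy as np

def fit_affine(image_pts, map_pts):
    """Least-squares affine transform from image (col, row) pairs to
    map (x, y) pairs, given at least three GCPs.

    Returns a 2x3 matrix M such that [x, y] = M @ [col, row, 1].
    """
    image_pts = np.asarray(image_pts, dtype=float)
    map_pts = np.asarray(map_pts, dtype=float)
    # Design matrix: one row [col, row, 1] per GCP.
    A = np.hstack([image_pts, np.ones((len(image_pts), 1))])
    coeffs, *_ = np.linalg.lstsq(A, map_pts, rcond=None)
    return coeffs.T

# Four hypothetical GCPs (A1-A4 in the image, B1-B4 on the map).
M = fit_affine([(10, 20), (200, 15), (190, 180), (12, 175)],
               [(500100.0, 1650200.0), (503900.0, 1650300.0),
                (503700.0, 1647000.0), (500150.0, 1647100.0)])
```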
  • 16. Geometric registration may also be performed by registering one (or more) images to another image, instead of to geographic coordinates. This is called image-to-image registration and is often done prior to performing various image transformation procedures. In order to actually geometrically correct the original distorted image, a procedure called resampling is used to determine the digital values to place in the new pixel locations of the corrected output image. There are three common resampling methods: nearest neighbour, bilinear interpolation, and cubic convolution. Nearest neighbour resampling uses the digital value from the pixel in the original image which is nearest to the new pixel location in the corrected image. This is the simplest method and does not alter the original values, but it may result in some pixel values being duplicated while others are lost, and it tends to produce a disjointed or blocky image appearance.
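A sketch of nearest-neighbour resampling, assuming the registration step has produced an `inverse_map` function (hypothetical here) that converts output-grid coordinates back to fractional coordinates in the distorted source image:

```python
import numpy as np

def resample_nearest(src, inverse_map, out_shape):
    """For every pixel of the corrected output grid, copy the DN of the
    nearest source pixel. Original values are preserved, but some may be
    duplicated and others lost, giving the characteristic blocky look.
    """
    out_r, out_c = np.indices(out_shape)
    src_r, src_c = inverse_map(out_r, out_c)  # fractional source coords
    r = np.clip(np.rint(src_r).astype(int), 0, src.shape[0] - 1)
    c = np.clip(np.rint(src_c).astype(int), 0, src.shape[1] - 1)
    return src[r, c]
```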
  • 17. Bilinear interpolation resampling takes a weighted average of the four pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel values and creates entirely new digital values in the output image. This may be undesirable if further processing and analysis, such as classification based on spectral response, is to be done; in that case, resampling may best be done after the classification process.
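A matching sketch of the bilinear case, given arrays of fractional source coordinates (one pair per output pixel). Note how the four surrounding DNs are blended, which is exactly why the output contains values that never occurred in the input:

```python
import numpy as np

def resample_bilinear(src, src_r, src_c):
    """Weighted average of the four source pixels around each fractional
    (row, col) coordinate; the weights come from the distances to the
    surrounding pixel grid."""
    src = src.astype(float)
    r0 = np.clip(np.floor(src_r).astype(int), 0, src.shape[0] - 2)
    c0 = np.clip(np.floor(src_c).astype(int), 0, src.shape[1] - 2)
    dr, dc = src_r - r0, src_c - c0
    top = (1 - dc) * src[r0, c0] + dc * src[r0, c0 + 1]
    bottom = (1 - dc) * src[r0 + 1, c0] + dc * src[r0 + 1, c0 + 1]
    return (1 - dr) * top + dr * bottom
```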
  • 18. Cubic convolution resampling goes even further, calculating a distance-weighted average of a block of sixteen pixels from the original image surrounding the new output pixel location. As with bilinear interpolation, this method produces completely new pixel values. However, these two methods both yield images with a much sharper appearance and avoid the blocky look of the nearest neighbour method.
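In practice a library routine usually handles all three schemes. As one stand-in, scipy's spline interpolator can approximate them: order 0 behaves like nearest neighbour, order 1 is bilinear, and order 3 is a cubic spline (a close relative of, though not identical to, cubic convolution). The coordinates below are arbitrary example values:

```python
import numpy as np
from scipy.ndimage import map_coordinates

src = np.arange(25, dtype=float).reshape(5, 5)
rows = np.array([1.3, 0.4])  # fractional source row coordinates
cols = np.array([2.7, 3.6])  # fractional source column coordinates

nearest = map_coordinates(src, [rows, cols], order=0)
bilinear = map_coordinates(src, [rows, cols], order=1)
cubic = map_coordinates(src, [rows, cols], order=3)
```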
  • 19. Spatial Filtering
        • Spatial information: things close together are more alike than things further apart (spatial autocorrelation). Many features of interest have spatial structure, such as edges, shapes, and patterns (roads, rivers, coastlines, irrigation patterns, etc.).
        • Spatial filters divide into two broad categories:
          – feature detection, e.g. edges
          – image enhancement, e.g. smoothing "speckly" data such as RADAR
  • 20. Low/High Frequency [Figure: two DN profiles along a transect. Gradual change in DN = low frequency; rapid change in DN = high frequency.]
  • 21. How do we exploit this?
        • Spatial filters highlight or suppress specific features based on spatial frequency. This is related to texture: rapid changes of DN value = "rough", slow changes (or none) = "smooth".

          43  49  48  49  51
          43  50  65  54  51
          12  14   9   9  10   (darker, horizontal linear feature)
          43  49  48  49  51
          210 225 199 188 189  (bright, horizontal linear feature)

          Most of the grid is smooth(ish); the transitions into the dark and bright lines are rough(ish).
  • 22. Convolution (Spatial) Filtering
        • Construct a "kernel" window (3x3, 5x5, 7x7, etc.) to enhance or remove these spatial features.
        • Compute a weighted average of the pixels in the moving window, and assign that average value to the centre pixel.
        • The choice of weights determines how the filter affects the image.
  • 23. Convolution (Spatial) Filtering
        • The filter moves over all pixels in the input, calculating the value of the central pixel each time, e.g.:

          Input image:             Filter:
          43  49  48  49  51       1/9 1/9 1/9
          43  50  65  54  51       1/9 1/9 1/9
          12  14   9   9  10       1/9 1/9 1/9
          43  49  48  49  51
          210 225 199 188 189      Output image: ?? ?? ??
  • 24. Convolution (Spatial) Filtering
        • For the first pixel in the output image:
          Output DN = 1/9*43 + 1/9*49 + 1/9*48 + 1/9*43 + 1/9*50 + 1/9*65 + 1/9*12 + 1/9*14 + 1/9*9 = 37
        • Then move the filter one place to the right (blue square) and do the same again:
          Output DN = 1/9*(49+48+49+50+65+54+14+9+9) = 38.6
        • And again:
          Output DN = 1/9*(48+49+51+65+54+51+9+9+10) = 38.4
        • This is a mean filter; it acts to "smooth" or blur the image. The first row of the output image is therefore 37, 38.6, 38.4.
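A self-contained sketch that reproduces the hand-worked numbers above, using the 5x5 grid from the slides and a 3x3 mean kernel (only "valid" positions are computed, so the one-pixel border is skipped):

```python
import numpy as np

image = np.array([
    [ 43,  49,  48,  49,  51],
    [ 43,  50,  65,  54,  51],
    [ 12,  14,   9,   9,  10],
    [ 43,  49,  48,  49,  51],
    [210, 225, 199, 188, 189],
], dtype=float)
kernel = np.full((3, 3), 1.0 / 9.0)  # mean (low-pass) filter

output = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        # Weighted average of the 3x3 window centred on (i+1, j+1).
        output[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

print(output[0])  # [37.0, 38.55..., 38.44...] -> 37, 38.6, 38.4 rounded
```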
  • 25. Convolution (Spatial) Filtering
        • The mean filter is known as a low-pass filter, i.e. it allows low-frequency information to pass through but smooths out high-frequency, rapidly changing DN values. It is used to remove high-frequency "speckle" from data.
        • The opposite is a high-pass filter, used to enhance high-frequency information such as lines and point features while suppressing low-frequency information.
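The slides show the high-pass kernel only as a figure. A commonly used 3x3 choice, assumed here, has a large positive centre weight surrounded by negatives so that the weights sum to zero, which sends flat areas to roughly zero while keeping abrupt DN changes; the random test band is purely illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

# Weights sum to zero: uniform (low-frequency) regions map to ~0,
# while lines, points, and edges produce strong responses.
high_pass = np.array([[-1.0, -1.0, -1.0],
                      [-1.0,  8.0, -1.0],
                      [-1.0, -1.0, -1.0]])

band = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
edges = convolve(band, high_pass, mode="nearest")
```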
  • 26. Convolution (Spatial) Filtering
        • We can also have directional filters, used to enhance edge information in a given direction; these are a special case of the high-pass filter. [Figures: a vertical edge enhancement filter and a horizontal edge enhancement filter.]
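The exact weights in the slide figures are not recoverable; Prewitt-style kernels are one standard choice for directional edge enhancement, so this sketch assumes them:

```python
import numpy as np
from scipy.ndimage import convolve

# Responds to vertical edges, i.e. left-to-right changes in DN.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0]])
# Transposing rotates the response by 90 degrees: horizontal edges.
horizontal_edge = vertical_edge.T

band = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
v_edges = convolve(band, vertical_edge, mode="nearest")
h_edges = convolve(band, horizontal_edge, mode="nearest")
```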
  • 27. Practical
        • Try out various filters of various sizes, see what effect each has, and construct your own filters.
          – High-pass filters are used for edge detection, often in machine vision applications (e.g. robotics and/or industrial applications).
          – Directional high-pass filters are used to detect edges of a specific orientation.
          – Low-pass filters are used to suppress high-frequency information, e.g. to remove "speckle".
  • 28. Example: Low-Pass Filter ERS-1 RADAR image, Norfolk, 18/4/97. Original (left) and low-pass "smoothed" (right).
  • 29. Example: High-Pass Edge Detection SPOT image, Norfolk, 18/4/97. Original (left) and directional high-pass filtered (edge detection, right).