APPLICATION OF REMOTE SENSING AND
GEOGRAPHICAL INFORMATION SYSTEM IN
CIVIL ENGINEERING
Date:
INSTRUCTOR
DR. MOHSIN SIDDIQUE
ASSIST. PROFESSOR
DEPARTMENT OF CIVIL ENGINEERING
Principles of image interpretation
Image: a reproduction or imitation of the form of a person or thing.
How to interpret an image?
Elements of image interpretation
Interpretation and analysis of remote sensing imagery involve the identification and/or measurement of various targets in an image in order to extract useful information about them.
Detection and identification
This is primarily a stimulus-and-response activity: the stimuli are the elements of image interpretation.
The interpreter conveys his or her response to these stimuli with descriptions and labels expressed in qualitative terms, e.g. likely, possible, or probable.
Interpreters very rarely use definite statements when describing features identified on aerial photography.
Measurement
In contrast to detection and identification, the making of measurements is primarily quantitative.
Measurements made by photo interpreters will get you close: given high-quality, high-resolution, large-scale aerial photographs and appropriate interpretation tools and equipment, you can expect to be within meters.
With photogrammetry, employing the same type of photography and the appropriate equipment, you can expect to be within centimeters.
Solving a problem
Interpreters are often required to identify objects from a study of associated objects that they can identify, or to identify object complexes from an analysis of their component objects.
Analysts may also be asked to examine an image that depicts the effects of some process and suggest a possible or probable cause.
A solution may not always consist of a positive identification. The answer may be expressed as a number of likely scenarios, with statements of the probability of correctness attached by the interpreter.
Primary ordering of image elements
Tone/Color
Tone can be defined as each distinguishable variation from white to black.
Color may be defined as each distinguishable variation on an image produced by a multitude of combinations of hue, value, and chroma.
If there is not sufficient contrast between an object and its background to permit, at the least, detection, there can be no identification.
While a human interpreter may only be able to distinguish between ten and twenty shades of gray, interpreters can distinguish at least 100 times more variations of color on color photography than shades of gray on black-and-white photography.
Resolution
Resolution can be defined as the ability of the entire photographic system, including lens, exposure, processing, and other factors, to render a sharply defined image.
An object or feature must be resolved in order to be detected and/or identified. Resolution is one of the most difficult concepts to address in image analysis because, for camera lenses, it can be discussed in terms of so many line pairs per millimeter; resolution targets help determine this when testing camera lenses for metric quality.
Photo interpreters often talk about resolution in terms of ground resolved distance, which is the smallest normal-contrast object that can be identified and measured.
Geometrical - size
Size of objects in an image is a function of scale.
Size can be important in discriminating objects and features (cars vs. trucks or buses, single-family vs. multi-family residences, brush vs. trees, etc.).
When using size as a diagnostic characteristic, both the relative and absolute sizes of objects can be important.
The size of runways, for example, gives an indication of the types of aircraft that can be accommodated.
Geometrical - shape
Shape refers to the general form, structure, or outline of individual objects.
The shape of objects/features can provide diagnostic clues that aid identification.
Man-made features tend to have straight edges; natural features tend not to.
Roads can have right-angle (90°) turns; railroads cannot.
Goryokaku, an old star-shaped fort in Hokkaido, has a diagnostic shape. Other examples include freeway interchanges and the Rainbow Bridge in Tokyo.
Spatial - texture and pattern
Texture is the frequency of change and arrangement of tones; it is a micro image characteristic.
The visual impression of smoothness or roughness of an area can often be a valuable clue in image interpretation. Still water bodies are typically fine textured, grass medium, and brush rough.
Pattern is the spatial arrangement of objects; it is a macro image characteristic. Patterns can be either man-made or natural.
Typically, an orderly repetition of similar tones and textures will produce a distinctive and ultimately recognizable pattern.
Height and shadow
Height can add significant information in many types of interpretation tasks, particularly those that deal with the analysis of man-made features and vegetation.
How deep an excavation is can tell something about the amount of material that was removed (used, for example, in analyzing mining operations).
Geologists like low-sun-angle photography because of the features that shadow patterns can help identify (e.g. fault lines and fracture patterns).
On infrared aerial photography, shadows are typically very black and can render targets in shadow uninterpretable.
Association
Association takes into account the relationship between other recognizable objects or features in proximity to the target of interest.
The identification of features that one would expect to be associated with other features may provide information to facilitate identification.
Sequence for digital image analysis
There is no single enhancement procedure that is best; the best one is that which best displays the features of interest to the analyst.
Data format - raster and vector
Digital image analysis is usually conducted using raster data rather than vector data, which is popular in GIS.
Pre-processing
Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information. They are referred to as image restoration and rectification, and are intended to correct for sensor- and platform-specific radiometric and geometric distortions of the data.
Radiometric correction: to adjust digital values to physical units. This includes correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so they accurately represent the reflected or emitted radiation measured by the sensor.
Geometric correction: to bring an image into registration with a map. This includes correcting for geometric distortions due to sensor-Earth geometry variations, and conversion of the data to real-world coordinates (e.g. latitude and longitude) on the Earth's surface.
Theoretical radiometric correction
At the top of the atmosphere the sensor measures

LTOA = ρ·E·T/π + Lp

where
LTOA = total spectral radiance measured at the top of atmosphere
ρ = reflectance of object
E = irradiance on object (W/m2)
T = transmission of atmosphere
Lp = path radiance
The first term in the equation contains the valid information about ground reflectance; the atmosphere interacts by adding a scattered component as path radiance.
Conventional radiometric correction
Detectors and data systems are designed to produce a linear response to incident spectral radiance.
A linear fit to the calibration data results in the following relationship between radiance and digital number (DN) values for any given channel:

DN = A·L + B

where
DN = digital number value recorded
A = slope of response function (gain)
L = spectral radiance measured
B = intercept of response function (offset)
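As an illustration, here is a minimal Python sketch of inverting the two relationships above, first DN to radiance, then radiance to reflectance. It assumes NumPy, and all calibration constants (A, B, E, T, Lp) are hypothetical placeholder values, not values for any real sensor.

```python
import numpy as np

# Hypothetical calibration constants for one band (illustrative only).
A = 0.9     # gain: slope of the detector response function
B = 2.0     # offset: intercept of the detector response function
E = 1850.0  # irradiance on the object (W/m^2), assumed known
T = 0.85    # atmospheric transmission, assumed known
Lp = 8.0    # path radiance assumed for this band

dn = np.array([[35, 120], [200, 255]], dtype=float)  # digital numbers

# Invert DN = A*L + B to recover at-sensor spectral radiance.
L_toa = (dn - B) / A

# Invert L_TOA = rho*E*T/pi + Lp to estimate surface reflectance.
rho = np.pi * (L_toa - Lp) / (E * T)
print(rho)
```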
Geometric correction
The geometric correction/registration process involves identifying the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (GCPs), in the distorted image, and matching them to their true positions in ground coordinates (e.g. latitude, longitude).
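A minimal sketch of how such a registration might be computed, assuming a first-order (affine) polynomial model fitted to GCP pairs by least squares; the GCP coordinates below are hypothetical.

```python
import numpy as np

# Hypothetical GCPs: image coordinates (row, col) and their map coordinates (x, y).
img = np.array([[10, 12], [400, 30], [50, 500], [420, 480]], dtype=float)
mapc = np.array([[5000.0, 8000.0], [5010.0, 7200.0],
                 [6000.0, 7950.0], [5950.0, 7180.0]])

# First-order model: x = a0 + a1*col + a2*row, y = b0 + b1*col + b2*row.
G = np.column_stack([np.ones(len(img)), img[:, 1], img[:, 0]])
coef_x, *_ = np.linalg.lstsq(G, mapc[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(G, mapc[:, 1], rcond=None)

# Map any pixel (row, col) to ground coordinates with the fitted model.
row, col = 100, 250
x = coef_x @ [1, col, row]
y = coef_y @ [1, col, row]
print(x, y)
```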
Geometric correction
In order to geometrically correct the original distorted image, a procedure called resampling is used to determine the digital values to place in the new pixel locations of the corrected output image.
The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image.
Interpolation methods
There are three common methods for resampling (the first two are sketched below):
Nearest neighbor (NN): simply chooses the actual pixel nearest to the point located in the image.
Bilinear (BL): uses three linear interpolations over the four surrounding pixels.
Cubic convolution (CC): fits cubic polynomials through the surrounding sixteen pixels.
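A minimal sketch of nearest-neighbor and bilinear resampling, assuming a single-band NumPy array and in-bounds fractional coordinates.

```python
import numpy as np

def nearest_neighbor(img, r, c):
    # Pick the value of the pixel closest to the (possibly fractional) location.
    return img[int(round(r)), int(round(c))]

def bilinear(img, r, c):
    # Three linear interpolations over the four surrounding pixels:
    # two along the rows, then one between those results.
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    top = (1 - dc) * img[r0, c0] + dc * img[r0, c0 + 1]
    bottom = (1 - dc) * img[r0 + 1, c0] + dc * img[r0 + 1, c0 + 1]
    return (1 - dr) * top + dr * bottom

img = np.arange(16, dtype=float).reshape(4, 4)
print(nearest_neighbor(img, 1.4, 2.6))  # -> img[1, 3]
print(bilinear(img, 1.4, 2.6))
```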
Comparison of interpolation methods
Feature extraction
In the context of digital image analysis, feature extraction refers not to geographical features but to the statistical characteristics of an image.
Feature extraction isolates the components within multispectral bands that are most useful in portraying the essential elements of an image.
It reduces the number of bands to be analyzed, thereby reducing computational demands and time.
Principal component analysis (PCA)
Principal component analysis (PCA) is a technique used to reduce multispectral image data sets to lower dimensions for analysis.
PCA identifies the optimum linear combinations of the original channels that can account for the variation of pixel values in an image.
RGB false colour composite
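A minimal sketch of PCA on a multispectral stack, assuming a NumPy array of shape (bands, rows, cols), with random data standing in for a real image.

```python
import numpy as np

# Hypothetical multispectral image: 4 bands of 100 x 100 pixels.
rng = np.random.default_rng(0)
bands = rng.random((4, 100, 100))

# Arrange pixels as observations (rows) and bands as variables (columns).
X = bands.reshape(4, -1).T
X = X - X.mean(axis=0)

# Eigen-decomposition of the band covariance matrix gives the principal components.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort components by explained variance
pcs = (X @ eigvecs[:, order]).T.reshape(bands.shape)

print(eigvals[order] / eigvals.sum())      # fraction of variance per component
```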
Burnt scar mapping by PCA
Image enhancement
Image enhancement is the process of improving the visual appearance of digital images. It is usually done by contrast enhancement or histogram manipulation.
There is no single enhancement procedure that is best; the best one is that which best displays the features of interest to the analyst.
Concept of an image histogram: a histogram is a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed along the x-axis of the graph; the frequency of occurrence of each of these values in the image is shown on the y-axis.
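For concreteness, computing such a histogram takes one line in NumPy, assuming an 8-bit image.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(256, 256))  # hypothetical 8-bit image

# x-axis: brightness values 0-255; y-axis: frequency of occurrence.
hist = np.bincount(img.ravel(), minlength=256)
print(hist[:10])
```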
Image enhancement
Linear stretch: stretch using the minimum and maximum values.
Histogram equalization: stretch using a nonlinear function derived from the distribution of intensities.
Density slicing (pseudo-coloring): introduce color to a single-band image by dividing the range of values in the band into intervals and assigning a color to each interval.
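Minimal sketches of the first two enhancements, assuming an 8-bit single-band NumPy image; the lookup-table approach to equalization is one common implementation choice, not the only one.

```python
import numpy as np

def linear_stretch(img):
    # Stretch using the minimum and maximum values to fill the 0-255 range.
    lo, hi = img.min(), img.max()
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

def histogram_equalize(img):
    # Nonlinear stretch derived from the cumulative distribution of intensities.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    lut = (cdf * 255).astype(np.uint8)   # lookup table mapping old DN -> new DN
    return lut[img]

rng = np.random.default_rng(2)
img = rng.integers(40, 120, size=(64, 64), dtype=np.uint8)  # low-contrast image
print(linear_stretch(img).max(), histogram_equalize(img).max())
```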
Thresholding
A threshold can be applied to an image to isolate a feature represented by a specific range of DNs.
For example, to calculate the area of lakes, DNs not representing water are a distraction. If the highest DN for water is 35, it is used as the threshold: all DNs > 35 are assigned 255 (saturated to white), and DNs <= 35 are assigned zero (black). The lakes are much more prominent in the image after thresholding.
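A sketch of this lake example, assuming an 8-bit NumPy image and the threshold of 35 quoted above.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

THRESHOLD = 35  # highest DN representing water in this example
binary = np.where(img > THRESHOLD, 255, 0).astype(np.uint8)
# Water (DN <= 35) is now black (0); everything else saturates to white (255).
```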
Filtering Techniques
Filtering is a process that selectively enhances or suppresses particular spatial frequencies within an image.
There are two approaches to digitally filtering data:
Convolution filtering in the spatial domain
Fourier analysis in the frequency domain
Filtering Techniques
Low-pass filters remove high-frequency features from an image by diminishing deviations from the central (average) value; the MEAN and GAUSSIAN low-pass filters are usually used to smooth an image.
High-pass filters do the opposite, emphasizing high-frequency detail such as edges.
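A minimal sketch of a MEAN (low-pass) filter, assuming a single-band NumPy image and edge-replicate padding (one of several possible border treatments).

```python
import numpy as np

def mean_filter(img, size=3):
    # Low-pass (MEAN) filter: replace each pixel with the average of its
    # size x size neighborhood, smoothing high-frequency detail.
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dr in range(size):
        for dc in range(size):
            out += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (size * size)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64))
smoothed = mean_filter(img)
```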
Digital image classification
Digital image classification is the process of assigning pixels to classes.
Measured reflection values in an image depend on the local characteristics of the earth's surface; in other words, there is a relationship between land cover and measured reflection values.
Therefore, by comparing pixels to one another, it is possible to assemble groups of similar pixels into classes. Pixels within the same class are spectrally similar to each other, although in practice they exhibit some variability within classes.
Image classification approaches
Classification can be done using a single band (single-spectral classification) or using many bands (multi-spectral classification).
Single-spectral classification:
- Thresholding and level slicing
Multi-spectral classification:
- Minimum distance classifiers
- Parallelepiped classifiers
- Maximum likelihood classifiers
Classification may be supervised or unsupervised.
Training data
Training fields are areas of known identity delineated on the digital image, usually by specifying a rectangular area within the digital image.
Thresholding and level slicing
Pixel observation from selected training sites
Minimum distance classifier
The minimum distance classifier uses the average values of the spectral data that form the training data as a means of assigning pixels to informational categories.
It is mathematically simple and computationally efficient, but it is not sensitive to different degrees of variance.
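A minimal sketch, assuming class means already computed from training data and pixels given as an (n_pixels, n_bands) NumPy array; the band values here are hypothetical.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    # pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands),
    # where each mean is the average of the training data for that class.
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1)   # index of the nearest class mean

# Hypothetical two-band training means for water, vegetation, and soil.
means = np.array([[20.0, 10.0], [40.0, 90.0], [80.0, 60.0]])
pixels = np.array([[22.0, 12.0], [75.0, 55.0]])
print(minimum_distance_classify(pixels, means))  # -> [0 2]
```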
Parallelepiped classifier
The parallelepiped classifier introduces category variance by considering the range of maximum and minimum values in the training set.
If a pixel lies outside all the regions, it is classified as an unknown pixel.
Overlap between regions is caused by the presence of covariance, and it results in poor fitting to the category training data.
Maximum likelihood classifier
The maximum likelihood classifier evaluates both the category variance and covariance when classifying an unknown pixel.
It is based on the assumption that the training data are Gaussian (normally distributed).
The resulting probability density functions are used to classify an unidentified pixel.
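A minimal sketch, assuming per-class mean vectors and covariance matrices estimated from training data and equal prior probabilities; the statistics shown are hypothetical.

```python
import numpy as np

def max_likelihood_classify(pixels, means, covs):
    # Gaussian log-likelihood per class, using each class's mean vector and
    # covariance matrix estimated from training data.
    scores = []
    for mu, cov in zip(means, covs):
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = pixels - mu
        maha = np.einsum("ij,jk,ik->i", d, inv, d)  # Mahalanobis distances
        scores.append(-0.5 * (logdet + maha))
    return np.argmax(scores, axis=0)  # class with the highest likelihood

# Hypothetical two-band training statistics for two classes.
means = [np.array([20.0, 10.0]), np.array([40.0, 90.0])]
covs = [np.eye(2) * 4.0, np.eye(2) * 25.0]
print(max_likelihood_classify(np.array([[25.0, 20.0]]), means, covs))
```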
Post classification
Classified data often manifest a salt-and-pepper appearance due to spectral variability.
Spatial filtering is a local operation in which pixel values in an original image are modified on the basis of the gray levels of neighboring pixels; a moving-window image processing operation of this kind is generally called convolution.
A moving majority filter, which assigns each pixel the majority class within the search window, is effective for post-classification smoothing, as sketched below.
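A minimal sketch of a moving majority filter, assuming integer class labels in a NumPy array and edge-replicate padding.

```python
import numpy as np

def majority_filter(classified, size=3):
    # Post-classification smoothing: assign each pixel the majority class
    # within the moving window to suppress salt-and-pepper noise.
    pad = size // 2
    padded = np.pad(classified, pad, mode="edge")
    out = np.empty_like(classified)
    for r in range(classified.shape[0]):
        for c in range(classified.shape[1]):
            window = padded[r:r + size, c:c + size].ravel()
            out[r, c] = np.bincount(window).argmax()
    return out

classes = np.array([[0, 0, 1], [0, 2, 0], [0, 0, 0]])
print(majority_filter(classes))  # the isolated 1 and 2 are replaced by 0
```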
Moving window concept
Post classification smoothing
Accuracy assessment of classification
Geographic products
Comments…. Questions…. Suggestions….
I am greatly thankful to all the information sources (regarding remote sensing and GIS) on the internet that I accessed and utilized in preparing this lecture.
Thank you!
Feel free to contact.
