Introduction
• Remote Sensing is an interesting and exploratory
science, as it provides images of areas in a fast
and cost-efficient manner and attempts to show
"what is happening right now" in a study area.
• Remote: observation is done at a distance,
without physical contact with the object of
interest
• Sensing: detection of energy, such as light or
another form of electromagnetic energy
Definition
• "The science and art of obtaining information
about an object, area, or phenomenon through the
analysis of data acquired by a device that is not in
contact with the object, area, or phenomenon
under investigation" (Lillesand & Kiefer, 1994).
• The term Remote Sensing means the sensing of
the Earth's surface from space by making use of
the properties of electromagnetic waves emitted or
reflected by the sensed objects, for the purpose of
improving natural resources management, land
use and the protection of the environment. (UN,
1999)
History
• Galileo introduced the telescope to astronomy in 1609
• 1827 - first photograph
• 1858 - first aerial photograph from a hot air balloon
• 1861-1865 - balloon photography used in the American Civil War
• 1888 - 'rocket' cameras
• 1903 - pigeon-mounted camera patented
• 1906 - photograph from a kite
• 1908 - first photos from an airplane
• 1909 - Dresden International Photographic Exhibition
• 1914-1918 - World War I
• 1914-1945 - plane-mounted cameras in WWI and WWII
1858 - First aerial (balloon) photo; picture
of Paris
Actual Pigeon Pictures
San Francisco from a kite, 1906
Cont…..
• 1957 - Sputnik-1
• 1960 - 1st meteorological satellite ‘TIROS-1’
launched
• 1967 - NASA ‘Earth Resource Technology
Satellite’ programme
• 1972 - ERTS (Landsat) 1 launched...
• 1970-1980 : Rapid advances in Digital Image
Processing
• 1986 : SPOT French Earth Observation Satellite
• 1980s : Development of Hyperspectral sensors
• 1990s : Global Remote Sensing system
Why Remote Sensing?
• Systematic data collection
• Information about three dimensions of real
objects
• Repeatability
• Global coverage
• The only solution sometimes for the otherwise
inaccessible areas
• Multipurpose information
Remote Sensing Process
• Energy Source or Illumination (A) - the first
requirement for remote sensing is to have an energy
source which illuminates or provides electromagnetic
energy to the target of interest.
• Radiation and the Atmosphere (B) - as the energy
travels from its source to the target, it will come in
contact with and interact with the atmosphere it
passes through. This interaction may take place a
second time as the energy travels from the target to
the sensor.
• Interaction with the Target (C) - once the energy
makes its way to the target through the atmosphere, it
interacts with the target depending on the properties
of both the target and the radiation.
• Recording of Energy by the Sensor (D) - after the
energy has been scattered by, or emitted from the
target, we require a sensor (remote - not in contact
with the target) to collect and record the
electromagnetic radiation.
• Transmission, Reception, and Processing (E) - the
energy recorded by the sensor has to be transmitted,
often in electronic form, to a receiving and processing
station where the data are processed into an image
(hardcopy and/or digital).
• Interpretation and Analysis (F) - the processed
image is interpreted, visually and/or digitally, to
extract information about the target which was
illuminated.
• Application (G) - the final element of the remote
sensing process is achieved when we apply the
information we have been able to extract from the
imagery about the target in order to better
understand it, reveal some new information, or
assist in solving a particular problem.
Applications
• Land Use and Land Cover
• Geologic and soil
• Agriculture
• Forestry
• Water/snow Resources
• Urban and Regional Planning
• Wildlife Ecology
• Archaeological
• Environment Assessment
• Natural Disaster
• Ocean and weather Monitoring
Remote Sensing Sensors
Passive sensors -
Passive systems record energy reflected or emitted
by a target illuminated by the Sun.
e.g. normal photography, most optical satellite sensors
Active sensors -
Active systems illuminate the target with their own
energy and measure the reflection.
e.g. radar sensors
Passive Remote Sensing
Does not employ its own source of energy.
 Measures either reflected radiation from the Sun (can be
operated only during daytime) or the emitted radiation from
the surface (day/night operation).
 Suffers from variable illumination conditions of the Sun and
the influence of atmospheric conditions.
Active Remote sensing
Has its own source of energy
 Active sensors emit a controlled beam of energy to the
surface and measure the amount of energy reflected back to
the sensor.
 Controlled illumination signal
 Day/night operation
Passive sensors : collect electromagnetic radiation
in the visible and infra-red part of the spectrum:
• Aerial Photographs
• Low resolution: Landsat, SPOT, IRS
• High Resolution: Quickbird, IKONOS
Active sensors : generate their own radiation:
• Air-borne RADAR
• Space borne RADAR: ERS 1 / 2, Radarsat
• LiDAR (laser scanner)
Fundamental Physics of Remote Sensing
Remote Sensing relies on the measurement of
ElectroMagnetic (EM) energy.
The most important source of EM energy is the
Sun. Some sensors detect energy emitted by the
Earth itself or provide their own energy (Radar).
All matter reflects, transmits, absorbs and emits
EMR in a unique way, which is called its spectral
characteristics.
Two characteristics of electromagnetic radiation
are particularly important for understanding
remote sensing: the wavelength and the frequency.
The wavelength is the length of one
wave cycle, which can be measured as
the distance between successive wave
crests.
Wavelength is usually represented by
the Greek letter lambda (λ).
Wavelength is measured in metres (m)
or some factor of metres such as
nanometres (nm), micrometres (µm) or
centimetres (cm).
Frequency refers to the number of cycles of waves passing a fixed
point per unit of time. Frequency is normally measured in hertz (Hz),
equivalent to one cycle per second and various multiples of hertz.
Wavelength and frequency are related by the following
formula:
• Frequency, ν = c/λ
where, λ = wavelength
c = speed of light
= 3.00 × 10⁸ m/s
Therefore, the two are inversely related to each other. The
shorter the wavelength, the higher the frequency; the longer
the wavelength, the lower the frequency.
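As a quick illustration of this relation, a minimal Python sketch converting wavelength to frequency (the wavelengths used are example values only):

```python
# Minimal sketch: frequency from wavelength using nu = c / lambda
C = 3.00e8  # speed of light in m/s

def frequency_hz(wavelength_m: float) -> float:
    """Return frequency in hertz for a given wavelength in metres."""
    return C / wavelength_m

# Example values (red light ~0.65 um, C-band microwave ~5.6 cm)
print(frequency_hz(0.65e-6))   # ~4.6e14 Hz
print(frequency_hz(0.056))     # ~5.4e9 Hz (5.4 GHz)
```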
Energy
Electro-magnetic spectrum (EMS)
• The electromagnetic spectrum (EMS) is the array
of all EMR, which moves at the velocity of light
and is characterised by its wavelength and
frequency.
• The electromagnetic spectrum ranges from the
shorter wavelengths (including gamma and x-
rays) to the longer wavelengths (including
microwaves and broadcast radio waves)
• There are several regions of the
electromagnetic spectrum which are useful for
remote sensing
[Figure: the electromagnetic spectrum arranged by wavelength, from cosmic rays, gamma rays and ultraviolet through the visible region (blue, green, red, roughly 300-700 nm - the 'optical range') to infrared, microwaves, TV/radio waves and electric power.]
Ultraviolet Range
• This radiation is just beyond the violet portion of the
visible wavelengths
• Some Earth surface materials, primarily rocks and
minerals, emit visible light when illuminated by UV
radiation.
Visible Range
• The light which our eyes - our "remote sensors" can
detect is part of the visible spectrum.
• The visible wavelengths cover a range from
approximately 0.4 to 0.7 µm. The longest visible
wavelength is red and the shortest is violet. This is the
only portion of the spectrum we can associate with the
concept of colours.
Violet: 0.4 - 0.446 µm, Blue: 0.446 - 0.500 µm
Green: 0.500 - 0.578 µm , Yellow: 0.578 - 0.592 µm
Orange: 0.592 - 0.620 µm , Red: 0.620 - 0.7 µm
►Visible Blue Band (.45-.52 microns)
►Visible Green Band (.52-.60 microns)
►Visible Red Band (.63-.69 microns)
►Panchromatic Bands (.50-.90 microns)
Panchromatic Bands (.50-.90 micrometers)
►Wide range of sensitivity
►Visible to Near IR
►Higher spatial resolution
►Can be combined with other multi-spectral bands.
Visible Blue Band (.45-.52 micrometers)
 Greatest water penetration
 Greatest atmospheric scattering
 Greatest absorption
 Used for :
water depth ,water characteristics
detection of subsurface features ,soil and
vegetation discrimination
Visible Green Band (.52-.60 micrometers)
►Vegetation discrimination
►Urban Infrastructure
►Less affected by atmospheric scattering
►Sediment and Chlorophyll Concentration
Visible Red Band (.63-.69 micrometers)
Chlorophyll absorption band of healthy green
vegetation.
Vegetation type
Plant condition
Least affected by atmospheric scattering
Less water penetration but good near-surface
information, i.e. water quality, sediment, and chlorophyll.
Infrared Range
Infrared region approximately 0.7 µm to 100 µm - more
than 100 times as wide as the visible portion!
Divided into two categories based on their radiation properties -
the reflected IR and the emitted or thermal IR.
The reflected IR region is used for remote sensing purposes in
ways very similar to radiation in the visible portion.
The reflected IR covers wavelengths from approximately
0.7 µm to 3.0 µm.
The thermal IR region is quite different from the visible and
reflected IR portions, as this energy is essentially the
radiation that is emitted from the Earth's surface in the form
of heat.
The thermal IR covers wavelengths from approximately 3.0
µm to 100 µm.
Microwave Range
• The portion of the spectrum of more recent interest to remote
sensing is the microwave region from about 1 mm to 1 m.
• This covers the longest wavelengths used for remote sensing.
The shorter wavelengths have properties similar to the thermal
infrared region while the longer wavelengths approach the
wavelengths used for radio broadcasts
• Because of their long wavelengths, compared to the visible
and infrared, microwaves have special properties that are
important for remote sensing.
• Longer wavelength microwave radiation can penetrate through
cloud, fog, haze, etc., as the longer wavelengths are not
susceptible to the atmospheric scattering which affects shorter
optical wavelengths.
Interaction with target
• There are three forms of interaction that can
take place when energy strikes a target.
• Absorption (A): radiation is absorbed into the
target
• Transmission (T): radiation passes through a
target
• Reflection (R): radiation "bounces" off the
target and is redirected
• The proportions of each interaction will
depend on the wavelength of the energy and
the material and condition of the feature.
Interaction in atmosphere and on land
[Figure: radiation paths between the Sun, the atmosphere, the Earth and the remote sensing instrument: transmitted, scattered and reflected radiation, atmospheric absorption, and emission processes (thermal emission from the Earth and atmospheric emission). Scattering in the atmosphere is selective, giving bluish optical images; scattering from clouds is non-selective, so clouds appear white.]
Atmospheric interactions
Atmospheric Scattering
• This occurs when particles or gaseous
molecules present in the atmosphere cause the
EM waves to be redirected from their original
path.
• Rayleigh scattering: size of atmospheric particles
< the wavelength of the incoming radiation
• Mie scattering: size of atmospheric particles ≈
the wavelength of the incoming radiation
• Non-selective scattering: size of atmospheric
particles > the wavelength of the incoming
radiation
Atmospheric windows
• Atmospheric windows are those portions of the
electromagnetic spectrum that can be transmitted
through the atmosphere without significant distortion or
absorption. Light in these wavelength regions
can penetrate the atmosphere well.
• Those areas of the spectrum which are not
severely influenced by atmospheric absorption,
and thus are useful to remote sensors, are called
atmospheric windows.
Spectral Reflectance
• Spectral reflectance (ρ) is the ratio of reflected
energy to incident energy as a function of
wavelength.
• The reflectance characteristics of the earth's
surface features are expressed by spectral
reflectance, which is given by:
ρ = (R / I) × 100
where ρ = spectral reflectance at a particular
wavelength,
R = energy of that wavelength reflected from the object,
I = energy of that wavelength incident upon the object.
Spectral Reflectance Curve for vegetation
Factors affecting the spectral signature of vegetation
• Pigmentation absorption (visible light for photosynthesis)
• Physiological structure (NIR reflectance) :
• Leaf moisture content : (Major :1.4, 1.9 and 2.7μm , Minor :
0.96 & 1.1μm).
• Soil background : Dark soil is better distinguishable from
vegetation in NIR
• Senescent vegetation : Due to aging , crop ripening, rise in
reflectance in blue and red wavelengths.
• Angular elevation of sun and sensor and Canopy geometry :
Reflectance of a rough tree canopy is dependent on solar angle.
• Phenological canopy changes (seasonal changes) :
e.g. ; For grasslands, Reflectance in red is maximized in autumn
and minimized in spring, NIR maximized in summer, minimized
in winter.
Spectral Reflectance Curve for Soil
Factors affecting the spectral signature of soil
Moisture content
Texture
Structure
Spectral Reflectance Curve for Water
a) Ocean water
b) Turbid water
c) Water with chlorophyll
Factors affecting the spectral signature of water
• Majority of radiant flux incident is either
absorbed or transmitted
Visible: little absorbed, <5% reflected and rest
transmitted
NIR: Mostly absorbed
• Depth of water
• Suspended material within water
• Surface roughness
Remote Sensing Observation Platform
Sensor-the device that actually gathers the
remotely sensed data
Platform-the device to which the sensor is
attached
 The vehicles or carriers for remote sensing sensors are called
platforms. Typical platforms are satellites and aircraft, but they
can also be balloons and kites. Based on altitude above the
earth's surface, platforms may be classified as:
1) Ground borne
2) Air borne
3) Space borne
Airborne and spaceborne platforms have been in
use in remote sensing of earth resources.
Ground-based remote sensing systems for earth
resources studies are mainly used for collecting
ground truth or for laboratory simulation studies.
Path
An orbit is the course of motion taken by the
satellite in space, and the ground trace of the
orbit is called a 'Path'.
Row
The lines joining the corresponding scene centres
of different paths, which are parallel to the
equator, are called 'Rows'.
Orbits
The path followed by the
satellite is called orbit.
Inclined
Equatorial
Polar
Satellite orbital characteristics
Altitude
• It is the distance (in km) from the satellite to the mean
surface level of the earth.
Inclination angle
• The angle (in degrees) between the orbit and the
equator.
Period
• It is the time (in minutes) required to complete one
full orbit. A polar satellite orbiting at an altitude of
800 km has a period of about 100 minutes (see the
sketch after this list).
Repeat Cycle
• It is the time (in days) between two successive
identical orbits.
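A back-of-the-envelope check of that period figure using Kepler's third law (a minimal sketch; the Earth radius and gravitational parameter below are standard approximate constants, not values from the slides):

```python
import math

MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m

def orbital_period_minutes(altitude_km: float) -> float:
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km * 1e3   # semi-major axis in metres
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

print(round(orbital_period_minutes(800), 1))   # ~100.7 minutes
```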
Perigee & Apogee
• Perigee: It is the point in the
orbit where an earth satellite
is closest to the earth.
• Apogee: It is the point in the
orbit where an earth satellite
is farthest from the earth.
Swath
As the satellite revolves around
the Earth, the sensor sees a
certain portion of the Earth's
surface. This area imaged on the
ground is known as the swath.
Ascending pass and Descending
pass
• The near-polar satellites travel northward
on one side of the Earth (ascending pass)
and towards the South Pole on the second
half of the orbit (descending pass).
• The ascending pass is on the
shadowed side while the descending
pass is on the sunlit side.
• Optical sensors image the surface
on a descending pass, while active
sensors, and sensors of emitted thermal and
microwave radiation, can also image
the surface on an ascending pass.
[Figure: orbit geometry showing the inclination angle measured at the equator, the ground track, and the ascending and descending nodes of the orbit.]
Types of Satellite Orbits
Geostationary Orbits
These satellites, also known as geosynchronous
satellites, orbit the Earth at an altitude of around
36,000 km above the equator and make one
revolution in 24 hours, synchronous with the Earth's
rotation. These platforms cover the same place
and give continuous, near-hemispheric coverage over
the same area day and night. Their coverage is limited
to about 70° N to 70° S latitude. They are mainly used
for communication and meteorological applications.
GOES (U.S.A.), METEOR (U.S.S.R.), GMS (Japan)
Uses of Geostationary Orbits
Weather satellites : (GOES, METEOSAT, INSAT)
Communication : Telephone and television relay
satellites
Limited spatial coverage
Polar orbital satellite( Sun synchronous
satellite)
• These are earth satellites in which the orbital
plane is near polar and the altitude (mostly 800-900
km) is such that the satellite passes over all
places on earth having the same latitude twice
in each orbit, at the same local time. Through
these satellites the entire globe is covered on a
regular basis, giving repetitive coverage on a
periodic basis. All the remote sensing resources
satellites may be grouped in this category.
LANDSAT, SPOT, IRS
Resolution
Resolution is defined as the ability of the system to
render the information at the smallest discretely
separable quantity in terms of distance (spatial),
wavelength band of EMR (spectral), time (temporal) and
radiation quantity (radiometric).
Spatial Resolution
Spatial resolution is the projection of a detector
element or a slit onto the ground. In other words, a
scanner's spatial resolution is the ground segment
sensed at any instant. It is also called the ground
resolution element.
Ground resolution = H × IFOV
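A rough numerical illustration of this relation (a minimal sketch; the altitude and IFOV below are example values only, with the IFOV expressed in radians):

```python
def ground_resolution_m(altitude_m: float, ifov_rad: float) -> float:
    """Ground resolution element (metres) as flying height times IFOV (radians)."""
    return altitude_m * ifov_rad

# Example: a sensor at 900 km altitude with an IFOV of 0.086 milliradian
print(round(ground_resolution_m(900e3, 0.086e-3), 1))   # ~77.4 m
```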
The spatial resolution at which data are acquired
has two effects: the ability to identify various
features and to quantify their extent.
Spectral Resolution
Spectral resolution describes the ability of the
sensor to define fine wavelength intervals, i.e.
sampling the spatially segmented image in
different spectral intervals, thereby allowing the
spectral irradiance of the image to be determined.
[Figure: Landsat TM imagery, with band 1 = blue, band 2 = green, band 3 = red, band 4 = near-IR, band 5 = mid-IR, band 6 = thermal IR, displayed as a single-band image (band 3) and as multiband composites of bands 1-2-3, 4-5-3 and 4-3-2.]
Radiometric Resolution
• This is a measure of the ability of the sensor to differentiate the
smallest change in spectral reflectance between various
targets. The digitisation is referred to as quantisation and is
expressed as n binary bits. Thus 7-bit digitisation implies
2⁷ or 128 discrete levels (0-127).
[Figure: the same image at low radiometric resolution and at high radiometric resolution.]
Temporal Resolution
Temporal resolution is also called the revisit capability of the
satellite. It is the capability of the satellite to image the
exact same area with the same viewing angle at different
periods of time.
Types of Ground Tracking
Many electronic remote sensors acquire data
using a scanning system. A scanning system used
to collect data over a variety of different
wavelength ranges is called a multispectral
scanner (MSS) and is the most commonly used
scanning system. There are two main modes of
scanning:
Whisk Broom/ Across track
Push Broom/ Along Track
Whisk Broom/ Across track
• Using a rotating or oscillating mirror, such
systems scan the terrain along scan lines that are at
right angles to the flight line. The scanner
repeatedly measures the energy from one side of
the aircraft to the other, typically over an arc of some 90˚ to 120˚.
Push Broom/ Along Track
• Push broom / along-track scanners record
multispectral image data along a swath beneath
an aircraft. Also similar is the use of the forward
motion of the aircraft to build up a two-
dimensional image by recording successive scan
lines that are oriented at right angles to the flight
direction.
PHOTOGRAMMETRY
The science of quantitative analysis of measurements from
Photograph
Photos – light
Gramma - to draw
Metron - to measure
Photogrammetry is defined as the art, science and
technology of obtaining reliable information about physical
objects and the environment through processes of recording,
measuring and interpreting photographic images and
patterns of recorded radiant electromagnetic energy and
other phenomena. As implied by its name, the science
originally consisted of analyzing photographs.
WHY PHOTOGRAMMETRY ?
• Very precise
• Time effective
• Cost effective
• Based on well-established and tested algorithms.
• Less manual effort
• More geographic fidelity
• Corrects all sorts of distortions.
• Provide a reasonable geometric modeling alternative when little
is known about the geometric nature of the image data.
• Provide an integrated solution for multiple images or
photographs simultaneously
• Achieve a reasonable accuracy without a great number of
GCPs
• Create a three-dimensional stereo model or to extract the
elevation information
STEREOSCOPIC COVERAGE
Types of photogrammetry
• Terrestrial
• Aerial
• Satellite
BRANCHES OF PHOTOGRAMMETRY
Analog photogrammetry
In analog photogrammetry, optical or mechanical
instruments were used to reconstruct three-dimensional
geometry from two overlapping photographs. The main
products during this phase were topographic maps.
Analytical photogrammetry
The computer replaces some
expensive optical and
mechanical components.
The resulting devices were
analog/digital hybrids.
 Analytical aerotriangulation,
analytical plotters, and orthophoto
projectors were the main
developments during this phase.
 Outputs of analytical
photogrammetry can be
topographic maps, but can also be
digital products, such as digital
maps and DEMs
Digital photogrammetry
•Digital photogrammetry is
applied to digital images that
are stored and processed on a
computer.
• Digital photogrammetry is
sometimes called softcopy
photogrammetry.
• The output products are in
digital form, such as digital
maps, DEMs, and digital
orthophotos saved on
computer storage media.
Aerial Photography Factors
 Type of Aircraft
 Camera/Focal length
 Type of film
 Shape of project area/terrain
 Location of existing control
 Ground conditions
 Atmospheric conditions
 Restricted areas
Standard Aerial Film Types
Black & white
► Kodak Double-X Aerographic 2405: Black and white
medium-to-high speed, standard film for mapping and
charting
►Agfa Aviphot Pan 150: Fine grain film for high altitudes
Color
►Kodak Aerocolor Negative 2445: Color negative, high
speed film for mapping and reconnaissance.
►Infrared Kodak Aerochrome Infrared 2443: False color
reversal film, high dimensional stability for vegetation
surveys, camouflage detection and earth resources
Overlap between photos
60% and 80% Forward Lap and 30% Side Lap
[Figure: two successive photos with 60% overlap forming a stereo pair; the shared area is the overlap region.]
Types of photographs
Vertical -
View straight down, depression angle 85° to 90°.
Low-oblique -
Side view, horizon is not visible, depression angle
typically 20° to 85°.
High-oblique -
Side view, horizon is visible, depression angle
typically less than 20°.
Geometry of an aerial photograph
Aerial Triangulation (AT). The process of establishing a
mathematical relationship between images, the camera or sensor
model, and the ground
Air base. The distance between two image exposure stations.
Average flying height. The distance between the camera position at
the time of exposure and the average ground elevation. Average
flying height can be determined by multiplying the focal length by
the image scale.
Base-height ratio (b/h). The ratio between the distance between the
points where the two overlapping images were captured (the air base)
and the average flying height of the camera.
Block of photographs. Formed by the combined exposures of a
flight. For example, a traditional frame camera block might consist
of a number of parallel strips with a sidelap of 20- 30%, and an
overlap of 60%.
Bundle. The unit of photogrammetric triangulation after each point
measured in an image is connected with the perspective center by a
straight light ray. There is one bundle of light rays for each image.
Bundle block adjustment. A mathematical technique (triangulation)
that determines the position and orientation of each image as they
existed at the time of image capture, determines the ground
coordinates measured on overlap areas of multiple images, and
minimizes the error associated with the imagery, image
measurements, and GCPs. This is essentially a simultaneous
triangulation performed on all observations.
Calibration certificate/report. In aerial photography, the
manufacturer of the camera specifies the interior orientation in the
form of a certificate or report. Information includes focal length,
principal point offset, radial lens distortion data, and fiducial mark
coordinates.
Check point. An additional ground point used to independently
verify the degree of accuracy of a triangulation.
Control point. A point with known coordinates in a coordinate
system, expressed in the units (e.g., meters, feet, pixels, film units)
of the specified coordinate system.
Control point extension. The process of converting tie points to
control points. This technique requires the manual measurement of
ground points on photos of overlapping areas. The ground
coordinates associated with the GCPs are then determined using
photogrammetric techniques.
Exposure station. During image acquisition, each point in the flight
path at which the camera exposes the film.
Eye-base to height ratio. The eye-base is the distance between a
person’s eyes. The height is the distance between the eyes and the
image datum. When two images of a stereopair are adjusted in the
X and Y direction, the eye-base to height ratio is also changed.
Change the X and Y positions to compensate for parallax in the
images.
Fiducial. Four or eight reference markers fixed on the frame of an
aerial metric camera and visible in each exposure. Fiducials are
used to compute the transformation from pixel coordinates to image
coordinates.
Fiducial center. The center of an aerial photo; the intersection point
of lines constructed to connect opposite fiducials.
Focal length. The distance between the optical center of the lens and
where the optical axis intersects the image plane. Focal length of
each camera is determined in a laboratory environment.
Focal plane. The plane of the film or scanner used in obtaining an
aerial photo.
Ground Control Point (GCP). An easily identifiable point for which
the ground coordinates of the map coordinate system are known.
Nadir. The area on the ground directly beneath a scanner’s detectors.
Nadir point. The intersection of the focal axis and the image plane.
Parallax. The apparent angular displacement of an object as seen in
an aerial photograph with respect to a point of reference or
coordinate system.
Perspective center. The optical center of a camera lens. 1. A point in
the image coordinate system defined by the x and y coordinates of
the principal point and the focal length of the sensor. 2. After
triangulation, a point in the ground coordinate system that defines
the sensor’s position relative to the ground.
Principal point. The point in the image plane onto which the
perspective center is projected.
Photographic Scale
Before a photograph can be used as a map
supplement or substitute, it is necessary to know
its scale. On a map, the scale is printed as a
representative fraction that expresses the ratio of
map distance to ground distance, For example:
RF=MD/ GD
On a photograph, the scale is also expressed as a
ratio, but is the ratio of the photo distance (PD) to
ground distance. For example:
RF=PD/GD
• The approximate scale or average scale (RF) of a vertical aerial
photograph is determined by either of two methods; the comparison
method or the focal length-flight altitude method.
The scale of a vertical aerial photograph is determined by
comparing the measured distance between two points on the
photograph with the measured ground distance between the same
two points.
The ground distance is determined by actual measurement on the
ground or by the use of the scale on a map of the same area. The
points selected on the photograph must be identifiable on the
ground or map of the same area and should be spaced in such a
manner that a line connecting them will pass through or nearly
through the center of the photograph
scale = f ÷ H
scale = photo distance ÷ ground distance
Example
• A camera equipped with a 152mm focal lens is
used to take a vertical photograph from a
flying height of 2780 m above mean sea level.
if the terrain is flat and located at an elevation
of 500m.what is the scale of the photograph?
Focal length = 0.152m (152/1000)
H=2780 m (flying height)
h= 500 m (ground elevation)
Scale = f / (H − h) = 0.152 m / (2780 m − 500 m)
= 0.152 / 2280 ≈ 0.0000667
≈ 1:15,000
Class Work
1) A camera equipped with a 206 mm focal lens is
used to take a vertical photograph from a flying
height of 5200 m above mean sea level. If the
terrain is flat and located at an elevation of
560 m, what is the scale of the photograph?
2) A vertical photograph was taken at a flying height of
5000 m above sea level using a camera with a 152
mm focal length.
A) Determine the photo scale at points A and B,
which lie at elevations of 1200 m and 1960 m.
B) What ground distance corresponds to a
30 mm photo distance measured at each of these
elevations?
Scale (A) = f / (H − h) = 0.152 / (5000 − 1200) = 0.152 / 3800 = 1:25,000
Scale (B) = f / (H − h) = 0.152 / (5000 − 1960) = 0.152 / 3040 = 1:20,000
Ground distance
Distance (A) = 0.030 m × 25,000 = 750 m
Distance (B) = 0.030 m × 20,000 = 600 m
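A minimal Python sketch of these scale and ground-distance calculations (the numbers are simply those of the worked problems above):

```python
def photo_scale(focal_length_m: float, flying_height_m: float, terrain_elev_m: float) -> float:
    """Scale denominator of a vertical photo: 1/scale = (H - h) / f."""
    return (flying_height_m - terrain_elev_m) / focal_length_m

def ground_distance(photo_distance_m: float, scale_denominator: float) -> float:
    """Ground distance corresponding to a measured photo distance."""
    return photo_distance_m * scale_denominator

s_a = photo_scale(0.152, 5000, 1200)   # 25000 -> scale 1:25,000
s_b = photo_scale(0.152, 5000, 1960)   # 20000 -> scale 1:20,000
print(ground_distance(0.030, s_a))     # 750.0 m
print(ground_distance(0.030, s_b))     # 600.0 m
```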
Image Displacement
On a planimetric map all features/details are shown in
their correct horizontal position at a certain scale. This
is not so in the case of aerial photographs, due to image
displacement or distortion. A disturbance of this
principle of geometry is called displacement or distortion.
Sources Of Distortions And Displacement
The main sources of displacement and distortion are the
optical or photographic deficiencies (film and paper
shrinkage, lens aberrations, filter aberrations, failure of
the film-flattening mechanism in the camera focal plane,
shutter malfunction), image motion, atmospheric
refraction of light rays, curvature of the earth, tilt, and
topography or relief.
Relief Displacement
• Shift or displacement in the photographic position of an
image caused by the relief of the object.
The amount of relief displacement is directly correlated
with the height or depth of the object and the distance of
the object from the nadir.
• This displacement is inversely correlated with the flying
altitude of the aircraft above the datum and the focal
length used.
• Higher altitude photography will produce less relief
displacement than lower altitude photography.
• Even though relief displacement constitutes a source of
errors in measuring horizontal distances on vertical aerial
photographs, it is not necessarily a nuisance; because of
relief displacement, we can determine the height of
objects (or the difference in elevation between objects) and
see in three dimensions by viewing stereoscopic pairs of
vertical aerial photographs.
Displacement due to Tilt
• A photograph is considered tilted when the angle
between the perpendicular projection through the
center of the lens and the plumb line is greater
than 3˚.
• An aircraft or airborne not perfectly horizontal
causes a rotation of the camera about the x-axis
or about the y-axis.
• Rotation about the x-axis causes lateral tilt or y-
tilt, which is due to the aircraft being wing-up or
wing-down, displacing the nadir point along the
Y-axis.
• Rotation about the y-axis causes longitudinal tilt or
x-tilt or list, which is due to the nose of the
aircraft being up or down, causing the nadir point
to be displaced along the X-axis.
• Along the axis of tilt there is no displacement
relative to an equivalent untilted photograph, as
this is the line where a tilted photograph and an
equivalent vertical photograph would match and
intersect one another.
Image Parallax
• The term parallax is given to the relative
movement of objects due to movement of the
observer
• The most obvious evidence of this phenomenon
is the apparent shift of nearby objects relative to
distant objects when travelling at speed in a
vehicle
• Parallax is often used to detect when an object is
correctly focused in a theodolite or level
telescope by moving the eye from side to side -
any movement between the object and the
crosshairs indicates that the focus is incorrect
• For a constant flying height (or constant camera
to datum distance) and tilt-free photographs the
parallax will be parallel to the base
• Parallax will be smaller for more distant objects,
or in the case of aerial photography, smaller for
points at lower heights
• The perception of depth in stereo photographs is
dependent on this parallax, as neighboring points
at different heights will exhibit different parallaxes
• As the parallax shifts apply to all points, a
continuous 3D impression of the object is given by
the stereo photographs
• To obtain stereoscopic coverage of an entire area,
the minimum overlap for any photography is 50%
A mathematical definition of parallax is :
p = x - x'
Where
x = left conjugate image coordinate
x' = right conjugate image coordinate
Microwave Remote Sensing
• Analyzing the information collected by the
sensors that operate in the microwave portion of
the electromagnetic spectrum is called as
Microwave Remote Sensing.
Wavelength : 1mm to 1m
• The capability to penetrate through precipitation
or into a surface layer is increased with longer
wavelengths.
• Radars operating at wavelengths greater than 2 cm
are not significantly affected by cloud cover;
however, rain does become a factor at wavelengths
shorter than 4 cm.
Microwave Remote Sensing
Radar Bands Commonly Used For Sensing
Band    Wavelength (cm)    Frequency (GHz, i.e. 10⁹ cycles/sec)
Ka      0.75 - 1.1         26.5 - 40
K       1.1 - 1.67         18 - 26.5
Ku      1.67 - 2.4         12.5 - 18
X       2.4 - 3.8          8 - 12.5
C       3.8 - 7.5          4 - 8
S       7.5 - 15           2 - 4
L       15 - 30            1 - 2
P       30 - 100           0.3 - 1
• Ka, K, and Ku bands: very short wavelengths
used in early airborne radar systems but uncommon
today.
• X-band: used extensively on airborne systems for
military reconnaissance and terrain mapping.
• C-band: common on many airborne research
systems (CCRS Convair-580 and NASA AirSAR)
and spaceborne systems (including ERS-1 and 2 and
RADARSAT).
• S-band: used on board the Russian ALMAZ
satellite.
• L-band: used onboard American SEASAT and
Japanese JERS-1 & ALOS PALSAR satellites and
NASA airborne system.
• P-band: longest radar wavelengths, used on NASA
experimental airborne research system.
Advantages
• Time independent.
• Weather independent.
( Some areas of Earth are
persistently cloud covered)
• Penetrate through clouds and
to a high degree through rain.
• Penetrates vegetation, dry
soil, dry snow
• Sensitive to moisture in soil,
vegetation and snow.
• Ability to collect data which
are far away from flight path.
Disadvantages
• Large antennas are required
• Antennas are heavy and have
large power requirements
• Interpretation of microwave
images is difficult
• Geometry is problematic in
undulating terrain
Passive microwave sensor
• Detects the naturally emitted
microwave energy within its
field of view. This emitted
energy is related to the
temperature and moisture
properties of the emitting
object or surface.
• Passive microwave sensors
are typically radiometers.
• Applications: Snow cover
mapping, Flood mapping,
Soil moisture mapping.
Active microwave sensor
• It provides its own source of microwave
radiation to illuminate the target.
• Divided into two categories: imaging
(RADAR) and non-imaging
(altimeters and scatterometers).
• Non-imaging microwave sensors are
profiling devices which take
measurements in one linear
dimension, as opposed to the two-
dimensional representation of
imaging sensors.
• It transmits a microwave (radio) signal
towards the target and detects the
backscattered portion of the signal.
Errors in Remote sensing Images
• Remote sensing data (in raw form) as received
from imaging sensors mounted on satellites
contain flaws or deficiencies.
• The correction of deficiencies and removal of
flaws present in the data is termed as pre-
processing.
• Image pre-processing can be classified into three
functional categories:
• Radiometric corrections
• Atmospheric corrections
• Geometric correction
Radiometric errors
It’s an error that influences the radiance or
radiometric values of a scene element(pixel).
Change the value (Digital Number, DN) stored in an
image.
System errors - minimized by cosmetic corrections
Atmospheric errors-minimized by atmospheric
corrections
Geometric errors
It is an error related to the spatial location of pixels;
it changes the position of a DN value.
Minimized by geometric correction
Radiometric errors & its corrections
• Radiometric errors causes
• Sensor failures or system noise affects values
• Signal travelling through atmosphere ;atmosphere
affects the signal
• Sun illumination influences radiometric values
• Seasonal changes affect radiometric values
• Terrain influences radiance
Internal errors:-
• Introduced by remote sensing system.
• Generally systematic and may be identified.
• Corrected based on prelaunch or in flight
measurements.
External errors:-
• Introduced by the phenomena that vary in nature
through space and time.
• Sources are atmospheric, terrain elevation etc.
Radiometric Error sources
• Remote sensing
• System induced errors by mechanical, electrical or
communication failures
• Atmosphere induced errors by interaction of EM with
atmospheric constituents
Random Bad Pixels (Shot Noise)
• Sometimes an individual detector
does not record spectral data for
an individual pixel. When this
occurs randomly, it is called a
bad pixel.
• When there are numerous random
bad pixels found within the scene,
it is called shot noise because it
appears that the image was shot
by a shotgun.
• Normally these bad pixels contain
values of 0 or 255 (in 8-bit data) in
one or more of the bands.
Random Bad Pixels (correction)
• Locate each bad pixel in the band k dataset.
• A simple thresholding algorithm makes a pass
through the dataset and flags any pixel (BVi,j,k) having
a brightness value of zero (assuming values of 0
represent shot noise and not a real land cover such as
water).
• Once identified, evaluate the eight pixels surrounding
the flagged pixel and replace the bad value with their
average, as in the sketch below.
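A minimal NumPy sketch of this shot-noise repair (an assumed 2-D single-band array; edge pixels are skipped for brevity):

```python
import numpy as np

def repair_shot_noise(band: np.ndarray, bad_value: int = 0) -> np.ndarray:
    """Replace isolated bad pixels with the mean of their eight neighbours."""
    out = band.astype(float).copy()
    rows, cols = band.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if band[i, j] == bad_value:
                window = band[i - 1:i + 2, j - 1:j + 2].astype(float)
                neighbours = window.sum() - band[i, j]   # exclude the centre pixel
                out[i, j] = neighbours / 8.0
    return out
```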
Dropped lines
• Although detectors onboard orbiting satellites are well tested
and calibrated before launch, an entire line containing no
spectral information may be produced if an individual detector
in a scanning system (e.g., Landsat MSS or Landsat 7 ETM+)
fails to function properly.
• If a detector in a linear array (e.g., SPOT XS, IRS, QuickBird)
fails to function, this can result in an entire column of data
with no spectral information. Such defects are due to errors in
the scanning or sampling equipment, in the transmission or
recording of image data or in reproduction of CCT's.
• The bad line or column is commonly called a line or column
drop-out and seen as horizontal black (pixel value 0) or white
(pixel value 255) lines on the image.
Dropped lines (corrections)
• Correction is a cosmetic operation; no real data are
available for the missing line
• It is based on spatial auto-correlation of
continuous physical phenomena (neighbouring
values tend to be similar)
Methods for dropped line correction
1. Replacement (line above or below)
2. Average of the lines above and below (see the sketch below)
3. Replacement based on correlation between bands
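A minimal sketch of method 2, averaging the lines above and below a dropped line (assumes a 2-D band array and a list of known bad row indices):

```python
import numpy as np

def fix_dropped_lines(band: np.ndarray, bad_rows: list[int]) -> np.ndarray:
    """Replace each dropped line with the mean of its neighbouring lines."""
    out = band.astype(float).copy()
    for r in bad_rows:
        if 0 < r < band.shape[0] - 1:
            out[r, :] = (out[r - 1, :] + out[r + 1, :]) / 2.0
        else:                         # first or last line: copy the adjacent line
            out[r, :] = out[1, :] if r == 0 else out[-2, :]
    return out
```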
Striping
• Horizontal or vertical (raw data), skewed (processed
data)
• Visible banding pattern over the whole image
• Changed characteristics of the sensor detectors
Striping (correction)
• To improve the visual appearance
• To represent equal ground leaving photon-flux with
the same DN
Methods for Striping correction
1. Use calibration data
No assumptions
2. Parametric histogram matching
Assumes equal area per class for each sensor
Assumes a linear sensor model and a normal
(Gaussian) distribution of the DN values
3. Non-parametric histogram matching
Assumes equal area per class for each sensor
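A minimal sketch of non-parametric histogram matching (method 3), adjusting the DN values recorded by one detector so their cumulative distribution matches that of a reference detector; this is a generic CDF-matching routine, not the exact procedure of any particular ground segment:

```python
import numpy as np

def match_histogram(detector_dns: np.ndarray, reference_dns: np.ndarray) -> np.ndarray:
    """Map detector DN values so their CDF matches that of a reference detector."""
    src_vals, src_counts = np.unique(detector_dns, return_counts=True)
    ref_vals, ref_counts = np.unique(reference_dns, return_counts=True)
    src_cdf = np.cumsum(src_counts) / detector_dns.size
    ref_cdf = np.cumsum(ref_counts) / reference_dns.size
    # For each source DN, find the reference DN with the same cumulative frequency
    lookup = np.interp(src_cdf, ref_cdf, ref_vals)
    return lookup[np.searchsorted(src_vals, detector_dns)]
```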
Atmosphere induced errors
HAZE
• Scattered light reaching the sensor from the atmosphere
• Additive effect, reducing CONTRAST
SUN ANGLE
• Time/seasonal effect changing the atmospheric path
• Multiplicative effect
SKYLIGHT
• Scattered light reaching the sensor after being reflected
from the Earth’s surface
• Multiplicative effect
Haze Correction
Dark object subtraction method
• Assumption: infrared bands are not affected by
Haze
• Identify black bodies: clear water and shadow
zones with zero reflectance in the infrared bands
• Identify DN values at shorter wavelength bands of
the same pixel positions. These DN are entirely
due to haze
• Subtract the minimum of the DN values related
to blackbodies of a particular band from all the
pixel values of that band
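A minimal sketch of the dark-object subtraction described above (assumes a dict of 2-D band arrays; the per-band haze value is simply taken as the minimum DN over pixels flagged as dark objects):

```python
import numpy as np

def dark_object_subtraction(bands: dict[str, np.ndarray], dark_mask: np.ndarray) -> dict:
    """Subtract, from each band, the minimum DN found over known dark-object pixels."""
    corrected = {}
    for name, band in bands.items():
        haze_dn = band[dark_mask].min()          # DN attributable to haze in this band
        corrected[name] = np.clip(band.astype(int) - int(haze_dn), 0, None)
    return corrected

# dark_mask would be derived from near-zero reflectance in the infrared bands
# (e.g. clear water or deep shadow), as described in the bullets above.
```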
Effects of Sun Illumination
• Position of the sun relative to the earth changes
depending on time of the day and the day of the
year
• Solar elevation angle: Time-and location
dependent
• In the northern hemisphere the solar elevation
angle is smaller in winter than in summer
• The solar zenith angle is equal to 90 degree minus
the solar elevation angle
• Irradiance varies with the seasonal changes in
solar elevation angle and the changing distance
between the earth and sun
Geometric Errors
These distortions may be due to several factors such as:
(i) the rotation of the Earth,
(ii) the motion of the scanning system,
(iii) the motion of the platform,
(iv) the platform altitude and attitude,
(v) the curvature of the Earth.
• The geometric distortions should be removed for the geometric
representation of the satellite imagery as close as possible to the
real world.
Geometric distortions are:
– Systematic
– Nonsystematic
• Systematic distortions are predictable in nature, and can
be accounted for by accurate modeling of the sensor and
platform motion, and the geometric relationship of the
platform with the Earth.
• Non-systematic distortions or random errors can not be
modeled and corrected in this way
SYSTEMATIC ERRORS
– Scan skew
– Mirror scan velocity
– Panoramic distortions
– Platform velocity
– Earth rotation
– Earth Curvature
Scan skew
• It is caused by the forward motion of the
platform during the time required for each
mirror sweep. The ground swath is not normal
to the ground track but is slightly skewed,
producing cross-scan geometric distortion. The
magnitude of correction is 0.082 km for MSS.
Mirror scan velocity
• The MSS mirror scanning rate is
usually not constant across a given
scan, producing along-scan geometric
distortion. The magnitude of the
correction is 0.37 km for MSS.
Panoramic distortions
For scanners used on spaceborne and
airborne remote sensing platforms, the
IFOV is constant. As a result the
effective pixel size on the ground is
larger at the extremities of the scan line
than at the nadir. This produces along-
scan distortion.
Platform velocity
• If the speed of the platform
changes the ground track covered
by successive mirror scans
changes producing along-track
scale distortion.
Earth rotation
• Rotation of earth in West-to-East
Direction
• Movement of satellite in North-
to-South Direction
Earth Curvature
• Aircraft scanning mechanisms, because of
their low altitude and small absolute
swath width, are not affected by earth
curvature.
• Neither are space systems like IRS,
Landsat and SPOT, because of the
narrowness of their swath. However, wide-
swath spaceborne imaging systems
are affected.
• e.g. NOAA, with a wide swath of 2700 km,
is affected by it. At the edges of the swath,
the area of the earth's surface viewed at a
given angular IFOV is larger than it would be
if the curvature of the earth were ignored.
NONSYSTEMATIC ERRORS
Platform altitude
If the platform
departs from its
normal altitude,
changes in scale
occur.
Attitude
One of the sensor system
axes is usually maintained
normal to the earth's
surface; departure from this
attitude introduces geometric distortion.
Digital Image
• An IMAGE is a Pictorial Representation of an object or a
scene.
Analog
Digital
What is a Digital Image ?
• Produced by Electro optical Sensors
• Composed of tiny equal areas, or picture elements, abbreviated
as pixels or pels, arranged in a rectangular array
• With each pixel is associated a number known as Digital
Number ( DN) or Brightness value (BV) or gray level which is
a record of variation in radiant energy in discrete form.
• An object reflecting more energy records a higher number for
itself on the digital image and vice versa.
• Digital Images of an area captured in different spectral ranges
(bands) by sensors onboard a remote sensing satellite.
• A pixel is referred to by its column, row, and band number.
Digital Image Data Formats
Commonly used formats
1) Band Sequential (BSQ)
2) Band Interleaved by Line (BIL)
3) Band Interleaved by Pixel (BIP)
• Each of these formats is usually preceded by
"header" and/or "trailer" information, which consists of
ancillary data about the date, altitude of the sensor,
attitude, sun angle, and so on.
• Such information is useful when geometrically or
radiometrically correcting the data.
Band Sequential Format (BSQ)
• Data for a single band for entire scene is written as one
file
• That is for each band there is separate file
Band Interleaved by Line (BIL)
• Data for all the bands are written line by line in the same
file.
• That is:
• line 1 band 1, line 1 band 2, line 1 band 3, line 2 band 1, line 2
band 2, line 2 band 3, line 3 band 1, line 3 band 2, line 3 band 3,
etc.
Band Interleave by Pixel (BIP)
• Data for the pixels in all bands are written together.
That is:
• pixel 1 band 1, pixel 1 band 2, pixel 1 band 3, pixel 2 band 1, pixel 2
band 2, pixel 2 band 3, pixel 3 band 1, pixel 3 band 2, pixel 3 band 3, etc.
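A minimal NumPy sketch of how the same three-band scene can be laid out in each format and how one pixel's band values are picked out of each (array shapes and the sample scene are illustrative only):

```python
import numpy as np

bands, lines, pixels = 3, 2, 4
scene = np.arange(bands * lines * pixels).reshape(bands, lines, pixels)  # (band, line, pixel)

bsq = scene                                # BSQ: all of band 1, then band 2, then band 3
bil = scene.transpose(1, 0, 2)             # BIL: (line, band, pixel) - bands interleaved per line
bip = scene.transpose(1, 2, 0)             # BIP: (line, pixel, band) - bands interleaved per pixel

# Brightness values of the pixel at line 1, column 2 in every band:
print(bsq[:, 1, 2])        # read from three separate band planes
print(bil[1, :, 2])        # read from one line record
print(bip[1, 2, :])        # read contiguously from one pixel record
```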
Advantages / Disadvantages
BSQ
• if one wanted the area in the center of a scene in four
bands, it would be necessary to read into this
location in four separate files to extract the desired
information. Many researchers like this format
because it is not necessary to read "serially" past
unwanted information if certain bands are of no
value
BIL/BIP
• It is a useful format if all the bands are to be used in
the analysis. If some bands are not of interest, the
format is inefficient since it is necessary to read
serially past all the unwanted data
Digital image processing
• Digital image processing can be defined as the
computer manipulation of digital values contained in
an image for the purposes of image correction, image
enhancement and feature extraction.
• Digital Image Processing A digital image processing
system consists of computer Hardware and Image
processing software necessary to analyze digital
image data
• DIGITAL IMAGE PROCESSING SYSTEM FUNCTIONS
Data Acquisition/Restoration
• {Compensates for data errors, noise and geometric
distortions introduced in the images during
acquisition and recording} i.e. Preprocessing
(Radiometric and Geometric)
Image Enhancement
• {Alters the visual impact of the image on the
interpreter to improve the information content}
Information Extraction
• {Utilizes the decision making capability of
computers to recognize and classify pixels on the
basis of their signatures, Hyperspectral image
analysis }
Others
• {Photogrammetric Information Extraction ,
Metadata and Image/Map Lineage
Documentation , Image and Map Cartographic
Composition, Geographic Information Systems
(GIS), Integrated Image Processing and GIS ,
Utilities}
Major Commercial Digital Image Processing Systems
•ERDAS IMAGINE
•Leica Photogrammetry Suite
•ENVI
•IDRISI
•ER Mapper
•PCI Geomatica
•eCognition
•MATLAB
•Intergraph
RECTIFICATION
• Rectification is a process of geometrically correcting an image so
that it can be represented on a planar surface,
conform to other images or conform to a map; i.e. it
is the process by which the geometry of an image is
made planimetric.
• It is necessary when accurate area , distance and
direction measurements are required to be made
from the imagery.
• It is achieved by transforming the data from one grid
system into another grid system using a geometric
transformation
• In other words process of establishing mathematical
relationship between the addresses of pixels in an
image with corresponding coordinates of those
pixels on another image or map or ground
• Two basic operations must be performed to
geometrically rectify a remotely sensed image to a
map coordinate system:
1.Spatial Interpolation: The geometric relationship
between input pixel location (row & column) and
associated map co-ordinates of the same point (x,y)
are identified.
• This establishes the nature of the geometric co-
ordinate transformation parameters that must be
applied to rectify the original input image (x,y) to its
proper position in the rectified output image (X,Y).
• Involves selecting Ground Control Points (GCPs)
and fitting polynomial equations using the least squares
technique.
• GROUND CONTROL POINT (GCP) is a
location on the surface of the Earth (e.g., a
road intersection) that can be identified on the
imagery and located accurately on a map.
• There are two distinct sets of coordinates
associated with each GCP: source or image
coordinates specified in i rows and j columns, and
Reference or map coordinates (e.g., x, y measured
in degrees of latitude and longitude, or meters in a
Universal Transverse Mercator projection).
• The paired coordinates (i, j and x, y) from many
GCPs can be modeled to derive geometric
transformation coefficients.
• These coefficients may be used to geometrically rectify
the remote sensor data to a standard datum and map
projection
• Accurate GCPs are essential for accurate rectification
• Sufficiently large number of GCPs should be selected
• Well dispersed GCPs result in more reliable rectification
• GCPs for large-scale imagery - road intersections,
airport runways, towers, buildings, etc.
• For small-scale imagery - larger features like urban areas
or geological features can be used
• NOTE: landmarks that can vary (like lakes, other water
bodies, vegetation, etc.) should not be used.
• GCPs should be spread across the image
• A minimum number of GCPs is required, depending on the type of
transformation
Intensity Interpolation
• Pixel brightness value must be determined.
• A pixel in the rectified image often requires a value
from the input pixel grid that does not fall neatly on
a row and column co-ordinate.
• For this reason a resampling mechanism is used to
determine the pixel brightness value (see the sketch below).
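A minimal sketch tying the two steps together: a first-order (affine) transformation fitted to GCPs by least squares (spatial interpolation), followed by nearest-neighbour resampling to fill each output pixel (intensity interpolation). This is a simplified illustration, not the full polynomial workflow of any particular package:

```python
import numpy as np

def fit_affine(image_rc: np.ndarray, map_xy: np.ndarray) -> np.ndarray:
    """Least-squares fit of row,col = [x, y, 1] @ coeffs from paired GCP coordinates."""
    ones = np.ones((map_xy.shape[0], 1))
    design = np.hstack([map_xy, ones])                 # (n_gcp, 3)
    coeffs, *_ = np.linalg.lstsq(design, image_rc, rcond=None)
    return coeffs                                      # (3, 2): maps map x,y -> image row,col

def rectify_nearest(band: np.ndarray, coeffs: np.ndarray,
                    out_shape: tuple, xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    """Fill each output cell with the nearest input pixel value (inverse mapping)."""
    out = np.zeros(out_shape, dtype=band.dtype)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            r, c = np.array([x, y, 1.0]) @ coeffs      # predicted input row, col
            r, c = int(round(r)), int(round(c))
            if 0 <= r < band.shape[0] and 0 <= c < band.shape[1]:
                out[i, j] = band[r, c]
    return out
```

Inverse mapping (output cell back to input pixel) is used so every output cell receives a value; bilinear or cubic convolution resampling could replace the nearest-neighbour lookup.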
Image Enhancement
• Image enhancement techniques improve the quality of
an image as perceived by a human. These techniques
are most useful because many satellite images when
examined on a colour display give inadequate
information for image interpretation.
• Modification of an image to alter its impact on the viewer.
Enhancements are used to make visual interpretation
and understanding of imagery easier.
• Process of making an image more interpretable for a
particular application to accentuate certain image
features for subsequent analysis or for image display .
• Useful since many satellite images give inadequate
information for image interpretation. The contrast
stretch, density slicing, edge enhancement, and spatial
filtering are the more commonly used techniques.
• Image enhancement is attempted after the image is
corrected for geometric and radiometric distortions
RADIOMETRIC ENHANCEMENT
Modification of brightness values of each pixel in an
image data set independently (Point operations).
SPECTRAL ENHANCEMENT
Enhancing images by transforming the values of each
pixel on a multiband basis
SPATIAL ENHANCEMENT
Modification of pixel values based on the values of
surrounding pixels. (Local operations)
Contrast
Contrast generally refers to the difference in luminance
or grey level values in an image and is an important
characteristic. It can be defined as the ratio of the
maximum intensity to the minimum intensity over an
image.
Reasons for low contrast of image data
I. The individual objects and the background that make up the
scene themselves have a low contrast ratio.
II. Scattering of electromagnetic energy by the
atmosphere can reduce the contrast of a scene.
III. The remote sensing system may lack sufficient
sensitivity to detect and record the contrast of the terrain.
Contrast Enhancement
• Expands the original input values to make use of the
total range of the sensitivity of the display device.
Contrast enhancement techniques expand the range of
brightness values in an image so that the image can be
efficiently displayed in a manner desired by the analyst.
The density values in a scene are literally pulled farther
apart, that is, expanded over a greater range.
• The effect is to increase the visual contrast between two
areas of different uniform densities. This enables the
analyst to discriminate easily between areas initially
having a small difference in density.
Linear Contrast Enhancement
This is the simplest contrast stretch algorithm. The grey
values in the original image and the modified image follow a
linear relation in this algorithm.
A density number in the low range of the original histogram
is assigned to extremely black, and a value at the high end is
assigned to extremely white. The remaining pixel values are
distributed linearly between these extremes.
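A minimal sketch of a min-max linear stretch to an 8-bit display range (a percentile-based saturation stretch, a common variant, is noted in the comment; the 2nd/98th percentiles are an illustrative choice):

```python
import numpy as np

def linear_stretch(band: np.ndarray, low: float = None, high: float = None) -> np.ndarray:
    """Linearly map [low, high] DN values to the full 0-255 display range."""
    low = band.min() if low is None else low
    high = band.max() if high is None else high
    stretched = (band.astype(float) - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# A saturation stretch would clip a small tail at each end first, e.g.:
# low, high = np.percentile(band, (2, 98))
```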
Non Linear Contrast Enhancement
• In these methods, the input and output data values follow a
non-linear transformation. The general form of the non-
linear contrast enhancement is defined by y = f (x), where x
is the input data value and y is the output data value.
• The non-linear contrast enhancement techniques have been
found to be useful for enhancing the colour contrast
between nearby classes and subclasses of a main class.
Histogram Equalization
This is another non-linear contrast enhancement
technique. In this technique, histogram of the original
image is redistributed to produce a uniform population
density. This is obtained by grouping certain adjacent
grey values. Thus the number of grey levels in the
enhanced image is less than the number of grey levels
in the original image.
• Image analysts must be aware that while histogram
equalization often provides an image with the most
contrast of any enhancement technique, it may hide
much needed information.
• If one is trying to bring out information about data in
terrain shadows, or there are clouds in your data,
histogram equalization may not be appropriate.
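A minimal sketch of histogram equalization for an 8-bit band, built from the cumulative histogram (a generic formulation, not any specific package's implementation):

```python
import numpy as np

def equalize(band: np.ndarray, levels: int = 256) -> np.ndarray:
    """Redistribute DN values so the output histogram is approximately uniform."""
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())       # normalise to 0..1
    lookup = np.round(cdf * (levels - 1)).astype(np.uint8)  # new value for each old DN
    return lookup[band]                                     # band assumed integer-valued
```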
Spatial Filtering
• Spatial Filtering is the process of dividing the image into its
constituent spatial frequencies, and selectively altering
certain spatial frequencies to emphasize some image
features.
• Spatial frequency is defined as the number of changes in
brightness value per unit distance for any particular part of
an image. If there are very few changes in brightness value
over a given area in an image, this is referred to as a low-
frequency area. Conversely, if the brightness values change
dramatically over short distances, this is an area of high
frequency.
• Process of suppressing (de-emphasizing) certain frequencies
& passing (emphasizing) others.
• This technique increases the analyst’s ability to discriminate
detail.
• Local operation i.e. pixel value is modified based on the
values surrounding it.
• Used for enhancing certain features
• Removal of noise.
• Smoothening of image
• Ability to discriminate detail. The three types of spatial
filters used in remote sensor data processing are : Low pass
filters, Band pass filters and High pass filters.
Spatial Convolution Filtering
A linear spatial filter is a filter for which the brightness
value (BV i.j) at location i, j in the output image is a
function of some weighted average (linear combination) of
brightness values located in a particular spatial pattern
around the i, j location in the input image. This process of
evaluating the weighted neighboring pixel values is called
two-dimensional convolution filtering.
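A minimal sketch of two-dimensional convolution with a 3 × 3 kernel, shown here with a low-pass (mean) kernel; a high-pass kernel could be substituted in the same routine (edges are handled crudely by leaving a one-pixel border unfiltered):

```python
import numpy as np

MEAN_3X3 = np.full((3, 3), 1.0 / 9.0)          # low-pass (smoothing) kernel

def convolve3x3(band: np.ndarray, kernel: np.ndarray = MEAN_3X3) -> np.ndarray:
    """Weighted average of each pixel's 3x3 neighbourhood (linear spatial filter)."""
    out = band.astype(float).copy()
    rows, cols = band.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = band[i - 1:i + 2, j - 1:j + 2].astype(float)
            out[i, j] = (window * kernel).sum()
    return out

# Example high-pass (edge-enhancement) kernel:
# kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
```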
Filter Types
Low Pass Filters
• Block high frequency details
• Has a smoothening effect on images.
• Used for removal of noise
• Removal of “salt & pepper” noise
• Blurring of image especially at edges.
• High Pass Filters
• Preserves high frequencies and Removes slowly varying
components
• Emphasizes fine details
• Used for edge detection and enhancement
• Edges - Locations where transition from one category to other
occurs
Low Pass Filters
• Mean Filter
• Median Filter
• Mode Filter
• Minimum Filter
• Maximum Filter
• Olympic Filter
High Pass Filtering
–Linear
• Output brightness value is a function of linear combination of BV’s located
in a particular spatial pattern around the i,j location in the input image
–Non Linear
• use non linear combinations of pixels
• Edge Detection - Background is lost
• Edge Enhancement
• Delineates Edges and makes the shapes and details more prominent
• Background is not lost.
Density Slicing
Density slicing is the process in which the pixel values are
sliced into different ranges and, for each range, a single
value or colour is assigned in the output image. It is also
known as level slicing.
Density slicing is a digital data interpretation method
used in analysis of remotely sensed imagery to enhance
the information gathered from an individual brightness
band. Density slicing is done by dividing the range of
brightnesses in a single band into intervals, then assigning
each interval to a colour.
Density slicing may be thus used to introduce color to a single band
image. Density slicing is useful in enhancing images, particularly if
the pixel values are within a narrow range. It enhances the contrast
between different ranges of the pixel values.Remote Sensing-Digital
Image Processing-Image Enhancement
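A small density-slicing sketch, assuming NumPy is available; the slice boundaries and colours below are purely illustrative.

import numpy as np

band = np.random.randint(0, 256, size=(100, 100))      # hypothetical single-band image

bounds = [64, 128, 192]                                 # four slices: <64, 64-127, 128-191, >=192
colours = np.array([[0, 0, 255],                        # slice 0 -> blue
                    [0, 255, 0],                        # slice 1 -> green
                    [255, 255, 0],                      # slice 2 -> yellow
                    [255, 0, 0]], dtype=np.uint8)       # slice 3 -> red

slice_index = np.digitize(band, bounds)                 # interval index (0-3) for every pixel
colour_image = colours[slice_index]                     # (rows, cols, 3) pseudo-colour output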
• The multispectral image data is usually strongly
correlated from one band to the other. The level
of a given picture element on one band can to
some extent be predicted from the level of that
same pixel in another band.
• Principal component analysis is a pre-processing transformation that creates new, uncorrelated images from the original correlated bands. This is accomplished by a linear transformation of variables that corresponds to a rotation and translation of the original coordinate system.
Principal component analysis
• Principal component analysis operates on all
bands together. Thus, it alleviates the difficulty of
selecting appropriate bands associated with the
band ratioing operation. Principal components
describe the data more efficiently than the
original band reflectance values. The first
principal component accounts for a maximum
portion of the variance in the data set, often as
high as 98%. Subsequent principal components
account for successively smaller portions of the
remaining variance.
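A compact sketch of the principal component transformation, assuming NumPy is available and a hypothetical (rows, cols, bands) image array: the band-to-band covariance matrix is eigendecomposed and every pixel vector is rotated onto the eigenvectors, ordered by decreasing explained variance.

import numpy as np

def principal_components(image):
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    pixels -= pixels.mean(axis=0)                    # centre each band
    cov = np.cov(pixels, rowvar=False)               # band-to-band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]                # re-order: largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = pixels @ eigvecs                           # rotate pixel vectors into PC space
    return pcs.reshape(rows, cols, bands), eigvals / eigvals.sum()

image = np.random.rand(100, 100, 4)                  # hypothetical 4-band image
pc_image, explained = principal_components(image)    # explained[0] is PC1's share of the variance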
The Normalized Difference Vegetation Index (NDVI)
• The Normalized Difference Vegetation Index (NDVI) is
a numerical indicator that uses the visible and near-
infrared bands of the electromagnetic spectrum, and is
adopted to analyze remote sensing measurements and
assess whether the target being observed contains live
green vegetation or not.
• The Normalised Difference Vegetation Index (NDVI)
gives a measure of the vegetative cover on the land
surface over wide areas. Dense vegetation shows up very
strongly in the imagery, and areas with little or no
vegetation are also clearly identified. NDVI also
identifies water and ice
• Generally, healthy vegetation will absorb most of the
visible light that falls on it, and reflects a large portion
of the near-infrared light. Unhealthy or sparse
vegetation reflects more visible light and less near-
infrared light. Bare soils on the other hand reflect
moderately in both the red and infrared portion of the
electromagnetic spectrum.
• The NDVI algorithm subtracts the red reflectance values from the near-infrared and divides the result by the sum of the near-infrared and red bands.
• The method, developed by NASA, is known as the Normalized Difference Vegetation Index (NDVI) and is given by the equation
NDVI = (NIR - RED) / (NIR + RED)
where RED and NIR are the reflectances in the red and near-infrared bands (channels 1 and 2, respectively). By normalizing the difference in this way, the values can be scaled between -1 and +1. This also reduces the influence of atmospheric absorption. Water typically has an NDVI value less than 0, bare soils between 0 and 0.1, and vegetation over 0.1.
Cover type          RED     NIR     NDVI
Dense vegetation    0.1     0.5     0.7
Dry bare soil       0.269   0.283   0.025
Clouds              0.227   0.228   0.002
Snow and ice        0.375   0.342  -0.046
Water               0.022   0.013  -0.257
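A short NDVI sketch, assuming NumPy; red and nir stand for hypothetical red and near-infrared reflectance arrays.

import numpy as np

def ndvi(red, nir):
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    # Guard against division by zero where both bands are zero.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Dense vegetation and water values from the table above.
print(ndvi([0.1, 0.022], [0.5, 0.013]))     # approximately [0.67, -0.26]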
IMAGE CLASSIFICATIONS
• The overall objective of image classification is to automatically categorize all pixels in an image into land cover classes or themes.
• Normally, multispectral data are used to perform
the classification, and the spectral pattern present
within the data for each pixel is used as numerical
basis for categorization.
• That is, different feature types manifest different combinations of DNs based on their inherent spectral reflectance and emittance properties.
• The term classifier refers loosely to a computer
program that implements a specific procedure for
image classification.
• Over the years scientists have devised many
classification strategies. From these alternatives
the analyst must select the classifier that will best
accomplish a specific task.
• At present it is not possible to state that a given classifier is “best” for all situations, because the characteristics of each image and the circumstances of each study vary so greatly. Therefore, it is essential that the analyst understands the alternative strategies for image classification.
• The traditional methods of classification mainly follow
two approaches: unsupervised and supervised
Unsupervised Classification
• Unsupervised classifiers do not utilize training data as
the basis for classification. Rather, this family of
classifiers involves algorithms that examine the
unknown pixels in an image and aggregate them into a
number of classes based on the natural groupings or
clusters present in the image values.
• The classes that result from unsupervised classification are spectral classes: because they are based solely on the natural groupings in the image values, the identity of the spectral classes will not initially be known.
• There are numerous clustering algorithms that can be used to determine the natural spectral groupings present in a data set. One common form of clustering, called the “K-means” approach, also known as ISODATA (Iterative Self-Organizing Data Analysis Technique), accepts from the analyst the number of clusters to be located in the data.
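A bare-bones K-means clustering sketch in NumPy (assumed available), illustrating how unknown pixels can be aggregated into spectral clusters; the cluster count, iteration count and data are illustrative only, and operational ISODATA implementations add cluster splitting and merging rules on top of this loop.

import numpy as np

def kmeans(pixels, k=5, iterations=20, seed=0):
    # pixels: (n_pixels, n_bands) array of spectral vectors.
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), size=k, replace=False)]   # initial cluster means
    for _ in range(iterations):
        # Assign every pixel to its spectrally nearest cluster mean.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each cluster mean from its member pixels.
        for c in range(k):
            if np.any(labels == c):
                means[c] = pixels[labels == c].mean(axis=0)
    return labels, means

image = np.random.rand(100, 100, 4)                     # hypothetical 4-band image
labels, means = kmeans(image.reshape(-1, 4), k=5)
cluster_map = labels.reshape(100, 100)                  # spectral classes, identities still unknown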
Advantages
• 1. No extensive prior knowledge of the region is
required.
• 2. The opportunity for human error is minimized.
• 3. Unique classes are recognized as distinct units
Disadvantages and limitations
• Unsupervised classification identifies spectrally homogeneous classes within the data; these classes do not necessarily correspond to the informational categories that are of interest to the analyst. As a result, the analyst is faced with the problem of matching the spectral classes generated by the classification to the informational classes that are required by the ultimate user of the information.
• Spectral properties of specific informational classes will change over time (on a seasonal basis, as well as from year to year). As a result, relationships between informational classes and spectral classes are not constant, and relationships defined for one image can seldom be extended to others.
Supervised classification
• Supervised classification can be defined as the process of using samples of known identity to classify pixels of unknown identity. Samples of known identity are those pixels located within training areas. Pixels located within these areas form the training samples used to guide the classification algorithm in assigning specific spectral values to the appropriate informational class.
• The basic steps involved in a typical supervised classification procedure are
1. The training stage
2. Feature selection
3. Selection of appropriate classification algorithm
4. Post classification smoothening
5. Accuracy assessment
Training data
• Training fields are areas of known identity delineated on
the digital image, usually by specifying the corner points
of a rectangular or polygonal area using line and column
numbers within the coordinate system of the digital
image. The analyst must, of course, know the correct
class for each area.
KEY CHARACTERISTICS OF TRAINING AREAS
• Shape
• Location
• Number
• Placement
• Uniformity
• Various supervised classification algorithms may be
used to assign an unknown pixel to one of a number of
classes. The choice of a particular classifier or decision
rule depends on the nature of the input data and the
desired output. Among the most frequently used
classification algorithms are the parallelepiped,
minimum distance, and maximum likelihood decision
rules.
1. Parallelepiped Classification
• It is a very simple supervised classifier. For each class, the minimum and maximum pixel values of the training pixels in each band define a rectangular decision region (illustrated below using two image bands).
• Although the parallelepiped classifier is among the simplest and fastest classification techniques, it is not the most accurate. It leaves many unclassified pixels and can also have overlap between training classes. The data values of the candidate pixel are compared to the upper and lower limits.
• [Pixels inside the rectangle (defined in standard deviations) are assigned the value of that class signature.
• Pixels outside the rectangle (defined by standard deviations) are assigned a value of zero (NULL).
• Disadvantages: poor accuracy, and potential for a large number of pixels classified as NULL.
• Advantages: a speedy algorithm useful for quick results.]
[Figure: two-band feature space (brightness values 0-255) showing rectangular parallelepiped decision regions around the training data.]
2.Minimum distance to means classification
• It is based on the minimum distance decision rule that
calculates the spectral distance between the
measurement vector for the candidate pixel and the
mean vector for each sample. Then it assigns the
candidate pixel to the class having the minimum
spectral distance.
• [Every pixel is assigned to a category based on its distance from the cluster means.
• Standard Deviation is not taken into account.
• Disadvantages: generally produces poorer classification
results than maximum likelihood classifiers.
• Advantages: Useful when a quick examination of a
classification result is required. ]
[Figure: two-band feature space (brightness values 0-255) showing class mean vectors and minimum-distance decision boundaries.]
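A minimal minimum-distance-to-means sketch, assuming NumPy; the training samples and class names are hypothetical.

import numpy as np

def minimum_distance_classify(pixels, training):
    # pixels: (n_pixels, n_bands); training: {class name: (n_samples, n_bands) array}.
    names = list(training)
    means = np.array([training[name].mean(axis=0) for name in names])    # class mean vectors
    # Spectral (Euclidean) distance from every candidate pixel to every class mean.
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return np.array(names)[dists.argmin(axis=1)]

training = {
    "water":  np.array([[20.0, 10.0], [25.0, 12.0], [22.0, 11.0]]),     # hypothetical 2-band samples
    "forest": np.array([[60.0, 180.0], [65.0, 175.0], [58.0, 183.0]]),
}
pixels = np.array([[23.0, 11.0], [62.0, 178.0]])
print(minimum_distance_classify(pixels, training))      # ['water' 'forest']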
3. Maximum Likelihood Classification
• This classifier uses the training data to estimate the means and variances of the classes, which are then used to estimate class probabilities; it thereby also accounts for the variability of brightness values within each class.
• This classifier is based on Bayesian probability theory. It is one of the most powerful classification methods when accurate training data are provided, and one of the most widely used algorithms.
• [Pixels inside of a stated threshold (Standard Deviation
Ellipsoid) are assigned the value of that class signature.
• Pixels outside of a stated threshold (Standard Deviation
Ellipsoid) are assigned a value of zero (NULL).
Disadvantages: much slower than the minimum distance or parallelepiped classification algorithms; potential for a large number of NULL pixels.
Advantages: more “accurate” results (depending on the
quality of ground truth, and whether or not the class has
a normal distribution).]
[Figure: two-band feature space (brightness values 0-255) showing equal-probability ellipses around each class mean for maximum likelihood classification.]
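A Gaussian maximum likelihood sketch, assuming NumPy and the same hypothetical training dictionary used in the minimum-distance example: each class is modelled by the mean vector and covariance matrix of its training samples, and every pixel is assigned to the class with the highest log-likelihood.

import numpy as np

def maximum_likelihood_classify(pixels, training):
    # pixels: (n_pixels, n_bands); training: {class name: (n_samples, n_bands) array}.
    names, log_likes = list(training), []
    for name in names:
        samples = np.asarray(training[name], dtype=float)
        mean = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False)              # class variance-covariance matrix
        inv_cov = np.linalg.inv(cov)
        diff = pixels - mean
        mahal = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)   # squared Mahalanobis distance
        # Gaussian log-likelihood (constant term dropped; it does not affect the argmax).
        log_likes.append(-0.5 * (np.log(np.linalg.det(cov)) + mahal))
    return np.array(names)[np.argmax(np.vstack(log_likes), axis=0)]

It can be called like the minimum-distance sketch above, provided each class has enough non-degenerate training samples for its covariance matrix to be invertible; in practice a per-class likelihood threshold is added so that outlying pixels are assigned NULL, as described in the bracketed notes.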
Supervised Classification
• Classes are defined in advance
• Classification errors can be identified
• Classes are based on information categories
• Selecting training data is time consuming and tedious, and the selected data may not be sufficient
• Defined classes may not match the natural spectral classes
Unsupervised Classification
• Classes are unknown in advance
• No training-stage classification error
• Classes are based on spectral properties
• Clusters may be unexpected or unidentifiable
• Posterior cluster identification is time consuming and tedious
Fuzzy Classification
Soft or fuzzy classifiers - a pixel does not belong fully to one class but has different degrees of membership in several classes. The mixed pixel problem is more pronounced in lower resolution data. In fuzzy classification, or pixel unmixing, the proportion of the land cover classes within a mixed pixel is calculated. For example, a vegetation classification might include a pixel with grades of 0.68 for class “forest”, 0.29 for “street” and 0.03 for “grass”.
[Illustration: hard classification assigns each brightness range to a single class (0-30 -> Water, 30-60 -> Forest wetland, 60-90 -> Upland Forest), whereas fuzzy classification assigns each pixel partial membership in several classes.]
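Pixel unmixing is often posed as a linear mixture problem; the sketch below (assuming NumPy, with hypothetical endmember spectra) recovers approximate class proportions for a mixed pixel by least squares, which is one simple way to obtain the kind of membership grades described above, though it is not the only fuzzy classification method.

import numpy as np

def unmix(pixel, endmembers):
    # pixel: (n_bands,) mixed spectrum; endmembers: (n_classes, n_bands) pure class spectra.
    fractions, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
    fractions = np.clip(fractions, 0.0, None)            # no negative proportions
    return fractions / fractions.sum()                   # proportions sum to one

endmembers = np.array([[0.05, 0.08, 0.45],               # hypothetical "forest" spectrum (3 bands)
                       [0.20, 0.22, 0.25],               # hypothetical "street" spectrum
                       [0.08, 0.12, 0.55]])              # hypothetical "grass" spectrum
mixed_pixel = 0.68 * endmembers[0] + 0.29 * endmembers[1] + 0.03 * endmembers[2]
print(unmix(mixed_pixel, endmembers))                    # approximately [0.68, 0.29, 0.03]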
ACCURACY ASSESSMENT
Classification Accuracy Assessment
• Quantitatively assessing classification accuracy
requires the collection of some in situ data or a
priori knowledge about some parts of the terrain,
which can then be compared with the remote
sensing derived classification map.
• Thus, to assess classification accuracy, it is necessary to compare two classification maps: 1) the remote sensing derived map, and 2) the assumed true map (which in fact may itself contain some error). The assumed true map may be derived from in situ investigation or, quite often, from the interpretation of remotely sensed data obtained at a larger scale or higher resolution.
Classification Error Matrix
• One of the most common means of expressing classification
accuracy is the preparation of classification error matrix
sometimes called confusion or a contingency table. Error matrices
compare on a category-by-category basis, the relationship
between known reference data (ground truth) and the
corresponding results of an automated classification.
• Such matrices are square, with the number of rows and columns equal to the number of categories whose classification accuracy is being assessed. The table below shows an error matrix that an image analyst has prepared to determine how well a classification has categorized a representative subset of pixels used in the training process of a supervised classification. The matrix stems from classifying the sampled training-set pixels and listing the known cover types used for training (columns) versus the pixels actually classified into each land cover category by the classifier (rows).
Classification (rows) versus Ground Truth (columns)
LULC DF OF TC SC DC BA GS SN WB WE BU Total
DF 10 3 0 0 0 0 0 0 0 0 0 13
OF 1 15 2 0 0 0 0 0 0 0 0 18
TC 1 1 29 0 0 0 1 0 0 0 0 32
SC 0 0 0 8 1 0 0 0 0 0 0 9
DC 0 0 0 0 4 0 0 0 0 0 1 5
BA 0 0 0 0 0 4 0 0 0 0 0 4
GS 0 0 1 0 0 0 1 0 0 0 0 2
SN 0 0 0 0 0 0 0 3 0 0 0 3
WB 0 0 0 0 0 0 0 0 5 0 0 5
WE 0 0 0 0 0 0 0 0 0 1 0 1
BU 0 0 0 0 0 0 0 0 0 0 4 4
Total 12 19 32 8 5 4 2 3 5 1 5 96
Overall accuracy (%) 87.5
Producer's accuracy, PA (%) 83.33 78.95 90.63 100.0 80.0 100.0 50.0 100.0 100.0 100.0 80.0
User's accuracy, UA (%) 76.92 83.33 90.63 88.89 80.0 100.0 50.0 100.0 100.0 100.0 100.0
Kappa 0.85
Kappa coefficient
• Discrete multivariate techniques have been used to statistically evaluate the accuracy of remote sensing derived maps and error matrices since 1983 and are widely adopted. These techniques are appropriate because remotely sensed data are discrete rather than continuous, and are binomially or multinomially distributed rather than normally distributed.
• Kappa analysis is a discrete multivariate technique for accuracy assessment. Kappa analysis yields a Khat statistic that is a measure of agreement or accuracy. The Khat statistic is computed as
Khat = [N * Σ(xii) - Σ(xi+ * x+i)] / [N^2 - Σ(xi+ * x+i)]
• where r is the number of rows in the matrix, xii is the number of observations in row i and column i, xi+ and x+i are the marginal totals for row i and column i respectively, N is the total number of observations, and the sums run over i = 1 to r.
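A short sketch, assuming NumPy, that computes overall accuracy, producer's and user's accuracies, and the Khat statistic from an error matrix arranged as above (rows = classification, columns = ground truth).

import numpy as np

def accuracy_metrics(matrix):
    matrix = np.asarray(matrix, dtype=float)
    n = matrix.sum()                                  # total number of observations N
    diag = np.diag(matrix)                            # correctly classified pixels x_ii
    row_tot = matrix.sum(axis=1)                      # x_i+ (classification totals)
    col_tot = matrix.sum(axis=0)                      # x_+i (ground-truth totals)
    overall = diag.sum() / n
    producers = diag / col_tot                        # producer's accuracy per class
    users = diag / row_tot                            # user's accuracy per class
    chance = (row_tot * col_tot).sum()
    khat = (n * diag.sum() - chance) / (n ** 2 - chance)
    return overall, producers, users, khat

Applied to the 11-class matrix above, it returns an overall accuracy of 0.875 and a Khat of about 0.85, matching the tabulated values.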
Data Fusion is a process dealing with data and
information from multiple sources to achieve
refined/improved information for decision-making
Often, in case of data fusion, not only remote sensing
images are fused but also further ancillary data (e.g.
topographic maps, GPS coordinates, geophysical
information etc.) contribute to the resulting image.
Image Fusion is the combination of two or more
different images to form a new image by using a
certain algorithm.
IMAGE FUSION
• Image fusion is the process of combining images from different sources and related information from associated datasets to produce a resultant image that retains the most desirable information of each of the input images.
Why image fusion
 Most of the sensors operate in two modes: multispectral
mode and the panchromatic mode.
 Usually the multispectral mode has a better spectral
resolution than the panchromatic mode.
 Most of the satellite sensors are such that the panchromatic mode has a better spatial resolution than the multispectral mode.
To combine the advantages of spatial and spectral
resolutions of two different sensors, image fusion
techniques are applied
[Diagram: a fine-resolution PAN image is fused with a coarse-resolution XS (multispectral) image to produce a fused high-resolution XS image.]
Aim and use of Image Fusion
Sharpening of Images
• improvement of spatial resolution
• preserve the interesting spectral resolution
 Enhance features not visible in either of the single
data alone
• improved interpretation results, due to the
different physical characteristics of the data
 Change Detection
 Substitute missing information in one image with
signals from another sensor image ( e.g. clouds in
NIR and Shadows in SAR)
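Many fusion algorithms exist; as one simple illustration (a Brovey-style ratio transform, not a method singled out by this text), the sketch below, assuming NumPy and hypothetical ms and pan arrays already resampled to the same grid, injects the fine spatial detail of the panchromatic band into the multispectral bands.

import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    # ms: (rows, cols, bands) multispectral image resampled to the PAN grid;
    # pan: (rows, cols) high-resolution panchromatic band.
    ms = np.asarray(ms, dtype=float)
    pan = np.asarray(pan, dtype=float)
    intensity = ms.sum(axis=2, keepdims=True) + eps       # avoid division by zero
    # Scale each band by the ratio of the sharp PAN signal to the coarse MS intensity.
    return ms * (pan[..., None] / intensity)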
Image Interpretation
• Image interpretation is defined as the act of examining
images to identify objects and judge their significance.
An interpreter studies remotely sensed data and attempts
through logical process to detect, identify, measure and
evaluate the significance of environmental and cultural
objects, patterns and spatial relationships. It is an
information extraction process.
Methods: on hard copy; on digital image
• Why image interpretation difficult compared to everyday
visual interpretation ?
– loss of sense of depth
– viewing perspective different
– different scale
ELEMENTS OF IMAGE INTERPRETATION
Shape
General form/structure of objects: regular/irregular
Numerous components of the environment can be
identified with reasonable certainty merely by their shape.
This is true of both natural features and man-made objects.
Size
FUNCTION OF SCALE : Relative size is important
In many cases, the length, breadth, height, area and/or volume of an object can be significant, whether these are broad surface features or individual objects (e.g. different tree species). The approximate size of many objects can be judged by comparison with familiar features (e.g. roads) in the same scene.
Tone
RELATIVE BRIGHTNESS OR COLOR
We have seen how different objects emit or reflect different wavelengths and intensities of radiant energy. Such differences may be recorded as variations of picture tone, colour or density, which enable discrimination of many spatial variables, for example different crop types on land, or water bodies of contrasting depths or temperatures at sea. The terms 'light', 'medium' or 'dark' are used to describe variations in tone.
Pattern
 Spatial arrangement of visibly discernible objects
 Repetitive patterns of both natural and cultural features are quite
common, which is fortunate because much image interpretation is
aimed at the mapping and analysis of relatively complex features
rather than the more basic units of which they may be composed. Such
features include agricultural complexes (e.g. farms and orchards) and
terrain features (e.g. alluvial river valleys and coastal plains).
Texture
Arrangement and frequency of tonal variation (closely associated with tone); e.g. smooth versus irregular. The same tone but different textures are possible!
Texture is an important image characteristic closely associated with tone in the sense that it is a quality that permits two areas of the same overall tone to be
differentiated on the basis of microtonal patterns. Common
image textures include smooth, rippled, mottled, lineated
and irregular. Unfortunately, texture analysis tends to be
rather subjective, since different interpreters may use the
same terms in slightly different ways. Texture is rarely the
only criterion of identification or correlation employed in
interpretation. More often it is invoked as the basis for a
subdivision of categories already established using more
fundamental criteria. For example:two rock units may have
the same tone but different textures.
Site / Association
•Relationship with other recognizable features in proximity to
target of interest
•At an advanced stage in image interpretation, the location of an
object with respect to terrain features of other objects may be
helpful in refining the identification and classification of certain
picture contents. For example some tree species are found more
commonly in one topographic situation than in others, while in
industrial areas the association of several clustered, identifiable
structures may help us determine the precise nature of the local
enterprise. For example, the combination of one or two tall
chimneys, a large central building, conveyors, cooling towers
and solid fuel piles point to the correct identification of a thermal
power station
Shadow
Hidden profiles may be revealed in silhouette (e.g. the shapes of
buildings or the forms of field boundaries). Shadows are
especially useful in geomorphological studies where micro relief
features may be easier to detect under conditions of low-angle
solar illumination than when the sun is high in the sky.
Unfortunately, deep shadows in areas of complex detail may
obscure significant features, e.g. the volume and distribution of
traffic on a city street.
Resolution
Capability to distinguish two closely spaced objects
Resolution of a sensor system may be defined as its capability to
discriminate two closely spaced objects from each other. More than
most other picture characteristics, resolution depends on aspects of the
remote sensing system itself, including its nature, design and
performance, as well as the ambient conditions during the sensing
programme and subsequent processing of the acquired data. An
interpreter must have a knowledge about the resolution of various
remote sensing data products.
Global Positioning System(GPS)
• In 1973 the U.S. DOD decided to establish, develop, test, acquire, and deploy a spaceborne Global Positioning System (GPS), resulting in the NAVSTAR GPS (NAVigation Satellite Timing And Ranging Global Positioning System).
• It is an all-weather, space-based navigation system developed by the U.S. DOD to satisfy the requirements of the military forces to accurately determine their position, velocity, and time in a common reference system, anywhere on or near the Earth on a continuous basis.
• The Global Positioning System (GPS) is a space-
based satellite navigation system that provides
location and time information in all weather
conditions, anywhere on or near the Earth where
there is a clear line of sight to four or more GPS
satellites
• GPS was developed by the US Department of Defense to provide simple, accurate navigation
• GPS uses satellites and computers to compute positions anywhere on earth
• 24 active satellites in space provide the geographical position information
• Determining a 3-dimensional position (latitude, longitude and altitude) accurately requires information from at least 4 satellites, as illustrated in the sketch that follows
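As a rough illustration of why at least four satellites are needed (three position coordinates plus the receiver clock bias), the sketch below, assuming NumPy and hypothetical satellite coordinates and pseudoranges, solves the pseudorange equations by iterative least squares (Gauss-Newton); real receivers apply many additional corrections that are omitted here.

import numpy as np

def solve_position(sat_pos, pseudoranges, iterations=10):
    # sat_pos: (n, 3) satellite ECEF coordinates in metres, n >= 4;
    # pseudoranges: (n,) measured pseudoranges in metres.
    x = np.zeros(3)        # initial position guess at the earth's centre
    b = 0.0                # receiver clock bias expressed in metres (c * dt)
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_pos - x, axis=1)
        residual = pseudoranges - (ranges + b)
        # Jacobian rows: unit vector from satellite towards the receiver, plus the clock column.
        H = np.hstack([(x - sat_pos) / ranges[:, None], np.ones((len(sat_pos), 1))])
        dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
        x, b = x + dx[:3], b + dx[3]
    return x, b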
GPS General Characteristics
• Developed by the US DOD
• Provides
– Accurate Navigation
• 10 - 20 m
– Worldwide Coverage
– 24 hour access
– Common Coordinate System
• Designed to replace existing
navigation systems
• Accessible by Civil and Military
• Development costs estimate ~$12 billion
• Annual operating cost ~$400 million
• 3 Segments:
– Space: Satellites
– User: Receivers
– Control: Monitor & Control stations
• Prime Space Segment contractor: Rockwell International
• Coordinate Reference: WGS-84
• Operated by US Air Force Space Command (AFSPC)
– Mission control center operations at Schriever (formerly
Falcon) AFB, Colorado Springs
Control Segment
1 Master Station
5 Monitoring Stations
Space Segment
NAVSTAR : NAVigation
Satellite Time and Ranging
24 Satellites (30)
20200 Km
User Segment
Receive Satellite Signal
GPS System Components
Space Segment
• 24 Satellites
– 4 satellites in 6
Orbital Planes
inclined at 55 Degrees
• 20200 Km above the Earth
• 12 Hourly orbits
– In view for 4-5 hours
• Designed to last 7.5 years
• Different Classifications
– Block 1, 2, 2A, 2R & 2 F
Control Segment
[Map: master control station at Colorado Springs, with monitor stations and ground antennas at Hawaii, Ascension Island, Diego Garcia and Kwajalein.]
Monitor and Control
• Master Control Station
– Responsible for collecting tracking data from the
monitoring stations and calculating satellite orbits and
clock parameters
• 5 Monitoring Stations
– Responsible for measuring pseudorange data. This
orbital tracking network is used to determine the
broadcast ephemeris and satellite clock modeling
• Ground Control Stations
– Responsible for uploading information to the SVs (space vehicles)
GPS Segments
[Diagram: the space, control and user segments - the master control station, monitor stations and ground antennas are linked through the AFSCN (Air Force Satellite Control Network).]
Satellite Constellation
[Map: worldwide GPS ground network - the master control station at Colorado Springs and the alternate master control station at Vandenberg AFB, with ground antennas, monitor stations and National Geospatial-Intelligence Agency (NGA) tracking stations at sites including Fairbanks, USNO Washington D.C., Hawaii, Cape Canaveral, Ascension, Diego Garcia, Kwajalein, New Zealand, Ecuador, Argentina, England, Bahrain, South Africa, South Korea and Tahiti.]
GPS ERROR
• Atmospheric Refraction
1. Ionospheric Refraction
–Sun’s activity,
–Ionospheric thickness & proportions/concentration of ionized
particles,
–Season,
–Actual path (i.e. relative position of satellite & receiver) of the
wave i.e. signal
–Pseudo range errors vary from 0 to 15m at zenithal incidence to
as much as 45m for low incidence signals.
2. Tropospheric Refraction
– The delay depends on temperature, pressure, humidity, and
elevation of satellite.
• Atmospheric Delay
• Satellite Mask Angle
• Sources of Signal Interference
• Multipath Error
• Signal Obstruction
When something blocks the GPS signal.
Areas of Great Elevation Differences
• Canyons
• Mountain Obstruction
• Urban Environments
• Indoors
• Human Error
• Noise, Bias, and Blunder
Applications
• Topo and Locations
• Mapping
• Monitoring
• Volumes
• Photo control
• Construction Control and Stakeout
• Boundaries
• Seismic Stakeout
• Profiles
• Establishing Portable Control Stations (sharing with Total Stations)
• Agriculture - Slope Staking
• Tracking of people, vehicles
• Plate movements
• Sports (boating, hiking,…)
• Archeology
• Public Transport
• Emergency services
  • 1. Introduction • Remote Sensing is an interesting and exploratory science , as it provides images of areas in a fast and “cost-efficient manner, and attempts to demonstrate the “what is happening right now” in a study area. • Remote: because observation is done at a distance without physical contact with the object of interest • Sensing: Detection of energy, such as light or another form of electromagnetic energy
  • 2. Definition • The science and art of obtaining information about an object, area, or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area, or phenomenon under investigation“.(L&K,1994). • The term Remote Sensing means the sensing of the Earth's surface from space by making use of the properties of electromagnetic waves emitted, reflected by the sensed objects, for the purpose of improving natural resources management, land use and the protection of the environment. (UN, 1999)
  • 3. History • Galileo introduced the telescope to astronomy in 1609 • 1827 - first photograph • 1858 - first aerial photograph from a hot air balloon • 1861-1865 - Balloon photography used in American Civil War • 1888 – ‘rocket’ cameras • 1903 - pigeon-mounted camera patented • 1906 - photograph from a kite • 1914-1945 - Plane mounted Cameras WWI, WWII • 1908 —First photos from an airplane • 1909—Dresden International Photographic Exhibition • 1914-1918 — World War I
  • 4. 1858 - First aerial (balloon) photo; picture of Paris Actual Pigeon Pictures
  • 5. San Francisco from a kite, 1906
  • 6. Cont….. • 1957 - Sputnik-1 • 1960 - 1st meteorological satellite ‘TIROS-1’ launched • 1967 - NASA ‘Earth Resource Technology Satellite’ programme • 1972 - ERTS (Landsat) 1 launched... • 1970-1980 : Rapid advances in Digital Image Processing • 1986 : SPOT French Earth Observation Satellite • 1980s : Development of Hyperspectral sensors • 1990s : Global Remote Sensing system
  • 7. Why Remote Sensing? • Systematic data collection • Information about three dimensions of real objects • Repeatability • Global coverage • The only solution sometimes for the otherwise inaccessible areas • Multipurpose information
  • 8. Remote Sensing Process • Energy Source or Illumination (A) - the first requirement for remote sensing is to have an energy source which illuminates or provides electromagnetic energy to the target of interest. • Radiation and the Atmosphere (B) - as the energy travels from its source to the target, it will come in contact with and interact with the atmosphere it passes through. This interaction may take place a second time as the energy travels from the target to the sensor. • Interaction with the Target (c) - once the energy makes its way to the target through the atmosphere, it interacts with the target depending on the properties of both the target and the radiation.
  • 9. • Recording of Energy by the Sensor (D) - after the energy has been scattered by, or emitted from the target, we require a sensor (remote - not in contact with the target) to collect and record the electromagnetic radiation. • Transmission, Reception, and Processing (E) - the energy recorded by the sensor has to be transmitted, often in electronic form, to a receiving and processing station where the data are processed into an image (hardcopy and/or digital).
  • 10. • Interpretation and Analysis (F) - the processed image is interpreted, visually and/or digitally, to extract information about the target which was illuminated. • Application (G) - the final element of the remote sensing process is achieved when we apply the information we have been able to extract from the imagery about the target in order to better understand it, reveal some new information, or assist in solving a particular problem.
  • 11.
  • 12. Applications • Land Use and Land Cover • Geologic and soil • Agriculture • Forestry • Water/snow Resources • Urban and Regional Planning • Wildlife Ecology • Archaeological • Environment Assessment • Natural Disaster • Ocean and weather Monitoring
  • 13. Remote Sensing Sensor Passive sensors- Passive system record energy reflected or emitted by a target illuminated by sun. e.g. normal photography, most optical satellite sensors Active sensors- Active system illuminates target with energy and measure reflection. e.g. Radar sensors,
  • 14. Passive Remote Sensing Doesn’t employ any external source of energy.  Measures either reflected radiation from Sun (can be operated only during daytime) or the emitted radiation from the surface (day/night operation).  Suffers from variable illumination conditions of Sun and influence of atmospheric conditions Active Remote sensing Has its own source of energy  Active sensors emit a controlled beam of energy to the surface and measure the amount of energy reflected back to the sensor.  Controlled illumination signal  Day/night operation
  • 15. Passive sensors : collect electromagnetic radiation in the visible and infra-red part of the spectrum: • Aerial Photographs • Low resolution: Landsat,, SPOT, IRS • High Resolution: Quickbird, IKONOS Active sensors : generate their own radiation: • Air-borne RADAR • Space borne RADAR: ERS 1 / 2, Radarsat • LiDAR (laser scanner)
  • 16. Fundamental Physic of Remote Sensing Remote Sensing relies on the measurement of ElectroMagnetic (EM) energy. The most important source of EM energy is the sun Some sensors detect energy emitted by the Earth itself or provide their own energy (Radar) . All matter reflects, penetrate, observes and emits of EMR in unique way, which called spectral characteristics Two characteristic of electromagnetic radiation are particularly important for understanding remote sensing. These are the wavelength and frequency.
  • 17. The wave length is the length of one wave cycle which can be measured as the distance between successive Wave crests. wave length is usually represented by the lambda wave length is measured in meter (M) or some factors of meters such as nanometers (nn) micrometers(um) , centimeters (cm) Frequency refers to the number of cycles of waves passing a fixed point per unit of time. Frequency is normally measured in hertz (Hz), equivalent to one cycle per second and various multiples of hertz.
  • 18. Wavelength and frequency are related by the following formula • Frequency, ν = c/λ where, λ = wavelength c = speed of light = 3.00 x 108 m/s Therefore, the two are inversely related to each other. The shorter the wave length, the higher the frequency. The longer the wave length, the lower frequency Energy
  • 19. Electro-magnetic spectrum (EMS) • Electro-magnetic spectrum (EMS) is an array of all EMR , which moves velocity of light, characteristic of wavelength and frequency of energy. • The electromagnetic spectrum ranges from the shorter wavelengths (including gamma and x- rays) to the longer wavelengths (including microwaves and broadcast radio waves) • There are several regions of the electromagnetic spectrum which are useful for remote sensing
  • 20. Cosmic rays Gamma rays U-V Infrared Micro-waves TV Radio Electric power Visible spectrum 0.3m 0.4 0.5m 0.6 0.7m 10.0 15.0 W a v e l e n g t h Blue Green Red 300nm 500nm 700nm ‘Optical range’
  • 21. Ultraviolet Range • This radiation is just beyond the violet portion of the visible wavelengths • Some Earth surface materials, primarily rocks and minerals, emit visible light when illuminated by UV radiation. Visible Range • The light which our eyes - our "remote sensors" can detect is part of the visible spectrum. • The visible wavelengths cover a range from approximately 0.4 to 0.7 µm. The longest visible wavelength is red and the shortest is violet .This is the only portion of the spectrum we can associate with the concept of colours.
  • 22. Violet: 0.4 - 0.446 µm, Blue: 0.446 - 0.500 µm Green: 0.500 - 0.578 µm , Yellow: 0.578 - 0.592 µm Orange: 0.592 - 0.620 µm , Red: 0.620 - 0.7 µm ►Visible Blue Band (.45-.52 microns) ►Visible Green Band (.52-.60 microns) ►Visible Red Band (.63-.69 microns) ►Panchromatic Bands (.50-.90 microns)
  • 23. Panchromatic Bands (.50-.90 micrometers) ►Wide range of sensitivity ►Visible to Near IR ►Higher spatial resolution ►Can be combined with other multi-spectral bands. Visible Blue Band (.45-.52 micrometers)  Greatest water penetration  Greatest atmospheric scattering  Greatest absorption  Used for : water depth ,water characteristics detection of subsurface features ,soil and vegetation discrimination
  • 24. Visible Green Band (.52-.60 micrometers) ►Vegetation discrimination ►Urban Infrastructure ►Less affected by atmospheric scattering ►Sediment and Chlorophyll Concentration Visible Red Band (.63-.69 micrometers) Chlorophyll absorption band of healthy green vegetation. Vegetation type Plant condition Least affected by atmospheric scattering Less water penetration but good near surface information ie. Water quality, sediment, and chlorophyll.
  • 25. Infrared Range Infrared region approximately 0.7 µm to 100 µm - more than 100 times as wide as the visible portion! Divided two categories based on their radiation properties - the reflected IR, and the emitted or thermal IR. reflected IR region is used for remote sensing purposes in ways very similar to radiation in the visible portion. The reflected IR covers wavelengths from approximately 0.7 µm to 3.0 µm. The thermal IR region is quite different than the visible and reflected IR portions, as this energy is essentially the radiation that is emitted from the Earth's surface in the form of heat. The thermal IR covers wavelengths from approximately 3.0 µm to 100 µm.
  • 26. Microwave Range • The portion of the spectrum of more recent interest to remote sensing is the microwave region from about 1 mm to 1 m. • This covers the longest wavelengths used for remote sensing. The shorter wavelengths have properties similar to the thermal infrared region while the longer wavelengths approach the wavelengths used for radio broadcasts • Because of their long wavelengths, compared to the visible and infrared, microwaves have special properties that are important for remote sensing. • Longer wavelength microwave radiation can penetrate though cloud, fog, haze etc as the longer wavelengths are not susceptible to atmospheric scattering which affects shorter optical wavelengths
  • 27. Interaction with target • There are three forms of interaction that can take place when energy strikes. • Absorption (A): radiation is absorbed into the target • Transmission (T): radiation passes through a target • Reflection (R): radiation "bounces" off the target and is redirected • The proportions of each interaction will depend on the wavelength of the energy and the material and condition of the feature.
  • 28. Interaction in atmosphere and on land R.S. Instrument Sun Clouds transmitted radiation Scattered radiation* Atmospheric absorption Earth Reflection processes Emission processes Thermal emission Atmospheric emission Reflected radiation scattered radiation** * selective radiation (bluish optical images) ** non-selective radiation (white clouds) Atmosphere Atmospheric interactions
  • 29. Atmospheric Scattering • This occurs when the particles of gaseous molecules present in the atmosphere cause the EM waves to be redirected from the original path • Raleigh scattering : size atmospheric particles < than the wavelengths of incoming radiation • Mie scattering : size atmospheric particles ~ than the wavelengths of incoming radiation • Non-selective scattering : size atmospheric particles > than the wavelengths of incoming radiation
  • 30. Atmospheric windows • Atmospheric windows is that portion of the electromagnetic spectrum that can be transmitted through the atmosphere without any distortion or absorption. Light in certain wavelength regions can penetrate the atmosphere well. These regions are called atmospheric windows. • Those areas of the spectrum which are not severely influenced by atmospheric absorption and thus, are useful to remote sensors, are called atmospheric windows
  • 31.
  • 32. Spectral Reflectance • Spectral reflectance, (p)is the ratio of reflected energy to incident energy as a function of wavelength. • The reflectance characteristics of the earth’s surface features are expressed by spectral reflectance, which is given by: p= ( R / I) x 100 Where,(p)) = Spectral reflectance at a particular wavelength. R= Energy of wavelength reflected from object I= Energy of wavelength incident upon the object
  • 33. Spectral Reflectance Curve for vegetation
  • 34. Factor affecting Spectral Signature on vegetation • Pigmentation absorption (visible light for photosynthesis) • Physiological structure (NIR reflectance) : • Leaf moisture content : (Major :1.4, 1.9 and 2.7μm , Minor : 0.96 & 1.1μm). • Soil background : Dark soil is better distinguishable from vegetation in NIR • Senescent vegetation : Due to aging , crop ripening, rise in reflectance in blue and red wavelengths. • Angular elevation of sun and sensor and Canopy geometry : Reflectance of a rough tree canopy is dependent on solar angle. • Phenological canopy changes (seasonal changes) : e.g. ; For grasslands, Reflectance in red is maximized in autumn and minimized in spring, NIR maximized in summer, minimized in winter.
  • 36. Factor affecting Spectral Signature on vegetation Moisture content Texture Structure
  • 37. Spectral Reflectance Curve for Water a) Ocean water b) Turbid water c) Water with chlorophyll
  • 38. Factor affecting Spectral Signature on Water • Majority of radiant flux incident is either absorbed or transmitted Visible: little absorbed, <5% reflected and rest transmitted NIR: Mostly absorbed • Depth of water • Suspended material within water • Surface roughness
  • 39. Remote Sensing Observation Platform Sensor-the device that actually gathers the remotely sensed data Platform-the device to which the sensor is attached  The vehicles or carriers for remote sensing are called the platform. Based on its altitude above earth surface Typically platform are satellite and aircraft but they can also, aero plane, balloons, kites. platform may be classified as, 1) Ground borne 2) Air borne 3) Space borne
  • 40. Air born and space born platform have been in use in remote sensing of earth resources .the ground based remote sensing system for earth resources studies are mainly used for collecting the ground truth or for laboratory simulation studies.
  • 41. Path An orbit is the course of motion taken by the satellite in space and the ground trace of the orbits called a 'Path‘ Row The lines joining the corresponding scene centers of different paths are parallel to the equator are called ‘Rows’. Orbits The path followed by the satellite is called orbit. Inclined Equatorial Polar
  • 42. Satellite orbital characteristics Altitude • It is the distance(in Km) from the satellite to the mean surface level of the earth. Inclination angle • The angle (in degrees) between the orbit and the equator. Period • It is the time(in minutes) required to complete one full orbit. A polar satellite orbiting at an altitude of 800km has a period of 90mins Repeat Cycle • It is the time (in days) between two successive identical orbits
  • 43. Perigee & Apogee • Perigee: It is the point in the orbit where an earth satellite is closest to the earth. • Apogee: It is the point in the orbit where an earth satellite is farthest from the earth. Swath As satellite revolves around the Earth, the sensor sees a certain portion of the Earth 'surface. The area is known as swath.
  • 44. Ascending pass and Descending pass • The near polar satellites travel north ward on one side of the earth(ascending pass) and towards South Pole on the second half of the orbit(descending pass). •The ascending pass is on the shadowed side while the descending pass is on the sunlit side. •Optical sensors image the surface on a descending pass, while active sensors and emitted thermal and microwave radiation can also image the surface on ascending pass Inclination angle Equator South Pole Ground track Ascending node Orbit Descending node
  • 45. Types of Satellite Orbits Geostationary Orbits
  • 46. These satellite are also know as geosynchronous satellite tor which at an altitude of around 36000 km. above equator orbiting the earth and makes one revolution in 24 hours, synchronous with earth rotation. This platform are covering the same place and give continuous near hemispheric coverage over the same area day and night. Its covers is limit to 70° N to 70° S latitude . these area mainly used for communication and metrological application. GOES (U.S.A) , METEOR (U.S.S.R), GMS (Japan) Uses of Geostationary Orbits Weather satellites : (GOES, METEOSAT, INSAT) Communication : Telephone and television relay satellites Limited spatial coverage
  • 47. Polar orbital satellite( Sun synchronous satellite)
  • 48. • There are earth satellite in which the orbital plane is near polar and altitude (mostly 800-900 km) is such that the satellite passes over all places on earth having the same latitude twice in each orbit at the same local time. Through these satellite the entire globe is covered on regular basis and gives repetitive coverage on periodic basis. all the remote sensing resources satellite may be grouped in this category. LANDSAT, SPOT, IRS
  • 49. Resolution Resolution is defined as the ability of the system to render the information at the smallest discretely separable quantity in term of distance (spatial), wavelength band of EMR(spectral), time(temporal) and radiation quantity (radiometric) Spatial Resolution Spatial Resolution is the projection of a detector element or a slit onto the ground . In the other word scanner’s spatial resolution is the ground segment sensed at any instant. It also called ground resolution element Ground Resolutions = H*IFOV
  • 50. The spatial resolution at which data are acquired has two effects – the ability to identify various feature and quantify their extent
  • 51.
  • 52. Spectral Resolution Spectral Resolutions describes the ability of the sensor to define fine wavelength intervals i.e. sampling the spatially segmented image in different spectral intervals, thereby allowing the spectral irradiance of the image to be determined. Landsat TM imagery band 1 = blue band 2 = green band 3 = red band 4 = near-ir band 5 = mid-ir band 6 = mid-ir
  • 53. Single band 333 Multi band 123 Multi band 453 Multi band 432
  • 54. Radiometric Resolutions • This is the measure of the sensor to differentiate the smallest change in the spectral reflectance between various targets. the digitisation is referred to as quantisation and is expressed as n binary bits. Thus 7 bits digitisation implies 2⁷ or 128 discreet levels(0-127) Low Radiometric resolution High Radiometric resolution
  • 55. Temporal Resolution Temporal resolution is also called as the receptivity of the satellite. It is the capability of the satellite to image to exact same area as the same viewing angle at different period time.
  • 56. Types of Ground Tracking Many electronic remote sensors acquire data using scanning system. A Scanning system used to collect data over a variety of different wavelength ranges is called a multispectral scanner(MSS) is the most commonly used scanning system. There are two main modes of scanning Whisk Broom/ Across track Push Broom/ Along Track
  • 57. Whisk Broom/ Across track • Using a rotating or oscillating mirror , such system scan the terrain along scan lines that are right angles to the flight line. The scanner to repeatedly measure the energy from one side of some 90˚ to 120 ˚.
  • 58.
  • 59. Push Broom/ Along Track • Push Broom/ Along Track scanners record multispectral image data along a swath beneath an aircraft. Also similar is the use of the forward motion of the aircraft to build up a tow dimensional image by recording successive scan line that are oriented at right angle to the flight direction.
  • 73. PHOTOGRAMMETRY The science of quantitative analysis of measurements from photographs. Photos – light; Gramma – to draw; Metron – to measure. Photogrammetry is defined as the art, science and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena. As implied by its name, the science originally consisted of analyzing photographs.
  • 74. WHY PHOTOGRAMMETRY ? • Very precise • Time effective • Cost effective • Based on well established and tested algorithms • Less manual effort • More geographic fidelity • Corrects all sorts of distortions • Provides a reasonable geometric modeling alternative when little is known about the geometric nature of the image data • Provides an integrated solution for multiple images or photographs simultaneously • Achieves reasonable accuracy without a great number of GCPs • Creates a three-dimensional stereo model from which elevation information can be extracted
  • 76. Types of photogrammetry • Terrestrial • Aerial • Satellite
  • 77. BRANCHES OF PHOTOGRAMMETRY Analog photogrammetry In analog photogrammetry, optical or mechanical instruments were used to reconstruct three-dimensional geometry from two overlapping photographs. The main product during this phase was topographic maps
  • 78. Analytical photogrammetry The computer replaces some expensive optical and mechanical components; the resulting devices were analog/digital hybrids. Analytical aerotriangulation, analytical plotters, and orthophoto projectors were the main developments during this phase. Outputs of analytical photogrammetry can be topographic maps, but can also be digital products, such as digital maps and DEMs
  • 79. Digital photogrammetry • Digital photogrammetry is applied to digital images that are stored and processed on a computer. • Digital photogrammetry is sometimes called softcopy photogrammetry. • The output products are in digital form, such as digital maps, DEMs, and digital orthophotos saved on computer storage media.
  • 80. Aerial Photography Factors • Type of aircraft • Camera/focal length • Type of film • Shape of project area/terrain • Location of existing control • Ground conditions • Atmospheric conditions • Restricted areas
  • 81. Standard Aerial Film Types Black & white ► Kodak Double-X Aerographic 2405: black and white medium-to-high speed, standard film for mapping and charting ► Agfa Aviphot Pan 150: fine grain film for high altitudes Color ► Kodak Aerocolor Negative 2445: color negative, high speed film for mapping and reconnaissance Infrared ► Kodak Aerochrome Infrared 2443: false color reversal film, high dimensional stability, for vegetation surveys, camouflage detection and earth resources
  • 82. Overlap between photos 60% and 80% Forward Lap and 30% Side Lap 60% Overlap Stereo Pair Overlap Region
  • 83. Types of photographs Vertical – view straight down; depression angle 85° to 90°. Low-oblique – side view, horizon is not visible; depression angle typically 20°-85°. High-oblique – side view, horizon is visible; depression angle typically less than 20°.
  • 84.
  • 85. Geometry of an aerial photograph
  • 86.
  • 87. Aerial Triangulation (AT). The process of establishing a mathematical relationship between images, the camera or sensor model, and the ground. Air base. The distance between two image exposure stations. Average flying height. The distance between the camera position at the time of exposure and the average ground elevation. Average flying height can be determined by multiplying the focal length by the image scale denominator. Base-height ratio (b/h). The ratio of the air base (the distance between the two overlapping exposure stations) to the average flying height of the camera. Block of photographs. Formed by the combined exposures of a flight. For example, a traditional frame camera block might consist of a number of parallel strips with a sidelap of 20-30% and an overlap of 60%.
  • 88. Bundle. The unit of photogrammetric triangulation after each point measured in an image is connected with the perspective center by a straight light ray. There is one bundle of light rays for each image. Bundle block adjustment. A mathematical technique (triangulation) that determines the position and orientation of each image as they existed at the time of image capture, determines the ground coordinates measured on overlap areas of multiple images, and minimizes the error associated with the imagery, image measurements, and GCPs. This is essentially a simultaneous triangulation performed on all observations. Calibration certificate/report. In aerial photography, the manufacturer of the camera specifies the interior orientation in the form of a certificate or report. Information includes focal length, principal point offset, radial lens distortion data, and fiducial mark coordinates. Check point. An additional ground point used to independently verify the degree of accuracy of a triangulation.
  • 89. Control point. A point with known coordinates in a coordinate system, expressed in the units (e.g., meters, feet, pixels, film units) of the specified coordinate system. Control point extension. The process of converting tie points to control points. This technique requires the manual measurement of ground points on photos of overlapping areas. The ground coordinates associated with the GCPs are then determined using photogrammetric techniques. Exposure station. During image acquisition, each point in the flight path at which the camera exposes the film. Eye-base to height ratio. The eye-base is the distance between a person’s eyes. The height is the distance between the eyes and the image datum. When two images of a stereopair are adjusted in the X and Y direction, the eye-base to height ratio is also changed. Change the X and Y positions to compensate for parallax in the images. Fiducial. Four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure. Fiducials are used to compute the transformation from pixel coordinates to image coordinates.
  • 90. Fiducial center. The center of an aerial photo; the intersection point of lines constructed to connect opposite fiducials. Focal length. The distance between the optical center of the lens and where the optical axis intersects the image plane. Focal length of each camera is determined in a laboratory environment. Focal plane. The plane of the film or scanner used in obtaining an aerial photo. Ground Control Point (GCP). An easily identifiable point for which the ground coordinates of the map coordinate system are known. Nadir. The area on the ground directly beneath a scanner’s detectors. Nadir point. The intersection of the focal axis and the image plane. Parallax. "The apparent angular displacement of an object as seen in an aerial photograph with respect to a point of reference or coordinate system.
  • 91. Perspective center. The optical center of a camera lens. 1. A point in the image coordinate system defined by the x and y coordinates of the principal point and the focal length of the sensor. 2. After triangulation, a point in the ground coordinate system that defines the sensor’s position relative to the ground. Principal point. The point in the image plane onto which the perspective center is projected.
  • 92. Photographic Scale Before a photograph can be used as a map supplement or substitute, it is necessary to know its scale. On a map, the scale is printed as a representative fraction that expresses the ratio of map distance to ground distance, For example: RF=MD/ GD On a photograph, the scale is also expressed as a ratio, but is the ratio of the photo distance (PD) to ground distance. For example: RF=PD/GD
  • 93. • The approximate scale or average scale (RF) of a vertical aerial photograph is determined by either of two methods: the comparison method or the focal length-flight altitude method. The scale of a vertical aerial photograph is determined by comparing the measured distance between two points on the photograph with the measured ground distance between the same two points. The ground distance is determined by actual measurement on the ground or by use of the scale on a map of the same area. The points selected on the photograph must be identifiable on the ground or map of the same area and should be spaced in such a manner that a line connecting them passes through, or nearly through, the center of the photograph
  • 94. scale = f ÷ (H − h); scale = photo distance ÷ ground distance. Example • A camera equipped with a 152 mm focal-length lens is used to take a vertical photograph from a flying height of 2780 m above mean sea level. If the terrain is flat and located at an elevation of 500 m, what is the scale of the photograph?
  • 95. Focal length f = 0.152 m (152/1000); H = 2780 m (flying height); h = 500 m (ground elevation). Scale = f / (H − h) = 0.152 / (2780 − 500) = 0.0000667 ≈ 1:15,000
  • 96. Class Work 1) A camera equipped with a 206 mm focal-length lens is used to take a vertical photograph from a flying height of 5200 m above mean sea level. If the terrain is flat and located at an elevation of 560 m, what is the scale of the photograph?
  • 97. 2) A vertical photograph was taken at a flying height of 5000 m above sea level using a camera with a 152 mm focal length. A) Determine the photo scale at points A and B, which lie at elevations of 1200 m and 1960 m. B) What ground distance corresponds to a 30 mm photo distance measured at each of these elevations?
  • 98. Scale (A) = f / (H − h) = 0.152 / (5000 − 1200) = 1:25,000; Scale (B) = f / (H − h) = 0.152 / (5000 − 1960) = 1:20,000. Ground distance: Distance (A) = 0.030 m (30/1000) ÷ (1/25,000) = 750 m; Distance (B) = 0.030 m (30/1000) ÷ (1/20,000) = 600 m
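A small sketch of the two calculations above (the function names are my own; the numbers are the worked example from the slides):

```python
def photo_scale(focal_length_m: float, flying_height_m: float, terrain_elev_m: float) -> float:
    """Representative fraction of a vertical photo: scale = f / (H - h)."""
    return focal_length_m / (flying_height_m - terrain_elev_m)

def ground_distance(photo_distance_m: float, scale: float) -> float:
    """Ground distance = photo distance / scale (photo distance times the scale denominator)."""
    return photo_distance_m / scale

scale_a = photo_scale(0.152, 5000, 1200)   # 1:25,000
scale_b = photo_scale(0.152, 5000, 1960)   # 1:20,000
print(1 / scale_a, 1 / scale_b)            # 25000.0, 20000.0
print(ground_distance(0.030, scale_a))     # 750.0 m
print(ground_distance(0.030, scale_b))     # 600.0 m
```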
  • 99. Image Displacement On a planimetric map all features/details are shown in their correct horizontal position at a certain scale. This is not so in the case of aerial photographs, due to image displacement or distortion. A disturbance of the principles of photographic geometry is called displacement/distortion. Sources of Distortion and Displacement The main sources of displacement and distortion are optical or photographic deficiencies (film and paper shrinkage, lens aberrations, filter aberrations, failure of the film-flattening mechanism in the camera focal plane, shutter malfunction), image motion, atmospheric refraction of light rays, curvature of the earth, tilt, and topography or relief.
  • 100. Relief Displacement • Shift or displacement in the photographic position of an image caused by the relief of the object. The amount of relief displacement is directly correlated with the height or depth of the object and the distance of the object from the nadir. • This displacement is inversely correlated with the flying altitude of the aircraft above the datum and the focal length used. • Higher altitude photography will produce less relief displacement than lower altitude photography.
  • 101.
  • 102. • Even though relief displacement constitutes a source of error in measuring horizontal distances on vertical aerial photographs, it is not necessarily a nuisance; because of relief displacement, we can determine the height of objects (or differences in elevation between objects) and can see in three dimensions by viewing stereoscopic pairs of vertical aerial photographs.
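The slides do not state the relief-displacement formula explicitly; the standard relation is d = r·h/H (displacement d of an image point at radial distance r from the nadir, for an object of height h photographed from flying height H above the object's base). A hedged sketch that inverts this relation to recover object height, with illustrative numbers:

```python
def object_height_from_displacement(d_mm: float, r_mm: float, flying_height_m: float) -> float:
    """Height of an object from its relief displacement: h = d * H / r,
    where d and r are measured on the photo (same units) and H is the
    flying height above the base of the object."""
    return d_mm * flying_height_m / r_mm

# Illustrative values only: a tower whose top is displaced 2.5 mm, at a
# radial distance of 80 mm from the nadir, photographed from 1500 m.
print(object_height_from_displacement(2.5, 80.0, 1500.0))  # ~46.9 m
```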
  • 103. Displacement due to Tilt • A photograph is considered tilted when the angle between the perpendicular projection through the center of the lens and the plumb line is greater than 3˚. • An aircraft that is not perfectly horizontal causes a rotation of the camera about the x-axis or about the y-axis. • Rotation about the x-axis causes lateral tilt or y-tilt, which is due to the aircraft being wing-up/wing-down, displacing the nadir point along the Y-axis.
  • 104. • Rotation about the y-axis causes longitudinal tilt, x-tilt or list, which is due to the nose of the aircraft being up or down, causing the nadir point to be displaced along the X-axis. • Along the axis of tilt there is no displacement relative to an equivalent untilted photograph, as this is the line where a tilted photograph and an equivalent vertical photograph would match and intersect one another.
  • 105. Image Parallax • The term parallax is given to the relative movement of objects due to movement of the observer • The most obvious evidence of this phenomenon is the apparent shift of nearby objects relative to distant objects when travelling at speed in a vehicle • Parallax is often used to detect when an object is correctly focused in a theodolite or level telescope by moving the eye from side to side - any movement between the object and the crosshairs indicates that the focus is incorrect
  • 106. • For a constant flying height (or constant camera to datum distance) and tilt-free photographs the parallax will be parallel to the base • Parallax will be smaller for more distant objects, or in the case of aerial photography, smaller for points at lower heights • The perception of depth in stereo photographs is dependent on this parallax, as neighboring points at different heights will exhibit different parallaxes • As the parallax shifts apply to all points, a continuous 3D impression of the object is given by the stereo photographs
  • 107. • To obtain stereoscopic coverage of an entire area, the minimum overlap for any photography is 50% A mathematical definition of parallax is : p = x - x' Where x = left conjugate image coordinate x' = right conjugate image coordinate
  • 108. Microwave Remote Sensing • Analyzing the information collected by sensors that operate in the microwave portion of the electromagnetic spectrum is called microwave remote sensing. Wavelength: 1 mm to 1 m • The capability to penetrate through precipitation or into a surface layer increases with longer wavelengths. • Radars operating at wavelengths greater than 2 cm are not significantly affected by cloud cover; however, rain does become a factor at wavelengths shorter than about 4 cm.
  • 110. Radar Bands Commonly Used For Sensing • BAND — WAVELENGTH (cm) — FREQUENCY, GHz (10⁹ cycles/sec) • Ka: 0.75-1.1 cm, 26.5-40 GHz • K: 1.1-1.67 cm, 18-26.5 GHz • Ku: 1.67-2.4 cm, 12.5-18 GHz • X: 2.4-3.8 cm, 8-12.5 GHz • C: 3.8-7.5 cm, 4-8 GHz • S: 7.5-15 cm, 2-4 GHz • L: 15-30 cm, 1-2 GHz • P: 30-100 cm, 0.3-1 GHz
  • 111. • Ka, K, and Ku bands: very short wavelengths used in early airborne radar systems but uncommon today. • X-band: used extensively on airborne systems for military reconnaissance and terrain mapping. • C-band: common on many airborne research systems (CCRS Convair-580 and NASA AirSAR) and spaceborne systems (including ERS-1 and 2 and RADARSAT). • S-band: used on board the Russian ALMAZ satellite. • L-band: used onboard American SEASAT and Japanese JERS-1 & ALOS PALSAR satellites and NASA airborne system. • P-band: longest radar wavelengths, used on NASA experimental airborne research system.
  • 112. Advantages • Time independent. • Weather independent (some areas of Earth are persistently cloud covered). • Penetrates through clouds and, to a high degree, through rain. • Penetrates vegetation, dry soil, dry snow. • Sensitive to moisture in soil, vegetation and snow. • Ability to collect data far away from the flight path. Disadvantages • Large antennas are required • Antennas are heavy and have large power requirements • Interpretation of microwave images is difficult • Geometry is problematic in undulating terrain
  • 113.
  • 114. Passive microwave sensor • Detects the naturally emitted microwave energy within its field of view. This emitted energy is related to the temperature and moisture properties of the emitting object or surface. • Passive microwave sensors are typically radiometers. • Applications: Snow cover mapping, Flood mapping, Soil moisture mapping.
  • 115. Active microwave sensor • It provides its own source of microwave radiation to illuminate the target. • Divided into two categories: imaging (RADAR) and non-imaging (altimeters and scatterometers). • Non-imaging microwave sensors are profiling devices which take measurements in one linear dimension, as opposed to the two-dimensional representation of imaging sensors. • It transmits a microwave (radio) signal towards the target and detects the backscattered portion of the signal.
  • 116. Errors in Remote sensing Images
  • 117.
  • 118. • Remote sensing data (in raw form) as received from imaging sensors mounted on satellites contain flaws or deficiencies. • The correction of deficiencies and removal of flaws present in the data is termed as pre- processing. • Image pre-processing can be classified into three functional categories: • Radiometric corrections • Atmospheric corrections • Geometric correction
  • 119. Radiometric errors These are errors that influence the radiance or radiometric values of a scene element (pixel); they change the value (Digital Number, DN) stored in an image. System errors are minimized by cosmetic corrections; atmospheric errors are minimized by atmospheric corrections. Geometric errors These are errors related to the spatial location of pixels; they change the position of a DN value and are minimized by geometric correction
  • 120. Radiometric errors & their corrections • Causes of radiometric errors • Sensor failures or system noise affect values • The signal travels through the atmosphere, and the atmosphere affects the signal • Sun illumination influences radiometric values • Seasonal changes affect radiometric values • Terrain influences radiance
  • 121. Internal errors:- • Introduced by remote sensing system. • Generally systematic and may be identified. • Corrected based on prelaunch or in flight measurements. External errors:- • Introduced by the phenomena that vary in nature through space and time. • Sources are atmospheric, terrain elevation etc. Radiometric Error sources • Remote sensing • System induced errors by mechanical, electrical or communication failures • Atmosphere induced errors by interaction of EM with atmospheric constituents
  • 122. Random Bad Pixels (Shot Noise) • Sometimes an individual detector does not record spectral data for an individual pixel. When this occurs randomly, it is called a bad pixel. • When there are numerous random bad pixels found within the scene, it is called shot noise because it appears that the image was shot by a shotgun. • Normally these bad pixels contain values of 0 or 255 (in 8-bit data) in one or more of the bands.
  • 123. Random Bad Pixels (correction) • Locate each bad pixel in the band k dataset. • A simple thresholding algorithm makes a pass through the dataset and flags any pixel (BVi,j,k) having a brightness value of zero (assuming values of 0 represent shot noise and not a real land cover such as water). • Once identified, evaluate the eight pixels surrounding the flagged pixel and replace the bad value with their mean, as sketched below:
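A minimal sketch of that neighbour-averaging correction, assuming the band is held in a NumPy array and that a DN of 0 marks shot noise (both assumptions follow the slide, not a specific software package):

```python
import numpy as np

def fix_shot_noise(band: np.ndarray, bad_value: int = 0) -> np.ndarray:
    """Replace isolated bad pixels with the mean of their 8 neighbours."""
    fixed = band.astype(float).copy()
    rows, cols = band.shape
    for i, j in zip(*np.where(band == bad_value)):
        i0, i1 = max(i - 1, 0), min(i + 2, rows)
        j0, j1 = max(j - 1, 0), min(j + 2, cols)
        window = band[i0:i1, j0:j1]
        good = window[window != bad_value]        # ignore other bad pixels in the window
        if good.size:
            fixed[i, j] = good.mean()
    return fixed
```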
  • 124. Dropped lines • Although detectors onboard orbiting satellites are well tested and calibrated before launch, an entire line containing no spectral information may be produced if an individual detector in a scanning system (e.g., Landsat MSS or Landsat 7 ETM+) fails to function properly. • If a detector in a linear array (e.g., SPOT XS, IRS, QuickBird) fails to function, this can result in an entire column of data with no spectral information. Such defects are due to errors in the scanning or sampling equipment, in the transmission or recording of image data, or in the reproduction of CCTs. • The bad line or column is commonly called a line or column drop-out and is seen as a horizontal black (pixel value 0) or white (pixel value 255) line on the image.
  • 125.
  • 126. Dropped lines (corrections) • Correction is a cosmetic operation, since no data are available for the dropped line • It is based on the spatial auto-correlation of continuous physical phenomena (neighboring values tend to be similar) Methods for dropped line correction 1. Replacement (line above or below) 2. Average of the lines above and below (see the sketch below) 3. Replacement based on correlation between bands
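A short sketch of method 2 (averaging the lines above and below a dropped line); the array layout and the choice of 0 as the drop-out value are illustrative assumptions:

```python
import numpy as np

def fix_dropped_lines(band: np.ndarray) -> np.ndarray:
    """Replace fully-zero rows (dropped lines) with the mean of the
    rows immediately above and below."""
    fixed = band.astype(float).copy()
    for i in np.where(~band.any(axis=1))[0]:                 # rows that are entirely zero
        above = band[i - 1] if i > 0 else band[i + 1]
        below = band[i + 1] if i < band.shape[0] - 1 else band[i - 1]
        fixed[i] = (above.astype(float) + below.astype(float)) / 2
    return fixed
```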
  • 127. Striping • Horizontal or vertical (raw data), skewed (processed data) • Visible banding pattern over the whole image • Changed characteristics of the sensor detectors
  • 128. Striping (correction) • To improve the visual appearance • To represent equal ground leaving photon-flux with the same DN Methods for Striping correction 1. Use calibration data No assumptions 2. Parametric histogram matching Assumes equal area per class for each sensor Assumes a linear sensor model and a normal (Gaussian) distribution of the DN values 3. Non-parametric histogram matching Assumes equal area per class for each sensor
  • 129. Atmosphere induced errors HAZE • Scattered light reaching the sensor from the atmosphere • Additive effect, reducing CONTRAST SUN ANGLE • Time/seasonal effect changing the atmospheric path • Multiplicative effect SKYLIGHT • Scattered light reaching the sensor after being reflected from the Earth's surface • Multiplicative effect
  • 130.
  • 131. Haze Correction Dark object subtraction method • Assumption: infrared bands are not affected by Haze • Identify black bodies: clear water and shadow zones with zero reflectance in the infrared bands • Identify DN values at shorter wavelength bands of the same pixel positions. These DN are entirely due to haze • Subtract the minimum of the DN values related to blackbodies of a particular band from all the pixel values of that band
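A hedged sketch of the dark-object subtraction just described: find the minimum DN over pixels assumed to be dark objects and subtract it from the whole band (the band names and the way dark pixels are identified here are illustrative assumptions):

```python
import numpy as np

def dark_object_subtraction(band: np.ndarray, dark_mask: np.ndarray) -> np.ndarray:
    """Subtract the minimum DN found over assumed dark objects (clear water,
    deep shadow) from every pixel of the band, clipping at zero."""
    haze_dn = band[dark_mask].min()
    return np.clip(band.astype(int) - int(haze_dn), 0, None)

# Example: dark pixels located where the NIR band shows (near) zero reflectance
# nir, blue are 2-D DN arrays for the same scene (illustrative names)
# dark_mask = nir < 5
# blue_corrected = dark_object_subtraction(blue, dark_mask)
```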
  • 132. Effects of Sun Illumination • Position of the sun relative to the earth changes depending on time of the day and the day of the year • Solar elevation angle: Time-and location dependent • In the northern hemisphere the solar elevation angle is smaller in winter than in summer • The solar zenith angle is equal to 90 degree minus the solar elevation angle • Irradiance varies with the seasonal changes in solar elevation angle and the changing distance between the earth and sun
  • 133.
  • 134.
  • 135. Geometric Errors These distortions may be due to several factors such as: (i) the rotation of the Earth. (ii) the motion of the scanning system, (iii) the motion of the platform, (iv) the platform altitude and attitude, (v) the curvature of the Earth. • The geometric distortions should be removed for the geometric representation of the satellite imagery as close as possible to the real world. Geometric distortions are: – Systematic – Nonsystematic
  • 136. • Systematic distortions are predictable in nature, and can be accounted for by accurate modeling of the sensor and platform motion, and the geometric relationship of the platform with the Earth. • Non-systematic distortions or random errors can not be modeled and corrected in this way SYSTEMATIC ERRORS – Scan skew – Mirror scan velocity – Panoramic distortions – Platform velocity – Earth rotation – Earth Curvature
  • 137. Scan skew • It is caused by the forward motion of the platform during the time required for each mirror sweep. The ground swath is not normal to the ground track but is slightly skewed, producing cross-scan geometric distortion. The magnitude of correction is 0.082 km for MSS.
  • 138. Mirror scan velocity • The MSS mirror scanning rate is usually not constant across a given scan, producing along-scan geometric distortion. The magnitude of the correction is 0.37 km for MSS. Panoramic distortions For scanners used on space borne and airborne remote sensing platforms the (IFOV) is constant. As a result the effective pixel size on the ground is larger at the extremities of the scan line than at the nadir. It produces along- scan distortion
  • 139. Platform velocity • If the speed of the platform changes the ground track covered by successive mirror scans changes producing along-track scale distortion. Earth rotation • Rotation of earth in West-to-East Direction • Movement of satellite in North- to-South Direction
  • 140. Earth Curvature • Aircraft scanning mechanisms, because of their low altitude and small absolute swath width, are not affected by earth curvature. • Neither are space systems like IRS, Landsat and SPOT, because of the narrowness of their swath. However, wide-swath spaceborne imaging systems are affected. • e.g. NOAA, with a wide swath of 2700 km, is affected by it. At the edges of the swath the area of the earth's surface viewed at a given angular IFOV is larger than it would be if the curvature of the earth were ignored
  • 141. NONSYSTEMATIC ERRORS Platform altitude If the platform departs from its normal altitude, changes in scale occur. Attitude One of the sensor system axes is usually maintained normal to the earth's surface; departures from this orientation introduce geometric distortion
  • 142. Digital Image • An IMAGE is a pictorial representation of an object or a scene. Analog / Digital. What is a Digital Image? • Produced by electro-optical sensors • Composed of tiny equal areas, or picture elements, abbreviated as pixels (or pels), arranged in a rectangular array • With each pixel is associated a number known as the Digital Number (DN), Brightness Value (BV) or grey level, which is a record of the variation in radiant energy in discrete form. • An object reflecting more energy records a higher number for itself on the digital image, and vice versa. • Digital images of an area are captured in different spectral ranges (bands) by sensors onboard a remote sensing satellite. • A pixel is referred to by its column, row and band number.
  • 143. • Digital Image Data Formats Commonly used formats 1) Band Sequential (BSQ) 2) Band Interleaved by Line (BIL) 3) Band Interleaved by Pixel (BIP) • Each of these formats is usually accompanied by "header" and/or "trailer" information, which consists of ancillary data about the date, altitude of the sensor, attitude, sun angle, and so on. • Such information is useful when geometrically or radiometrically correcting the data.
  • 144. Band Sequential Format (BSQ) • Data for a single band for entire scene is written as one file • That is for each band there is separate file
  • 145. Band Interleave by Line (Bil) • Data for all the bands are written as line by line on the same file. • That is • line1, band 1, line 1 band 2, line1 band 3, line2, band 1, line 2 band 2, line2 band 3 , line3, band 1, line 3 band 2, line3 band 3 etc
  • 146. Band Interleave by Pixel (BIP) • data for the pixels in all bands are written together. That is • pixel1 band1 pixel1 band2 pixel1 band3 pixel2 band1 pixel2 band2 pixel 2 band3 pixel3 band1 pixel3 band2 pixel3 band3
  • 147. Advantages / Disadvantages BSQ • if one wanted the area in the center of a scene in four bands, it would be necessary to read into this location in four separate files to extract the desired information. Many researchers like this format because it is not necessary to read "serially" past unwanted information if certain bands are of no value BIL/BIP • It is a useful format if all the bands are to be used in the analysis. If some bands are not of interest, the format is inefficient since it is necessary to read serially past all the unwanted data
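A small sketch of how the three interleavings order the same values on disk, using NumPy axis order as an illustration (the array shapes are assumptions, not a particular product's header layout):

```python
import numpy as np

bands, rows, cols = 3, 2, 2
cube = np.arange(bands * rows * cols).reshape(bands, rows, cols)   # (band, row, col)

bsq = cube                          # BSQ: all of band 1, then all of band 2, ...
bil = cube.transpose(1, 0, 2)       # BIL: line 1 of every band, then line 2, ...
bip = cube.transpose(1, 2, 0)       # BIP: all bands of pixel 1, then pixel 2, ...

print(bsq.ravel())   # band-by-band ordering
print(bil.ravel())   # line-interleaved ordering
print(bip.ravel())   # pixel-interleaved ordering
```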
  • 148. Digital image processing • Digital image processing can be defined as the computer manipulation of digital values contained in an image for the purposes of image correction, image enhancement and feature extraction. • Digital Image Processing A digital image processing system consists of computer Hardware and Image processing software necessary to analyze digital image data
  • 149. • DIGITAL IMAGE PROCESSING SYSTEM FUNCTIONS Data Acquisition/Restoration • {Compensates for data errors, noise and geometric distortions introduced in the images during acquisitioning and recording} i.e Preprocessing (Radiometric and Geometric) Image Enhancement • {Alters the visual impact of the image on the interpreter to improve the information content}
  • 150. Information Extraction • {Utilizes the decision making capability of computers to recognize and classify pixels on the basis of their signatures, Hyperspectral image analysis } Others • {Photogrammetric Information Extraction , Metadata and Image/Map Lineage Documentation , Image and Map Cartographic Composition, Geographic Information Systems (GIS), Integrated Image Processing and GIS , Utilities}
  • 151. Major Commercial Digital Image Processing Systems •ERDAS IMAGINE •Leica Photogrammetry Suite •ENVI •IDRISI •ER Mapper •PCI Geomatica •eCognition •MATLAB •Intergraph
  • 152. RECTIFICATION • is the process of geometrically correcting an image so that it can be represented on a planar surface and conform to other images or to a map, i.e. it is the process by which the geometry of an image is made planimetric. • It is necessary when accurate area, distance and direction measurements are required to be made from the imagery. • It is achieved by transforming the data from one grid system into another grid system using a geometric transformation. • In other words, it is the process of establishing a mathematical relationship between the addresses of pixels in an image and the corresponding coordinates of those pixels on another image, a map, or the ground
  • 153. • Two basic operations must be performed to geometrically rectify a remotely sensed image to a map coordinate system: 1.Spatial Interpolation: The geometric relationship between input pixel location (row & column) and associated map co-ordinates of the same point (x,y) are identified. • •This establishes the nature of the geometric co- ordinate transformation parameters that must be applied to rectify the original input image (x,y) to its proper position in the rectified output image (X,Y). • • Involves selecting Ground Control Points (GCPS) and fitting polynomial equations using least squares technique
  • 154. • GROUND CONTROL POINT (GCP) is a location on the surface of the Earth (e.g., a road intersection) that can be identified on the imagery and located accurately on a map. • There are two distinct sets of coordinates associated with each GCP: source or image coordinates specified in i rows and j columns, and Reference or map coordinates (e.g., x, y measured in degrees of latitude and longitude, or meters in a Universal Transverse Mercator projection). • The paired coordinates (i, j and x, y) from many GCPs can be modeled to derive geometric transformation coefficients.
  • 155. • These coefficients may be used to geometrically rectify the remote sensor data to a standard datum and map projection • Accurate GCPs are essential for accurate rectification • Sufficiently large number of GCPs should be selected • Well dispersed GCPs result in more reliable rectification • GCPs for Large Scale Imagery –Road intersections, airport runways, towers buildings etc. • for small scale imagery –larger features like Urban area or Geological features can be used • NOTE : landmarks that can vary (like lakes, other water bodies, vegetation etc) should not be used. • GCPs should be spread across image • •Requires a minimum number depending on the type of transformation
  • 156. Intensity Interpolation • Pixel brightness value must be determined. • A pixel in the rectified image often requires a value from the input pixel grid that does not fall neatly on a row and column co-ordinate. • For this reason resampling mechanism is used to determine pixel brightness value.
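A hedged sketch of the simplest resampling rule implied here, nearest-neighbour intensity interpolation: each output pixel takes the DN of the input pixel whose centre is closest to the back-projected location. The inverse-mapping function used below is a stand-in for whatever polynomial transformation was fitted to the GCPs:

```python
import numpy as np

def nearest_neighbour_resample(src: np.ndarray, inverse_map, out_shape):
    """For each output cell, back-project to source coordinates and copy the
    DN of the nearest source pixel; cells mapping outside the source get 0."""
    out = np.zeros(out_shape, dtype=src.dtype)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            sr, sc = inverse_map(r, c)                # fitted GCP transform goes here
            i, j = int(round(sr)), int(round(sc))
            if 0 <= i < src.shape[0] and 0 <= j < src.shape[1]:
                out[r, c] = src[i, j]
    return out

# Example inverse mapping: a pure sub-pixel shift of 0.4 pixel in each direction
# rectified = nearest_neighbour_resample(band, lambda r, c: (r + 0.4, c + 0.4), band.shape)
```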
  • 157. Image Enhancement • Image enhancement techniques improve the quality of an image as perceived by a human. These techniques are most useful because many satellite images when examined on a colour display give inadequate information for image interpretation. • Modification of an image to alter its impact on viewer Enhancements are used to make it easier for visual interpretation and understanding of imagery. • Process of making an image more interpretable for a particular application to accentuate certain image features for subsequent analysis or for image display .
  • 158. • Useful since many satellite images give inadequate information for image interpretation. The contrast stretch, density slicing, edge enhancement, and spatial filtering are the more commonly used techniques. • Image enhancement is attempted after the image is corrected for geometric and radiometric distortions RADIOMETRIC ENHANCEMENT Modification of brightness values of each pixel in an image data set independently (Point operations). SPECTRAL ENHANCEMENT Enhancing images by transforming the values of each pixel on a multiband basis SPATIAL ENHANCEMENT Modification of pixel values based on the values of surrounding pixels. (Local operations)
  • 159. Contrast Contrast generally refers to the difference in luminance or grey-level values in an image and is an important characteristic. It can be defined as the ratio of the maximum intensity to the minimum intensity over an image. Reasons for low contrast of image data: I. The individual objects and the background that make up the scene may themselves have a low contrast ratio. II. Scattering of electromagnetic energy by the atmosphere can reduce the contrast of a scene. III. The remote sensing system may lack sufficient sensitivity to detect and record the contrast of the terrain.
  • 160. Contrast Enhancement • Expands the original input values to make use of the total range of the sensitivity of the display device. Contrast enhancement techniques expand the range of brightness values in an image so that the image can be efficiently displayed in a manner desired by the analyst. The density values in a scene are literally pulled farther apart, that is, expanded over a greater range. • The effect is to increase the visual contrast between two areas of different uniform densities. This enables the analyst to discriminate easily between areas initially having a small difference in density.
  • 161. Linear Contrast Enhancement This is the simplest contrast stretch algorithm. The grey values in the original image and the modified image follow a linear relation in this algorithm. A density number in the low range of the original histogram is assigned to extremely black, and a value at the high end is assigned to extremely white. The remaining pixel values are distributed linearly between these extremes.
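A minimal sketch of the linear stretch just described, mapping the observed minimum and maximum DNs to the full 0-255 display range (the 2%/98% percentile variant noted in the comment is a common choice, not taken from the slides):

```python
import numpy as np

def linear_stretch(band: np.ndarray, low: float = None, high: float = None) -> np.ndarray:
    """Linearly map the [low, high] DN range to the 0-255 display range."""
    low = band.min() if low is None else low
    high = band.max() if high is None else high
    stretched = (band.astype(float) - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Percentile-based variant, often more robust to outliers:
# p2, p98 = np.percentile(band, (2, 98)); display = linear_stretch(band, p2, p98)
```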
  • 162. Non Linear Contrast Enhancement • In these methods, the input and output data values follow a non-linear transformation. The general form of the non- linear contrast enhancement is defined by y = f (x), where x is the input data value and y is the output data value. • The non-linear contrast enhancement techniques have been found to be useful for enhancing the colour contrast between the nearly classes and subclasses of a main class. Histogram Equalization This is another non-linear contrast enhancement technique. In this technique, histogram of the original image is redistributed to produce a uniform population density. This is obtained by grouping certain adjacent grey values. Thus the number of grey levels in the enhanced image is less than the number of grey levels in the original image.
  • 163. • In this technique, histogram of the original image is redistributed to produce a uniform population density. • Image analysts must be aware that while histogram equalization often provides an image with the most contrast of any enhancement technique, it may hide much needed information. • If one is trying to bring out information about data in terrain shadows, or there are clouds in your data, histogram equalization may not be appropriate.
  • 164.
  • 165.
  • 166. Spatial Filtering • Spatial filtering is the process of dividing the image into its constituent spatial frequencies and selectively altering certain spatial frequencies to emphasize some image features. • Spatial frequency is defined as the number of changes in brightness value per unit distance for any particular part of an image. If there are very few changes in brightness value over a given area in an image, this is referred to as a low-frequency area. Conversely, if the brightness values change dramatically over short distances, this is an area of high frequency. • It is the process of suppressing (de-emphasizing) certain frequencies and passing (emphasizing) others. • This technique increases the analyst's ability to discriminate detail. • It is a local operation, i.e. a pixel value is modified based on the values surrounding it.
  • 167. • Used for enhancing certain features • Removal of noise. • Smoothening of image • Ability to discriminate detail. The three types of spatial filters used in remote sensor data processing are : Low pass filters, Band pass filters and High pass filters. Spatial Convolution Filtering A linear spatial filter is a filter for which the brightness value (BV i.j) at location i, j in the output image is a function of some weighted average (linear combination) of brightness values located in a particular spatial pattern around the i, j location in the input image. This process of evaluating the weighted neighboring pixel values is called two-dimensional convolution filtering.
  • 168. Filter Types Low Pass Filters • Block high frequency details • Has a smoothening effect on images. • Used for removal of noise • Removal of “salt & pepper” noise • Blurring of image especially at edges. • High Pass Filters • Preserves high frequencies and Removes slowly varying components • Emphasizes fine details • Used for edge detection and enhancement • Edges - Locations where transition from one category to other occurs
  • 169. Low Pass Filters • Mean Filter • Median Filter • Mode Filter • Minimum Filter • Maximum Filter • Olympic Filter High Pass Filtering –Linear • Output brightness value is a function of linear combination of BV’s located in a particular spatial pattern around the i,j location in the input image –Non Linear • use non linear combinations of pixels • Edge Detection - Background is lost • Edge Enhancement • Delineates Edges and makes the shapes and details more prominent •background is not lost.
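A short sketch of the convolution filtering described above: a 3×3 mean (low-pass) kernel and a simple edge-enhancing high-pass kernel applied with SciPy. The specific kernel weights are common textbook choices, not taken from the slides:

```python
import numpy as np
from scipy.ndimage import convolve

low_pass = np.full((3, 3), 1 / 9.0)                    # 3x3 mean filter (smoothing)
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)      # edge-enhancing kernel

def spatial_filter(band: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Two-dimensional convolution of a single band with the given kernel."""
    return convolve(band.astype(float), kernel, mode="nearest")

# smoothed = spatial_filter(band, low_pass)
# edges    = spatial_filter(band, high_pass)
```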
  • 170. Density Slicing Density slicing is the process in which the pixel values are sliced into different ranges, and each range is assigned a single value or colour in the output image. It is also known as level slicing. Density slicing is a digital data interpretation method used in the analysis of remotely sensed imagery to enhance the information gathered from an individual brightness band. It is done by dividing the range of brightnesses in a single band into intervals and then assigning each interval to a colour.
  • 171. Density slicing may thus be used to introduce colour to a single-band image. It is useful in enhancing images, particularly if the pixel values are within a narrow range, and it enhances the contrast between different ranges of pixel values.
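A minimal sketch of density slicing as just described; the breakpoints and output values below are illustrative, not a prescribed colour table:

```python
import numpy as np

def density_slice(band: np.ndarray, breakpoints, output_values) -> np.ndarray:
    """Assign every DN range between consecutive breakpoints a single output value."""
    classes = np.digitize(band, breakpoints)          # 0 .. len(breakpoints)
    return np.asarray(output_values)[classes]

# Illustrative slicing of an 8-bit band into four levels:
# sliced = density_slice(band, breakpoints=[64, 128, 192], output_values=[0, 85, 170, 255])
```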
  • 172. • The multispectral image data are usually strongly correlated from one band to another. The level of a given picture element in one band can, to some extent, be predicted from the level of that same pixel in another band. • Principal component analysis is a pre-processing transformation that creates new images with uncorrelated values from the different input images. This is accomplished by a linear transformation of variables that corresponds to a rotation and translation of the original coordinate system. Principal component analysis
  • 173. • Principal component analysis operates on all bands together. Thus, it alleviates the difficulty of selecting appropriate bands associated with the band ratioing operation. Principal components describe the data more efficiently than the original band reflectance values. The first principal component accounts for a maximum portion of the variance in the data set, often as high as 98%. Subsequent principal components account for successively smaller portions of the remaining variance.
  • 174. The Normalized Difference Vegetation Index (NDVI) • The Normalized Difference Vegetation Index (NDVI) is a numerical indicator that uses the visible and near- infrared bands of the electromagnetic spectrum, and is adopted to analyze remote sensing measurements and assess whether the target being observed contains live green vegetation or not. • The Normalised Difference Vegetation Index (NDVI) gives a measure of the vegetative cover on the land surface over wide areas. Dense vegetation shows up very strongly in the imagery, and areas with little or no vegetation are also clearly identified. NDVI also identifies water and ice
  • 175. • Generally, healthy vegetation will absorb most of the visible light that falls on it and reflect a large portion of the near-infrared light. Unhealthy or sparse vegetation reflects more visible light and less near-infrared light. Bare soils, on the other hand, reflect moderately in both the red and infrared portions of the electromagnetic spectrum. • The NDVI algorithm subtracts the red reflectance values from the near-infrared and divides the result by the sum of the near-infrared and red bands. • The method, developed by NASA, is known as the Normalized Difference Vegetation Index (NDVI) and is given by the equation (NIR − RED) / (NIR + RED),
  • 176. NDVI = (NIR − RED) / (NIR + RED), where RED and NIR correspond to channels 1 and 2 respectively. By normalizing the difference in this way, the values can be scaled between -1 and +1. This also reduces the influence of atmospheric absorption. Water typically has an NDVI value less than 0, bare soils between 0 and 0.1, and vegetation over 0.1.
  • 177. Cover Types (RED, NIR, NDVI): Dense vegetation: RED 0.1, NIR 0.5, NDVI 0.7; Dry bare soil: RED 0.269, NIR 0.283, NDVI 0.025; Clouds: RED 0.227, NIR 0.228, NDVI 0.002; Snow and ice: RED 0.375, NIR 0.342, NDVI -0.046; Water: RED 0.022, NIR 0.013, NDVI -0.257
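A minimal NDVI sketch matching the equation above; the small epsilon guard against division by zero is an implementation detail added here:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index = (NIR - RED) / (NIR + RED)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)   # epsilon avoids divide-by-zero

# Reproducing the dense-vegetation row of the table above:
print(ndvi(np.array([0.5]), np.array([0.1])))   # ~0.67 (tabulated as ~0.7)
```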
  • 178. IMAGE CLASSIFICATION • The overall objective of image classification is to automatically categorize all pixels in an image into land cover classes or themes. • Normally, multispectral data are used to perform the classification, and the spectral pattern present within the data for each pixel is used as the numerical basis for categorization. • That is, different feature types manifest different combinations of DNs based on their inherent spectral reflectance and emittance properties.
  • 179. • The term classifier refers loosely to a computer program that implements a specific procedure for image classification. • Over the years scientists have devised many classification strategies, and from these alternatives the analyst must select the classifier that will best accomplish a specific task. • At present it is not possible to state that a given classifier is "best" for all situations, because the characteristics of each image and the circumstances of each study vary so greatly. It is therefore essential that the analyst understand the alternative strategies for image classification
  • 180. • The traditional methods of classification mainly follow two approaches: unsupervised and supervised Unsupervised Classification • Unsupervised classifiers do not utilize training data as the basis for classification. Rather, this family of classifiers involves algorithms that examine the unknown pixels in an image and aggregate them into a number of classes based on the natural groupings or clusters present in the image values. • The classes that result from unsupervised classification are spectral classes because they are based solely on the natural groupings in the image values, the identity of the spectral classes will not be initially known.
  • 181.
  • 182. • There are numerous clustering algorithms that can be used to determine the natural spectral groupings present in a data set. One common form of clustering, called the "K-means" approach, also known as ISODATA (Iterative Self-Organizing Data Analysis Technique), accepts from the analyst the number of clusters to be located in the data. Advantages • 1. No extensive prior knowledge of the region is required. • 2. The opportunity for human error is minimized. • 3. Unique classes are recognized as distinct units
  • 183. Disadvantages and limitations • Unsupervised classification identifies spectrally homogeneous classes within the data; these classes do not necessarily correspond to the informational categories that are of interest to analyst. As a result, the analyst is faced with the problem of matching spectral classes generated by the classification to the informational classes that are required by the ultimate user of the information. • Spectral properties of specific information classes will change over time (on a seasonal basis, as well as over the year). As a result, relationships between informational classes and spectral classes are not constant and relationships defined for one image can seldom be extended to others
  • 184. Supervised classification • Supervised classification can be defined as the process of using samples of known identity to classify pixels of unknown identity. Samples of known identity are those pixels located within training areas; pixels located within these areas form the training samples used to guide the classification algorithm in assigning specific spectral values to the appropriate informational class. • The basic steps involved in a typical supervised classification procedure are 1. The training stage 2. Feature selection 3. Selection of appropriate classification algorithm 4. Post classification smoothening 5. Accuracy assessment
  • 185. Training data • Training fields are areas of known identity delineated on the digital image, usually by specifying the corner points of a rectangular or polygonal area using line and column numbers within the coordinate system of the digital image. The analyst must, of course, know the correct class for each area. KEY CHARACTERISTICS OF TRAINING AREAS • Shape • Location • Number • Placement • Uniformity
  • 186.
  • 187. • Various supervised classification algorithms may be used to assign an unknown pixel to one of a number of classes. The choice of a particular classifier or decision rule depends on the nature of the input data and the desired output. Among the most frequently used classification algorithms are the parallelepiped, minimum distance, and maximum likelihood decision rules. 1.Parallelepiped Classification • It is a very simple supervised classifier. Here two image bands are used to determine the training area of the pixels in each band based on maximum and minimum pixel values.
  • 188. • Although the parallelepiped classifier is one of the simplest and fastest classification techniques, it is not the most widely used. It leaves many unclassified pixels and can also have overlap between training classes. The data values of the candidate pixel are compared to upper and lower limits. • [Pixels inside the rectangle (defined in standard deviations) are assigned the value of that class signature. • Pixels outside the rectangle (defined by standard deviations) are assigned a value of zero (NULL). • Disadvantages: poor accuracy, and potential for a large number of pixels classified as NULL. • Advantages: a speedy algorithm useful for quick results.]
  • 190. 2. Minimum distance to means classification • It is based on the minimum-distance decision rule, which calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each class sample, then assigns the candidate pixel to the class having the minimum spectral distance. • [Every pixel is assigned to a category based on its distance from the cluster means. • Standard deviation is not taken into account. • Disadvantages: generally produces poorer classification results than maximum likelihood classifiers. • Advantages: useful when a quick examination of a classification result is required.]
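A minimal sketch of the minimum-distance-to-means rule: compute class means from training samples, then assign each pixel to the nearest mean in Euclidean spectral distance (the array shapes and variable names are illustrative assumptions):

```python
import numpy as np

def minimum_distance_classify(pixels: np.ndarray, class_means: np.ndarray) -> np.ndarray:
    """pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands).
    Returns the index of the nearest class mean for every pixel."""
    # Euclidean spectral distance from every pixel to every class mean
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Example with two bands and two classes (means would come from training areas):
# means = np.array([[30.0, 80.0], [120.0, 40.0]])
# labels = minimum_distance_classify(image.reshape(-1, 2), means).reshape(image.shape[:2])
```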
  • 192. 3.Maximum Likelihood classification • This Classification uses the training data by means of estimating means and variances of the classes, which are used to estimate probabilities and also consider the variability of brightness values in each class. • This classifier is based on Bayesian probability theory. It is the most powerful classification methods when accurate training data is provided and one of the most widely used algorithm. • [Pixels inside of a stated threshold (Standard Deviation Ellipsoid) are assigned the value of that class signature. • Pixels outside of a stated threshold (Standard Deviation Ellipsoid) are assigned a value of zero (NULL).
  • 193. Disadvantages: Much slower than the minimum distance or parallelepiped classification algorithms. The potential for a large number of NULL. Advantages: more “accurate” results (depending on the quality of ground truth, and whether or not the class has a normal distribution).] •
  • 195. Supervised Classification • Classes are pre-defined • Classification errors can be identified • Classes are based on information categories • Selected training data may not be sufficient, and collecting it is time consuming and tedious • Defined classes may not match natural spectral classes Unsupervised Classification • Classes are unknown beforehand • No training-related classification errors • Classes are based on spectral properties • Clusters may be unidentifiable or unexpected • Posterior cluster identification is time consuming and tedious
  • 196. Fuzzy Classifications Soft or fuzzy classifiers - a pixel does not belong fully to one class but has different degrees of membership in several classes. The mixed-pixel problem is more pronounced in lower-resolution data. In fuzzy classification, or pixel unmixing, the proportion of the land cover classes within a mixed pixel is calculated. For example, a vegetation classification might include a pixel with grades of 0.68 for class "forest", 0.29 for "street" and 0.03 for "grass".
  • 197.
  • 198. 0 – 30 -> Water 30 - 60 -> Forest wetland 60-90 -> Upland Forest Hard classification Fuzzy classification
  • 199. ACCURACY ASSESSMENT Classification Accuracy Assessment • Quantitatively assessing classification accuracy requires the collection of some in situ data or a priori knowledge about some parts of the terrain, which can then be compared with the remote sensing derived classification map. • Thus to assess classification accuracy it is necessary to compare two classification maps: 1) the remote sensing derived map, and 2) the assumed true map (which may in fact contain some error). The assumed true map may be derived from in situ investigation or, quite often, from the interpretation of remotely sensed data obtained at a larger scale or higher resolution.
  • 200. Classification Error Matrix • One of the most common means of expressing classification accuracy is the preparation of a classification error matrix, sometimes called a confusion or contingency table. Error matrices compare, on a category-by-category basis, the relationship between known reference data (ground truth) and the corresponding results of an automated classification. • Such matrices are square, with the number of rows and columns equal to the number of categories whose classification accuracy is being assessed. The table below shows an error matrix that an image analyst has prepared to determine how well a classification has categorized a representative subset of pixels used in the training process of a supervised classification. This matrix stems from classifying the sampled training-set pixels and listing the known cover types used for training (columns) versus the pixels actually classified into each land cover category by the classifier (rows).
  • 201. Classification (rows) vs. Ground Truth (columns)
  LULC    DF  OF  TC  SC  DC  BA  GS  SN  WB  WE  BU  Total
  DF      10   3   0   0   0   0   0   0   0   0   0   13
  OF       1  15   2   0   0   0   0   0   0   0   0   18
  TC       1   1  29   0   0   0   1   0   0   0   0   32
  SC       0   0   0   8   1   0   0   0   0   0   0    9
  DC       0   0   0   0   4   0   0   0   0   0   1    5
  BA       0   0   0   0   0   4   0   0   0   0   0    4
  GS       0   0   1   0   0   0   1   0   0   0   0    2
  SN       0   0   0   0   0   0   0   3   0   0   0    3
  WB       0   0   0   0   0   0   0   0   5   0   0    5
  WE       0   0   0   0   0   0   0   0   0   1   0    1
  BU       0   0   0   0   0   0   0   0   0   0   4    4
  Total   12  19  32   8   5   4   2   3   5   1   5   96
  Overall accuracy 87.5%
  PA  83.33  78.95  90.63  100.0  80.0  100.0  50.0  100.0  100  100.0  80.0
  UA  76.92  83.33  90.63  88.89  80.0  100.0  50.0  100.0  100  100.0  100.0
  Kappa 0.85
  • 202. Kappa coefficient • Discrete multivariate techniques have been used to statistically evaluate the accuracy of remote sensing derived maps and error matrices since 1983 and are widely adopted. These techniques are appropriate because remotely sensed data are discrete rather than continuous and are binomially or multinomially distributed rather than normally distributed. • Kappa analysis is a discrete multivariate technique for accuracy assessment. Kappa analysis yields a Khat statistic that is a measure of agreement or accuracy. The Khat statistic is computed as Khat = (N × Σ xii − Σ (xi+ × x+i)) / (N² − Σ (xi+ × x+i)), with the sums taken over i = 1 to r • where r is the number of rows in the matrix, xii is the number of observations in row i and column i, xi+ and x+i are the marginal totals for row i and column i respectively, and N is the total number of observations.
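A short sketch computing overall accuracy and the Khat statistic from a square error matrix, matching the formula above (the small 2-class matrix in the example is illustrative, not the 11-class table shown earlier):

```python
import numpy as np

def accuracy_and_kappa(matrix: np.ndarray):
    """matrix[i, j] = pixels classified as class i whose reference class is j."""
    n = matrix.sum()
    diag = np.trace(matrix)
    row_tot = matrix.sum(axis=1)            # x_i+
    col_tot = matrix.sum(axis=0)            # x_+i
    chance = (row_tot * col_tot).sum()      # sum of x_i+ * x_+i
    overall = diag / n
    khat = (n * diag - chance) / (n ** 2 - chance)
    return overall, khat

# Illustrative 2-class error matrix:
m = np.array([[35, 5],
              [ 4, 56]])
print(accuracy_and_kappa(m))   # (~0.91 overall accuracy, ~0.81 kappa)
```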
  • 203. Data Fusion is a process dealing with data and information from multiple sources to achieve refined/improved information for decision-making Often, in case of data fusion, not only remote sensing images are fused but also further ancillary data (e.g. topographic maps, GPS coordinates, geophysical information etc.) contribute to the resulting image. Image Fusion is the combination of two or more different images to form a new image by using a certain algorithm. Data Fusion Image Fusion IMAGE FUSION
  • 204. • Image fusion is the process of combining images from different sources and related information from associated datasets to produce a resultant image that retains the most desirable information of each of the input images. Why image fusion • Most sensors operate in two modes: multispectral mode and panchromatic mode. • Usually the multispectral mode has a better spectral resolution than the panchromatic mode. • For most satellite sensors the panchromatic mode has a better spatial resolution than the multispectral mode. To combine the advantages of the spatial and spectral resolutions of two different sensors, image fusion techniques are applied
  • 207. Aim and use of Image Fusion • Sharpening of images • improvement of spatial resolution • preservation of the interesting spectral resolution • Enhance features not visible in either of the single datasets alone • improved interpretation results, due to the different physical characteristics of the data • Change detection • Substitute missing information in one image with signals from another sensor image (e.g. clouds in NIR and shadows in SAR)
  • 208. Image Interpretation • Image interpretation is defined as the act of examining images to identify objects and judge their significance. An interpreter studies remotely sensed data and attempts, through a logical process, to detect, identify, measure and evaluate the significance of environmental and cultural objects, patterns and spatial relationships. It is an information extraction process. Methods: on hard copy, on digital image • Why is image interpretation more difficult than everyday visual interpretation? – loss of sense of depth – different viewing perspective – different scale
  • 209. ELEMENTS OF IMAGE INTERPRETATION
  • 210. Shape General form/structure of objects: regular/irregular. Numerous components of the environment can be identified with reasonable certainty merely by their shape. This is true of both natural features and man-made objects.
  • 211. Size FUNCTION OF SCALE : Relative size is important In many cases, the length, breadth, height, area and/or volume of an object can be significant, whether these are surface features (e.g. different tree species). The approximate size of many objects can be judged by comparisons with familiar features(e.g. roads) in the same scene.
  • 212. Tone RELATIVE BRIGHTNESS OR COLOR We have seen how different objects emit or reflect different wavelengths and intensities of radiant energy. Such differences may be recorded as variations of picture tone, colour or density, which enable discrimination of many spatial variables, for example different crop types on land, or water bodies of contrasting depths or temperatures at sea. The terms 'light', 'medium' and 'dark' are used to describe variations in tone.
  • 213. Pattern • Spatial arrangement of visibly discernible objects • Repetitive patterns of both natural and cultural features are quite common, which is fortunate because much image interpretation is aimed at the mapping and analysis of relatively complex features rather than the more basic units of which they may be composed. Such features include agricultural complexes (e.g. farms and orchards) and terrain features (e.g. alluvial river valleys and coastal plains).
  • 214. Texture Arrangement and frequency of tonal variation (closely associated with tone): smooth, irregular. The same tone but different textures are possible! Texture is an important image characteristic, closely associated with tone in the sense that it is a quality that permits two areas of the same overall tone to be differentiated on the basis of microtonal patterns. Common image textures include smooth, rippled, mottled, lineated and irregular. Unfortunately, texture analysis tends to be rather subjective, since different interpreters may use the same terms in slightly different ways. Texture is rarely the only criterion of identification or correlation employed in interpretation. More often it is invoked as the basis for a subdivision of categories already established using more fundamental criteria. For example, two rock units may have the same tone but different textures.
  • 216. Site / Association • Relationship with other recognizable features in proximity to the target of interest. • At an advanced stage of image interpretation, the location of an object with respect to terrain features or other objects may be helpful in refining its identification and classification. For example, some tree species are found more commonly in one topographic situation than in others, while in industrial areas the association of several clustered, identifiable structures may help us determine the precise nature of the local enterprise. For example, the combination of one or two tall chimneys, a large central building, conveyors, cooling towers and solid-fuel piles points to the correct identification of a thermal power station.
  • 218. Shadow Hidden profiles may be revealed in silhouette (e.g. the shapes of buildings or the forms of field boundaries). Shadows are especially useful in geomorphological studies where micro relief features may be easier to detect under conditions of low-angle solar illumination than when the sun is high in the sky. Unfortunately, deep shadows in areas of complex detail may obscure significant features, e.g. the volume and distribution of traffic on a city street.
  • 219. Resolution Capability to distinguish two closely spaced objects. The resolution of a sensor system may be defined as its capability to discriminate two closely spaced objects from each other. More than most other picture characteristics, resolution depends on aspects of the remote sensing system itself, including its nature, design and performance, as well as the ambient conditions during the sensing programme and the subsequent processing of the acquired data. An interpreter must know the resolution of the various remote sensing data products (a rough numerical check is sketched below).
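As a rough, hedged illustration of what "closely spaced" means in practice, the sketch below applies a common rule of thumb (not stated in the slides) that two objects need to be separated by at least about two ground-sample distances (GSD) to appear as distinct features. The threshold factor and function name are assumptions for illustration only.

```python
def resolvable(separation_m, gsd_m, factor=2.0):
    """Rule-of-thumb check: can two objects be told apart at this resolution?

    separation_m : distance between the two objects on the ground (metres)
    gsd_m        : ground sample distance (pixel size) of the sensor (metres)
    factor       : illustrative sampling margin of about two pixels
    """
    return separation_m >= factor * gsd_m

# Two buildings 25 m apart: separable on 10 m imagery, not on 30 m imagery.
print(resolvable(25, 10))   # True
print(resolvable(25, 30))   # False
```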
  • 220. Global Positioning System (GPS) • In 1973 the U.S. DOD decided to establish, develop, test, acquire, and deploy a spaceborne Global Positioning System (GPS), resulting in NAVSTAR GPS (NAVigation Satellite Timing And Ranging Global Positioning System). • It is an all-weather, space-based navigation system developed by the U.S. DOD to satisfy the requirement for the military forces to determine their position, velocity, and time accurately in a common reference system, anywhere on or near the Earth, on a continuous basis.
  • 221. • The Global Positioning System (GPS) is a space-based satellite navigation system that provides location and time information in all weather conditions, anywhere on or near the Earth where there is a clear line of sight to four or more GPS satellites. • GPS was developed by the US Department of Defense to supply accurate navigation. • GPS uses satellites and computers to compute positions anywhere on Earth. • 24 satellites are active in space, broadcasting the signals from which geographic position is determined. • A three-dimensional fix (latitude, longitude and altitude) requires signals from at least 4 satellites (see the positioning sketch below).
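To illustrate why at least four satellites are needed (three position coordinates plus the receiver clock bias), the sketch below solves the classical pseudorange equations by iterative least squares. The function name, starting point and iteration count are illustrative; this is a minimal sketch, not a production GPS solver.

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    """Estimate receiver ECEF position and clock bias from >= 4 pseudoranges.

    sat_pos      : (N, 3) satellite ECEF coordinates in metres (N >= 4)
    pseudoranges : (N,) measured pseudoranges in metres
    Returns [x, y, z, clock_bias] with the clock bias expressed in metres.
    """
    x = np.zeros(4)                          # start at Earth's centre, zero bias
    for _ in range(iters):
        diff = sat_pos - x[:3]               # receiver-to-satellite vectors
        rho = np.linalg.norm(diff, axis=1)   # geometric ranges
        residual = pseudoranges - (rho + x[3])
        # Jacobian: d(rho + b)/d(x, y, z) = -(unit vector to satellite), d/db = 1
        H = np.hstack([-diff / rho[:, None], np.ones((len(rho), 1))])
        dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
        x += dx
    return x
```

Each pseudorange constrains one unknown, so with four unknowns (x, y, z and the clock bias) at least four satellites are required; additional satellites over-determine the solution and improve accuracy.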
  • 222. GPS General Characteristics • Developed by the US DOD • Provides – Accurate Navigation • 10 - 20 m – Worldwide Coverage – 24 hour access – Common Coordinate System • Designed to replace existing navigation systems • Accessible by Civil and Military
  • 223. • Development cost estimate: ~$12 billion • Annual operating cost: ~$400 million • 3 Segments: – Space: satellites – User: receivers – Control: monitor & control stations • Prime Space Segment contractor: Rockwell International (whose aerospace and defense business is now part of Boeing) • Coordinate Reference: WGS-84 • Operated by US Air Force Space Command (AFSPC) – Mission control center operations at Schriever (formerly Falcon) AFB, Colorado Springs
  • 224. GPS System Components • Space Segment: NAVSTAR (NAVigation Satellite Timing And Ranging), 24 satellites (around 30 in the constellation) at 20,200 km altitude • Control Segment: 1 master station and 5 monitoring stations • User Segment: receivers that pick up the satellite signals
  • 225. Space Segment • 24 Satellites – 4 satellites in each of 6 orbital planes, inclined at 55 degrees to the equator • 20,200 km above the Earth • 12-hour orbits – each satellite is in view for 4-5 hours • Designed to last 7.5 years • Different classifications – Block 1, 2, 2A, 2R & 2F
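As a quick consistency check (not in the slides), Kepler's third law links the quoted 20,200 km altitude to the quoted 12-hour orbit; the constants below are standard published values.

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0      # Earth's equatorial radius, m
ALTITUDE = 20_200_000.0    # nominal GPS orbital altitude, m

a = R_EARTH + ALTITUDE                        # semi-major axis of a circular orbit
period = 2 * math.pi * math.sqrt(a**3 / MU)   # Kepler's third law
print(f"Orbital period ≈ {period / 3600:.2f} hours")   # ≈ 11.97 hours
```

The result, about 11 hours 58 minutes (half a sidereal day), is why each satellite repeats nearly the same ground track twice per day and remains in view for roughly 4-5 hours per pass.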
  • 226. Control Segment • Master Control Station at Colorado Springs • Monitor stations and ground antennas at Hawaii, Ascension Island, Diego Garcia and Kwajalein • Together these sites monitor and control the satellites
  • 227. • Master Control Station – Responsible for collecting tracking data from the monitoring stations and calculating satellite orbits and clock parameters • 5 Monitoring Stations – Responsible for measuring pseudorange data; this orbital tracking network is used to determine the broadcast ephemeris and to model the satellite clocks • Ground Control Stations – Responsible for uploading information to the SVs (space vehicles)
  • 228. GPS Segments (world map of the space, control and user segments) • Master Control Station (MCS): Colorado Springs; Alternate Master Control Station (AMCS): Vandenberg AFB • Ground Antennas (GA): Cape Canaveral, Ascension, Diego Garcia, Kwajalein • Air Force monitor stations (MS) and National Geospatial-Intelligence Agency (NGA) tracking stations: Hawaii, Fairbanks, USNO Washington D.C., New Zealand, Ecuador, Argentina, England, Bahrain, South Africa, South Korea, Tahiti
  • 229. GPS Errors • Atmospheric Refraction 1. Ionospheric Refraction – depends on the Sun's activity – the ionosphere's thickness and the proportion/concentration of ionized particles – the season – the actual signal path (i.e. the relative position of satellite and receiver) – Pseudorange errors vary from 0-15 m at zenithal incidence to as much as 45 m for low-incidence signals (a dual-frequency mitigation sketch follows this slide) 2. Tropospheric Refraction – the delay depends on temperature, pressure, humidity, and the elevation of the satellite.
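Because the ionospheric delay is frequency-dependent, dual-frequency receivers can remove most of it by forming the standard ionosphere-free combination of the L1 and L2 pseudoranges. This mitigation is not described in the slides; the frequencies below are the published GPS L1/L2 carrier frequencies and the function name is illustrative.

```python
F1 = 1575.42e6   # GPS L1 carrier frequency, Hz
F2 = 1227.60e6   # GPS L2 carrier frequency, Hz

def iono_free_pseudorange(p1, p2):
    """First-order ionosphere-free combination of dual-frequency pseudoranges.

    p1, p2 : pseudoranges (metres) measured on L1 and L2 respectively
    The first-order ionospheric delay scales as 1/f**2, so this weighted
    difference cancels it (at the cost of slightly amplified noise).
    """
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)
```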
  • 230. • Atmospheric Delay • Satellite Mask Angle • Sources of Signal Interference • Multipath Error • Signal Obstruction – when something blocks the GPS signal: areas of great elevation differences (canyons, mountains), urban environments, indoors • Human Error • Noise, Bias, and Blunder
  • 231. Applications • Topo and Locations • Mapping • Monitoring • Volumes • Photo control • Construction Control and Stakeout • Boundaries • Seismic Stakeout • Profiles • Establishing Portable Control Stations (sharing with Total Stations) • Agriculture - Slope Staking • Tracking of people, vehicles • Plate movements • Sports (boating, hiking,…) • Archeology • Public Transport • Emergency services