Remote sensing involves acquiring data about objects through measurements made at a distance without direct contact. It uses sensors on platforms like satellites and aircraft to measure electromagnetic radiation reflected or emitted from the Earth's surface. There are various sensor types that measure different portions of the electromagnetic spectrum. Image processing involves enhancing images and extracting information through techniques like pre-processing, classification, and change detection. Pre-processing corrects errors and artifacts in raw images through steps like radiometric correction, geometric correction, and atmospheric correction. Classification involves categorizing pixels into land cover classes using methods like supervised classification, which relies on user-defined training data, and unsupervised classification, which groups pixels automatically.
Remote Sensing Basics and Image Processing Techniques
1. Basics of Remote Sensing and Image Processing
Mehul R. Pandya
Scientist
AED, BPSG, EPSA
Space Applications Centre, Ahmedabad
2. Remote Sensing
• Remote sensing is the science of deriving inferences about objects from measurements made at a distance, without coming into physical contact with the objects under study.
• It is the process of studying the interaction of EM radiation with different objects (land, water, atmosphere, etc.) using an instrument on a distant platform, without physical contact.
4. Components of Remote Sensing and Applications
• Development of Sensor
• RS Observations
• Acquisition of data from space
• Data reception
• Data Product Generation
• Image Processing
• Applications
• Societal Benefits
6. Various sensor types used in RS Applications
• Passive, optical (UV, Visible, NIR, SWIR, Thermal IR):
  – Radiometers (in visible, near & thermal infrared)
  – Imaging spectrometer
• Active, optical: LIDAR
• Passive, microwave: Multi-frequency Microwave Radiometer
• Active, microwave (Ka, K, Ku, X, C, S, L, P bands):
  – Synthetic Aperture RADAR (SAR)
  – Scatterometer
  – Altimeter
7. What is an image?
• Data that are organized in a grid of columns (X-axis) and rows (Y-axis)
• Usually represents a geographical area
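The grid idea above can be sketched in a few lines of NumPy (the array contents here are arbitrary illustrative values, not real sensor data):

```python
import numpy as np

# A minimal sketch: a single-band 8-bit "image" is just a 2D grid of
# digital numbers (DNs), indexed by (row, column).
rows, cols = 4, 5
image = np.arange(rows * cols, dtype=np.uint8).reshape(rows, cols)

print(image.shape)  # (4, 5): rows (Y-axis) by columns (X-axis)
print(image[2, 3])  # DN of the pixel at row 2, column 3
```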
13. Example of different spatial resolutions
• AWiFS: 56 m
• LISS-III: 24 m
• LISS-IV: 5.8 m
14. Swath
• Sensors collect 2D images of the surface in a swath below the sensor
• Examples: IRS-AWiFS has a 740 km swath; Landsat has a 185 km swath
15. Spectral resolution: Measuring Light in Bands
• Human eyes only 'measure' visible light
• Sensors can measure other portions of the EM spectrum in discrete bands
16. Spectral signatures: the basis for discriminating various Earth surface features
[Figure: reflectance (%) vs. wavelength (0.4–2.4 µm) curves for water (shallow/deep), vegetation, silty clay soil, and muck soil]
[Figure: blue (0.4–0.5 µm), green (0.5–0.6 µm), red (0.6–0.7 µm), and near-IR (0.7–0.9 µm) bands combined into true-color and false-color composites of a scene containing sand, vegetation, and water]
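Spectral signatures like these are what band indices exploit. As a hedged sketch (the reflectance numbers below are illustrative values read off typical signature curves, not measurements), vegetation's high NIR and low red reflectance gives a high Normalized Difference Vegetation Index, while water goes negative:

```python
import numpy as np

# Hypothetical reflectance values (%) for three surface types:
#            vegetation, soil, water  (illustrative, not measured)
red = np.array([8.0, 20.0, 4.0])
nir = np.array([45.0, 28.0, 1.0])

# NDVI exploits the red/NIR contrast seen in the signature curves.
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))  # vegetation high, soil moderate, water negative
```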
17. Multi Temporal Observation
[Figure: the same scene observed at different times of day (5:30, 8:30, 11:30, 16:00) and on multiple dates through the year (25 Jun, 29 Sep, 09 Oct, 13 Oct, 14 Nov, 04 Dec, 17 Jan, 13 Feb, 17 Mar, 02 Apr, 05 May, 25 May)]
20. What is pre-processing?
• Every "raw" remotely sensed image contains a number of artifacts and errors
• Correcting such errors and artifacts before further use is termed pre-processing
• The term comes from the fact that PRE-processing is required for correct PROCESSING to take place
• The boundary line between pre-processing and processing is often fuzzy
21. Image Pre-Processing
• Create a more faithful representation through:
  – Geometric correction
  – Radiometric correction
  – Atmospheric correction
• Can also make the image easier to interpret using "image enhancement"
• Rectification removes distortion introduced by the platform, sensor, Earth, and atmosphere
22. Factors affecting RS images
• Which factors influence RS image acquisition?
  – Sensor characteristics
  – Earth/satellite geometry
  – Acquisition method: satellite or airborne
  – Atmosphere (scattering, absorption, …)
  – Others: …
• After correction, however, remote sensing images become comparable:
  – In time (e.g., monitoring)
  – Between sensors (e.g., MODIS and IRS)
  – Between different acquisitions by the same sensor
23. Radiometric correction
• Radiometric correction, or radiometric calibration, is a procedure meant to correctly estimate the target reflectance from the measured incoming radiation
• Radiometric calibration includes the following steps:
  – Sensor normalization:
    • Correcting the data for sensor irregularities (sensor noise)
    • Converting the data so they accurately represent the reflected or emitted radiation measured by the sensor
  – DN to at-sensor radiance conversion
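The DN-to-radiance step above is typically a linear rescaling. A minimal sketch, where the gain and offset are hypothetical calibration coefficients (real values come from the sensor's metadata, e.g., a Landsat MTL file):

```python
import numpy as np

# Hypothetical calibration coefficients (assumed for illustration).
gain = 0.05    # radiance units per DN
offset = 1.2   # radiance at DN = 0

# Raw digital numbers as recorded by the sensor.
dn = np.array([[0, 100], [200, 255]], dtype=np.uint16)

# Linear DN -> at-sensor radiance conversion.
radiance = gain * dn.astype(np.float64) + offset
print(radiance)
```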
24. Geometric correction
• Transforming an RS image to make it compatible with a given type of Earth surface representation is termed GEOMETRIC CORRECTION
• Creating an equation relating each pair of pixel coordinates in the image with a geographic coordinate pair is called GEOREFERENCING
• Geometric correction often implies COREGISTRATION of an image to another (reference) image or map
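In the simplest georeferencing case, the equation relating pixel and map coordinates is an affine transform. A minimal sketch, where the map origin and pixel size are assumptions chosen for illustration:

```python
# Assumed georeferencing parameters (hypothetical, for illustration):
x0, y0 = 500000.0, 4600000.0   # map coordinates of the top-left corner
px, py = 30.0, -30.0           # pixel size in metres (y decreases downward)

def pixel_to_map(col, row):
    # Affine mapping from (column, row) pixel coordinates to map (x, y).
    return x0 + col * px, y0 + row * py

print(pixel_to_map(0, 0))    # top-left corner
print(pixel_to_map(10, 20))  # 10 columns right, 20 rows down
```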
25. What is image processing?
• Enhancing an image or extracting information or features from an image
• Computerized routines for information extraction (e.g., pattern recognition, classification) from remotely sensed images to obtain categories of information about specific features
• …
26. Image Enhancement
• Image enhancement: improving the interpretability of the image by increasing apparent contrast among various features
  – Contrast manipulation: gray-level thresholding, level slicing, and contrast stretching
  – Spatial feature manipulation: spatial filtering, edge enhancement, and Fourier analysis
  – Multi-image manipulation: band ratioing, principal components, vegetation components, canonical components, …
• Other operations: image reduction, image magnification, transect extraction, contrast adjustments (linear and non-linear), band ratioing, spatial filtering, Fourier transformations, principal components analysis, texture transformations, and image sharpening
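Contrast stretching, the most common of these enhancements, can be sketched as a linear remapping of the observed DN range onto the full 0–255 display range (the input values here are arbitrary illustrative DNs):

```python
import numpy as np

def linear_stretch(band):
    # Remap [min, max] of the band linearly onto [0, 255] for display.
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * 255).astype(np.uint8)

# A low-contrast band whose DNs span only 50-80.
band = np.array([[50, 60], [70, 80]], dtype=np.float64)
print(linear_stretch(band))  # spans the full 0-255 range after the stretch
```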
29. Spatial Feature Enhancement (local operations)
• Spatial filtering / convolution:
  – Low-pass filter: emphasizes regional spatial trends, de-emphasizes local variability
  – High-pass filter: emphasizes local spatial variability
  – Edge enhancement: combines both filters to sharpen edges in the image
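A minimal sketch of the two filters above, assuming SciPy is available; the band values are synthetic, with one bright pixel standing in for local variability:

```python
import numpy as np
from scipy import ndimage

# Synthetic band: uniform background with one bright local anomaly.
band = np.array([[10, 10, 10, 10],
                 [10, 10, 10, 10],
                 [10, 10, 90, 10],
                 [10, 10, 10, 10]], dtype=np.float64)

# Low-pass: a 3x3 mean kernel smooths out the local anomaly.
low_pass = ndimage.convolve(band, np.full((3, 3), 1 / 9.0))

# High-pass: original minus low-pass emphasizes local variability.
high_pass = band - low_pass

print(low_pass[2, 2])   # anomaly smoothed toward the background
print(high_pass[2, 2])  # anomaly stands out strongly
```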
30. Image classification
• The technique of turning RS data into meaningful categories representing surface conditions or classes (feature extraction)
• Spectral pattern recognition procedures classify a pixel based on its pattern of radiance measurements in each band: more common and easier to use
• Spatial pattern recognition classifies a pixel based on its relationship to surrounding pixels: more complex and difficult to implement
• Temporal pattern recognition looks at changes in pixels over time to assist in feature recognition
31. Spectral Classification
Two types of classification:
• Supervised:
  – A priori knowledge of classes
  – Tell the computer what to look for
• Unsupervised:
  – Ex post approach
  – Let the computer look for natural clusters
  – Then try to classify those clusters based on posterior interpretation
32. Supervised Classification
• Better for cases where the validity of the classification depends on the technician's a priori knowledge; you already know what "types" you plan to classify
• Conventional cover classes are recognized in the scene from prior knowledge or other GIS/imagery layers
• Training sites are chosen for each of those classes
• Each training-site "class" results in a cloud of points in n-dimensional "measurement space", representing the variability of different pixels' spectral signatures in that class
33. Supervised Classification
•Here are a bunch of pre-chosen training sites of known cover type
Source: http://mercator.upc.es/nicktutorial/Sect1/nicktutor_1-15.html
Source: F.F. Sabins, Jr., 1987, Remote Sensing: Principles and Interpretation.
34. Supervised Classification
• The next step is for the computer to assign each pixel to the spectral class it appears to belong to, based on the DNs of its constituent bands
• Clustering algorithms look at "clouds" of pixels in spectral "measurement space" from training areas to determine which "cloud" a given non-training pixel falls in
Source: http://mercator.upc.es/nicktutorial/Sect1/nicktutor_1-15.html
Source: F.F. Sabins, Jr., 1987, Remote Sensing: Principles and Interpretation.
35. Supervised Classification
• Algorithms include:
  – Minimum distance to means classification (chain method)
  – Gaussian maximum likelihood classification
  – Parallelepiped classification
• Each will give a slightly different result
• The simplest method is "minimum distance to means": a theoretical centre point of each class's point cloud is plotted based on mean values, an unknown pixel is assigned to the nearest of these centres, and that pixel is then assigned that cover class
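The minimum-distance rule can be sketched as follows; the class means are hypothetical, as if computed from training-site pixels in a two-band (red, NIR) "measurement space":

```python
import numpy as np

# Hypothetical class means in (red, NIR) measurement space, as if
# computed from training-site pixels (illustrative values only).
class_means = {
    "water":      np.array([5.0, 2.0]),    # low red, very low NIR
    "vegetation": np.array([8.0, 45.0]),   # low red, high NIR
    "soil":       np.array([25.0, 30.0]),  # moderate in both bands
}

def classify(pixel):
    # Assign the pixel to the class whose mean is nearest (Euclidean).
    return min(class_means, key=lambda c: np.linalg.norm(pixel - class_means[c]))

print(classify(np.array([7.0, 40.0])))  # vegetation
print(classify(np.array([4.0, 3.0])))   # water
```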
37. Unsupervised Classification
• Assumes no prior knowledge
• The computer groups all pixels according to their spectral relationships and looks for natural clusterings
• Assumes that data in different cover classes will not belong to the same grouping
• Once the clusters are created, the analyst assesses their utility and can adjust the clustering parameters
Source: F.F. Sabins, Jr., 1987, Remote Sensing: Principles and Interpretation.
[Figure: pixels in measurement space grouped into spectral class 1 and spectral class 2]
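A minimal sketch of this clustering idea in the spirit of k-means, using synthetic two-band pixels drawn around two hypothetical spectral classes (no training data is used; the cluster centres are discovered from the pixels themselves):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-band pixels around two hypothetical spectral classes.
cluster_a = rng.normal([10, 40], 1.0, size=(50, 2))  # e.g., vegetation-like
cluster_b = rng.normal([30, 10], 1.0, size=(50, 2))  # e.g., soil-like
pixels = np.vstack([cluster_a, cluster_b])

centers = pixels[[0, 50]].copy()  # crude initial guesses, one per cluster
for _ in range(10):
    # Assign each pixel to its nearest centre, then recompute the means.
    d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([pixels[labels == k].mean(axis=0) for k in range(2)])

print(np.round(centers))  # close to the two underlying class means
```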