HELLO!!
I AM KNIGHTFOX
you can find me at
kaushalgadariya@gmail.com
DIGITAL IMAGE
PROCESSING
CONTENTS
1. Data products
2. Types of data products
3. Data interpretation
4. Visual interpretation
5. Digital interpretation
6. Image correction/rectification
7. Image enhancement
8. Information extraction
9. Image classification
10. Unsupervised classification
11. Supervised classification
DATA PRODUCTS
• Digital data products
• Pictorial data products
DATA PRODUCTS
Remote sensing (RS) data products are of two types:
• Pictorial data
• Digital data
DIGITAL DATA PRODUCTS
Digital data are given in the form of an array of small cells with quantitative values, which are a function of the EM energy radiated from all objects within the field of view.
A digital data product is called a digital image.
It is a 2-D array of pixels (picture elements).
Each pixel represents an area on the earth’s surface.
Each pixel has an intensity value (a digital number) & a location address (given by its row & column number).
The intensity value represents the measured solar radiance in a given wavelength band reflected from the ground.
The location of each pixel maps to its own latitude & longitude value (so it can be found easily).
The digital image is a vast matrix of numbers, stored on magnetic tape or other computer-compatible media.
DIGITAL IMAGE
A digital image is a two-dimensional array of discrete image elements, or pixels, representing the spatial distribution of different parameters such as electromagnetic radiation, emission, temperature, or some geophysical or topographical elevation.
DIGITAL NUMBER
Each pixel is associated with a pixel value, also called the brightness value, represented as a Digital Number (DN) and stored as binary digits in a computer.
The value is represented on a grey scale within a certain range, such as 0-255 for an 8-bit image (2^8 = 256 levels); 0 and 255 represent black and white respectively.
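The DN layout can be sketched in code. This is only an illustration (Python with NumPy is my choice here, not something the deck prescribes), and the array values are invented:

```python
import numpy as np

# A digital image as a 2-D array of Digital Numbers (DNs).
# Hypothetical 3x3 scene; real images have thousands of rows & columns.
image = np.array([[0, 60, 120],
                  [60, 120, 180],
                  [120, 180, 255]], dtype=np.uint8)

# 8-bit image: DNs span 0..255 (2**8 = 256 grey levels)
print(image.min(), image.max())   # 0 = black, 255 = white
print(image[1, 2])                # DN addressed by (row, column) = (1, 2)
```

Each pixel is addressed by its row & column number, exactly as the location address described above.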
PICTORIAL DATA PRODUCTS
These give information in the form of images & aerial photographs.
They are generally taken by sophisticated cameras which use the visible part of electromagnetic energy.
Therefore, aerial photographs give an exact view of objects at a reduced scale.
The images may be black & white or coloured (depending on the camera).
An image provided by a satellite is called a satellite image.
These images are taken by sensors which use:
 RGB - the visible portion of EMR
 Infrared - invisible
A satellite image may be black & white or coloured.
Black & white image
A B/W image can be produced from each band of digital data.
For a particular band, the B/W image is generated by assigning different shades of grey to its digital values.
Coloured images
Different shades of red, green & blue (the visible part of EMR) are assigned to particular band data.
When the three are combined, they give a multi-coloured image.
Conditions:
1. If images are taken in the blue, green & red bands, they can be combined to give a natural-colour image. OR
2. If the images are taken in the green, red & infrared bands, and assigned blue, green & red respectively, then combining them produces a False Colour Composite (FCC) image.
 As we know, vegetation reflects strongly in the near infrared, and as noted above the infrared band is assigned red.
 So vegetation is shown in red.
 The FCC image does not give the exact view of the earth’s surface the way aerial photographs do.
 Only an expert can interpret it.
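The FCC band assignment can be sketched as a simple array stack. A sketch only (Python/NumPy is my choice; the band arrays are random stand-ins for real sensor bands):

```python
import numpy as np

# False colour composite sketch: display green, red & infrared bands through
# the blue, green & red display channels respectively.
rng = np.random.default_rng(0)
shape = (4, 4)
green = rng.integers(0, 256, size=shape, dtype=np.uint8)
red   = rng.integers(0, 256, size=shape, dtype=np.uint8)
nir   = rng.integers(0, 256, size=shape, dtype=np.uint8)

# Channel order is (R, G, B): infrared -> red, red -> green, green -> blue.
# This is why strongly NIR-reflecting vegetation appears red in an FCC.
fcc = np.dstack([nir, red, green])
```

Swapping which band feeds which display channel is all that separates a natural-colour composite from an FCC.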
DATA
INTERPRETATION
Visual interpretation
Digital interpretation
DATA INTERPRETATION
As we know, there are two major types of remote sensing data products:
 Pictorial data products
 Such as aerial photographs & satellite imageries, which are interpreted visually.
 Digital data products
 The digital images, which are interpreted mathematically using computer software.
So, there are two ways of interpreting remote sensing data:
• Visual interpretation
• Digital interpretation
VISUAL
INTERPRETATION
• intro
• limitations
VISUAL INTERPRETATION
Both aerial photographs & satellite images are interpreted visually.
“Photogrammetry” is the science which studies the interpretation of aerial photographs.
Instruments required to interpret aerial photographs:
 Pocket stereoscope
 Mirror stereoscope
 Plotter
All the above instruments are used in photogrammetry for measuring the area, height & slope of different parts of the earth photographed.
SATELLITE IMAGE
INTERPRETATION
It is the art of examining images for the purpose of identifying objects & judging their significance.
Note*: interpretation studies the RS image logically & attempts to identify, measure & evaluate the significance of natural & cultural features.
Image interpretation requires extensive training & is labour intensive.
Information extraction from an image is based on characteristics of image features such as:
Size
Shape
Tone
Texture
Shadow
Pattern
Association
LIMITATIONS OF VISUAL
INTERPRETATION
The range of grey values produced on a print is limited in comparison to the digital form.
Human eyes can recognize a limited number of colour tones, so full advantage of the radiometric resolution cannot be taken.
Visual interpretation poses serious limitations when we want to combine data from various sources.
DIGITAL
INTERPRETATION
• Image correction
• Image enhancement
• Information extraction
DIGITAL INTERPRETATION
It facilitates quantitative analysis of digital data with the help of computers to extract information about the earth’s surface.
It is popularly known as “Image Processing”.
Image processing deals with:
 Image correction/rectification
 Image enhancement
 Information extraction
IMAGE CORRECTION
It means correcting the errors in a digital image.
Errors result from two causes, giving two types of error:
 Radiometric error
 Geometric error
Both radiometric and geometric errors/noise in an image are reduced through different techniques with the help of computer software.
ERRORS/DISTORTION
Radiometric error
Results from a defect in the sensor (e.g. if one detector out of N detectors does not work).
Geometric error
Results from earth rotation, spacecraft velocity, or atmospheric attenuation.
IMAGE RECTIFICATION AND
RESTORATION
Remotely sensed images are taken from a great distance from the surface of the earth.
So, they are affected by various parameters such as the atmosphere, solar illumination, earth rotation, and sensor motion with its optical, mechanical & electrical components, which cause distortions in the imagery.
The intent of image rectification and restoration is to correct the distortions that arise in the image acquisition process, both geometrically and radiometrically.
ERROR GENERATION
Errors are generated by:
• the digital image acquisition unit: digital camera, along-track scanner, across-track scanner
• the platform: airborne, spaceborne
• the atmosphere
TYPES OF IMAGE
CORRECTION
• Geometric correction
• Radiometric correction
GEOMETRIC DISTORTION
Geometric distortions are the errors in the position of
a pixel relative to other pixels in the scene and with
respect to their absolute position in a particular
projection system.
TYPES OF GEOMETRIC
DISTORTION
• Systematic or predictable
• Random or unpredictable
Random distortions and residual unknown systematic distortions are difficult to account for mathematically.
These are corrected using a procedure called rectification.
RECTIFICATION
Rectification is the process of correcting an image so that it can be represented on a plane surface.
It includes the process known as rubber sheeting, which involves stretching and warping the image to georegister it, using control points shown in the image and known control points on the ground, called Ground Control Points (GCPs).
Rectification is also called georeferencing.
GROUND CONTROL POINT
(GCP)
GCPs are chosen such that they can be easily identified on the image and their geographic co-ordinates can be obtained on the ground (using GPS) or from an existing geo-referenced map or image.
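A common way to use GCPs is to fit a low-order polynomial mapping from image coordinates to map coordinates by least squares. A minimal first-order (affine) sketch; all GCP coordinates below are invented for illustration:

```python
import numpy as np

# Rectification sketch: fit an affine mapping (row, col) -> (x, y) over GCPs.
gcp_img = np.array([[10, 12], [200, 15], [20, 180], [210, 190]], float)  # row, col
gcp_map = np.array([[2 * c + 100, 1000 - r] for r, c in gcp_img])        # x, y

# Design matrix [row, col, 1]; solve the x and y coefficients separately
A = np.column_stack([gcp_img, np.ones(len(gcp_img))])
coef_x, *_ = np.linalg.lstsq(A, gcp_map[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, gcp_map[:, 1], rcond=None)

def to_map(row, col):
    """Transform an image pixel address to rectified map coordinates."""
    return (coef_x @ [row, col, 1.0], coef_y @ [row, col, 1.0])
```

Real rectification software fits the same kind of transform (often higher order), then checks residual errors at the GCPs before resampling the image.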
HOW TO RECTIFY?
• Image-to-map rectification
• Image-to-image registration
GEOMETRIC CORRECTION
PROCEDURE
• Spatial interpolation: accounts for the geometric relationship between pixel locations in the distorted image and their locations in the corrected (map) image.
• Intensity interpolation: accounts for the brightness (DN) value assigned to each pixel of the corrected image, by resampling the original DNs.
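The intensity interpolation step can be sketched with the simplest resampling choice, nearest neighbour (Python/NumPy used for illustration; the 2x2 input is made up):

```python
import numpy as np

# Nearest-neighbour resampling: each output pixel takes the DN of the closest
# input pixel, so no new DN values are invented (unlike bilinear or cubic).
src = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)

def resample_nearest(img, out_rows, out_cols):
    # map each output index back to the nearest source index
    rows = (np.arange(out_rows) * img.shape[0] / out_rows).astype(int)
    cols = (np.arange(out_cols) * img.shape[1] / out_cols).astype(int)
    return img[np.ix_(rows, cols)]

out = resample_nearest(src, 4, 4)   # 2x2 image resampled to 4x4
```

Preserving the original DNs is why nearest neighbour is often preferred before classification, at the cost of a slightly blocky appearance.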
RADIOMETRIC DISTORTION
The radiance value recorded for each pixel represents the reflectance/emittance property of the surface features.
But the recorded value does not coincide with the actual reflectance/emittance of the objects.
FACTORS CAUSING
RADIOMETRIC DISTORTIONS
• Sun’s azimuth and elevation
• Viewing geometry
• Atmospheric conditions
• Sensor characteristics
RADIOMETRIC CORRECTION
To obtain the actual reflectance/emittance of surface features, radiometric distortions must be compensated for.
The main purpose of radiometric correction is to reduce the influence of these errors.
NOISE
An error due to the sensor’s characteristics is referred to as noise.
TYPES OF RADIOMETRIC
CORRECTION
• Sun angle and topographic correction
• Atmospheric correction
• Detector response calibration: de-striping, removal of missing scan lines, random noise removal, vignette removal
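De-striping can be sketched with a simple per-detector equalization. An illustration only, on synthetic data; real corrections use the sensor's calibration coefficients rather than scene statistics:

```python
import numpy as np

# De-striping sketch: with an across-track scanner, a drifting detector biases
# every N-th scan line. Shift each detector's lines to the scene mean.
rng = np.random.default_rng(1)
img = rng.normal(100.0, 10.0, size=(12, 8))
img[2::4] += 25.0                    # detector 2 of 4 reads high -> stripes

n_det = 4
scene_mean = img.mean()
corrected = img.copy()
for d in range(n_det):
    lines = corrected[d::n_det]      # every n_det-th line belongs to detector d
    corrected[d::n_det] = lines - lines.mean() + scene_mean
```

After the shift, every detector's lines share the same mean, so the periodic striping disappears.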
IMAGE ENHANCEMENT
It deals with the manipulation of data to improve its quality for interpretation.
Some digital images lack adequate contrast, as a result of which different objects cannot be recognized properly.
So, the image requires contrast improvement.
Through different image enhancement techniques, contrast is improved in the digital image.
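The most basic such technique, a linear contrast stretch, can be sketched as follows (Python/NumPy used for illustration; the tiny image is made up):

```python
import numpy as np

# Linear contrast stretch: remap the image's actual DN range [lo, hi] onto the
# full 0-255 display range so small differences become visible.
img = np.array([[60, 80],
                [100, 120]], dtype=np.uint8)

lo, hi = int(img.min()), int(img.max())
stretched = ((img.astype(float) - lo) / (hi - lo) * 255).round().astype(np.uint8)
```

Here DNs that spanned only 60-120 now span the full 0-255 grey scale, which is exactly the contrast improvement described above.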
INFORMATION EXTRACTION
After image correction/rectification and contrast enhancement, information is extracted from the digital image, which is the ultimate goal of the interpreter.
1. The spectral values of pixels are analysed by computer to identify/classify objects on the earth’s surface.
2. Spectrally homogeneous pixels in the image are grouped together & differentiated from other groups.
3. In this way, different features of the earth are recognized & classified.
4. Field knowledge & other sources of information also help in the recognition & classification processes.
IMAGE
CLASSIFICATION
• Unsupervised
classification
• Supervised classification
• Hybrid classification
• Classifiers
IMAGE CLASSIFICATION
Two major categories of image classification techniques:
• Unsupervised (calculated by software) classification
• Supervised (human-guided) classification
UNSUPERVISED
CLASSIFICATION
Here the outcomes (groupings of pixels with common characteristics) are based on the software’s analysis of an image, without the user providing sample classes.
The computer uses clustering techniques to determine which pixels are related and groups them into classes.
The user can specify which algorithm the software will use and the desired number of output classes, but otherwise does not aid in the classification process.
However, the user must have knowledge of the area being classified, because the groupings of pixels with common characteristics produced by the computer have to be related to actual features on the ground (such as wetlands, developed areas, coniferous forests, etc.).
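The clustering the software performs is often k-means or a variant. A bare-bones sketch, not any specific package's implementation; the two-band pixel values are invented:

```python
import numpy as np

# Unsupervised classification sketch: k-means grouping pixels by spectral
# similarity into a user-chosen number of classes.
pixels = np.array([[10, 12], [11, 14], [12, 11],          # dark cluster
                   [90, 95], [92, 91], [88, 93]], float)  # bright cluster

k = 2                                  # user picks only the class count
centres = pixels[[0, 3]].copy()        # crude seeding from the data itself
for _ in range(10):
    # assign each pixel to its nearest class centre ...
    dist = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # ... then move each centre to the mean of its assigned pixels
    for c in range(k):
        centres[c] = pixels[labels == c].mean(axis=0)
```

The resulting class labels are just "spectral cluster 0, 1, ..."; relating them to ground features (wetland, forest, ...) is the user's job, as noted above.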
SUPERVISED CLASSIFICATION
It is based on the idea that a user can select pixels in an image that are representative of specific classes and then direct the image processing software to use these training sites as references for the classification of all other pixels in the image.
Training sites (also known as training sets or input classes) are selected based on the knowledge of the user.
The user also sets the bounds for how similar other pixels must be in order to group them together.
These bounds are often set based on the spectral characteristics of the training area, plus or minus a certain increment (often based on ‘brightness’, or strength of reflection, in specific spectral bands).
The user also designates the number of classes that the image is classified into.
SUPERVISED CLASSIFICATION
ALGORITHMS
• Minimum Distance to Mean classifier
• Gaussian Maximum Likelihood classifier
• Parallelepiped classifier
Note*:
Many analysts use a combination of supervised and unsupervised classification processes to develop the final output and classified maps; this is known as HYBRID classification.
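The first of the listed algorithms, minimum distance to mean, can be sketched directly. The class names and band values below are invented training data for illustration:

```python
import numpy as np

# Minimum-distance-to-mean sketch: take the mean spectrum of each class's
# training pixels, then label any pixel with the class whose mean is nearest.
training = {
    "water":      np.array([[12, 8], [14, 9], [11, 10]], float),
    "vegetation": np.array([[40, 90], [42, 95], [38, 92]], float),
}
means = {name: pix.mean(axis=0) for name, pix in training.items()}

def classify(pixel):
    """Assign the class whose mean spectrum is closest (Euclidean distance)."""
    pixel = np.asarray(pixel, float)
    return min(means, key=lambda name: np.linalg.norm(pixel - means[name]))
```

The maximum likelihood and parallelepiped classifiers differ only in the decision rule: likelihood under a per-class Gaussian, or membership in a per-class DN bounding box, respectively.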
THANK YOU
