Digital Image Processing
Shashi Kant Sharma
Image Processing Books
• Gonzalez, R. C. and Woods, R. E., "Digital Image Processing", Prentice Hall.
• Jain, A. K., "Fundamentals of Digital Image Processing", PHI Learning, 1st Ed.
• Bernd, J., "Digital Image Processing", Springer, 6th Ed.
• Burger, W. and Burge, M. J., "Principles of Digital Image Processing", Springer.
• Scherzer, O., "Handbook of Mathematical Methods in Imaging", Springer.
Thursday, January 5, 2023 2
Why do we need Image Processing?
• Improvement of pictorial information for human perception
• Image processing for autonomous machine applications
• Efficient storage and transmission
What is digital image processing?
• An image may be defined as a two-dimensional function f(x,y), where 'x' and 'y' are spatial (plane) coordinates and the amplitude of 'f' at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point.
• When x, y, and the amplitude values of 'f' are all finite, discrete quantities, we call the image a digital image.
• The field of digital image processing refers to
processing digital images by means of digital
computers.
Image Processing Applications
• Automobile driver assistance
– Lane departure warning
– Adaptive cruise control
– Obstacle warning
• Digital Photography
– Image Enhancement
– Compression
– Color manipulation
– Image editing
– Digital cameras
• Sports analysis
– sports refereeing and commentary
– 3D visualization and tracking sports actions
Image Processing Applications(Cont…)
• Film and Video
– Editing
– Special effects
• Image Database
– Content based image retrieval
– visual search of products
– Face recognition
• Industrial Automation and Inspection
– vision-guided robotics
– Inspection systems
• Medical and Biomedical
– Surgical assistance
– Sensor fusion
– Vision based diagnosis
• Astronomy
– Astronomical Image Enhancement
– Chemical/Spectral Analysis
Image Processing Applications(Cont...)
• Aerial Photography
– Image Enhancement
– Missile Guidance
– Geological Mapping
• Robotics
– Autonomous Vehicles
• Security and Safety
– Biometry verification (face, iris)
– Surveillance (fences, swimming pools)
• Military
– Tracking and localizing
– Detection
– Missile guidance
• Traffic and Road Monitoring
– Traffic monitoring
– Adaptive traffic lights
Brief History of IP
• In the 1920s, submarine cables were used to transmit digitized newspaper pictures between London and New York, using the Bartlane cable picture transmission system.
• Specialized printing equipment (e.g., a telegraphic printer) was used to code the picture for cable transmission and to reproduce it at the receiving end.
• In 1921, the printing procedure was changed to photographic reproduction from tapes perforated at the telegraph receiving terminals.
• This improved both tonal quality and resolution.
Brief History of IP(Cont…)
• The Bartlane system was capable of coding 5 distinct brightness levels. This was increased to 15 levels by 1929.
• Improvement of processing techniques continued for the next 35 years.
• In 1964, computer processing techniques were used at the Jet Propulsion Laboratory to improve pictures of the Moon transmitted by Ranger 7.
• This was the basis of modern image processing techniques.
Image Processing Steps
Components of IP System
Image Acquisition Process
Image Sensing and Acquisition
• Image acquisition using a single sensor
• Using sensor strips
Image Representation
An image is a 2-D light intensity function f(x,y):

    f(x,y) = r(x,y) · i(x,y), where

r(x,y) = reflectivity of the surface at the corresponding image point,
i(x,y) = intensity of the incident light.

A digital image f(x,y) is discretized both in spatial coordinates and in brightness. It can be considered as a matrix whose row and column indices specify a point in the image and whose element value gives the gray level at that point, known as a pixel (or pel).
Image Representation (Cont..)
Image Representation in Matrix form:

    f(x,y) = | f(0,0)    f(0,1)    ...  f(0,N-1)   |
             | f(1,0)    f(1,1)    ...  f(1,N-1)   |
             | ...       ...       ...  ...        |
             | f(M-1,0)  f(M-1,1)  ...  f(M-1,N-1) |
Image Representation (Cont..)
    f(x,y) = i(x,y) · r(x,y)

f(x,y): intensity at the point (x,y)
i(x,y): illumination at the point (x,y)
        (the amount of source illumination incident on the scene)
r(x,y): reflectance/transmissivity at the point (x,y)
        (the amount of illumination reflected/transmitted by the object)

where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1
Image Representation (Cont..)
• By the theory of real numbers, between any two given points there are infinitely many points.
• By this theory, an image would have to be represented by an infinite number of points.
• Each such image point may take one of infinitely many possible intensity/color values, needing an infinite number of bits.
• Obviously, such a representation is not possible in any digital computer.
Image Sampling and Quantization
• From the above slides we know that we need some other way to represent an image in digital format.
• So we consider a discrete set of points known as a grid, and in each rectangular grid cell take the intensity of a particular point. This process is known as sampling.
• Image representation by a 2-D finite matrix – Sampling.
• Each matrix element represented by one of a finite set of discrete values – Quantization.
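The sampling-then-quantization pipeline can be sketched in NumPy; the continuous scene function and the 3-bit quantizer below are illustrative assumptions, not part of the slides:

```python
import numpy as np

# Hypothetical continuous scene: intensity varies smoothly in [0, 1].
def scene(x, y):
    return 0.5 + 0.5 * np.sin(x) * np.cos(y)

# Sampling: evaluate the scene only on a discrete M x N grid of points.
M, N = 4, 4
xs = np.linspace(0, np.pi, M)
ys = np.linspace(0, np.pi, N)
sampled = scene(xs[:, None], ys[None, :])     # M x N matrix of samples

# Quantization: map each sample to one of 2^b discrete gray levels.
b = 3                                         # 3 bits -> 8 levels
levels = 2 ** b
quantized = np.round(sampled * (levels - 1)).astype(np.uint8)
# 'quantized' is now a digital image: finite grid, finite gray levels.
```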
Colour Image Processing
• Why do we need CIP when we can get information from a black and white image itself?
1. Colour is a very powerful descriptor; using colour information we can extract objects of interest from an image very easily, which is not so easy in some cases using a black & white or simple gray-level image.
2. Human eyes can distinguish between thousands of colours and colour shades, whereas in a black and white or gray-scale image we can distinguish only about a few dozen different gray levels.
Color Image processing(Cont…)
• The colour that humans perceive in an object is determined by the light reflected from the object.
[Figure: illumination source → scene → reflection → human eye]
Colour Image Processing(Cont...)
• In CIP there are 2 major areas:
1. FULL CIP: Images acquired by a full-colour TV camera or a full-colour scanner; all the colours you perceive are present in the images.
2. PSEUDO CIP: The problem of assigning certain colours to ranges of gray levels. Pseudo CIP is mostly used for human interpretation, since it is very difficult for us to distinguish between two gray-level ranges whose intensity values are very near to each other.
Colour Image Processing(Cont...)
• Problem with CIP
Interpretation of colour by the human eye is a psychophysiological problem, and we have not yet fully understood the mechanism by which we really interpret a colour.
Colour Image Processing(Cont...)
• In 1666, Isaac Newton discovered the colour spectrum using an optical prism.
Colour Image Processing(Cont...)
• We perceive colour depending on the nature of the light reflected by the object's surface.
• The spectrum of light (spectrum of energy) in the visible range within which we are able to perceive colour spans about 400 nm to 700 nm.
Colour Image Processing(Cont...)
• Attributes of Light
Achromatic Light: Light which has no colour component; the only attribute which describes it is its intensity.
Chromatic Light: Contains a colour component.
• 3 quantities that describe the quality of light:
Radiance
Luminance
Brightness
Colour Image Processing(Cont...)
• Radiance: Total amount of energy that comes out of a light source (unit: watts).
• Luminance: Amount of energy perceived by an observer (unit: lumens).
• Brightness: A subjective quantity; practically we cannot measure brightness.
We have 3 primary colours:
Red
Green
Blue
Colour Image Processing(Cont...)
• Newton discovered 7 different colours, but only 3 of them, red, green and blue, are the primary colours. Why?
Because by mixing these 3 colours in some proportion we can get all other colours.
There are around 6–7 million cone cells in our eyes which are responsible for colour sensation.
Around 65% of cone cells are sensitive to red.
Around 33% of cone cells are sensitive to green.
Around 2% of cone cells are sensitive to blue.
Colour Image Processing(Cont...)
• According to the CIE standard:
Red has wavelength 700 nm
Green has wavelength 546.1 nm
Blue has wavelength 435.8 nm
But, practically:
The red sensation spans roughly 450 nm to 700 nm
The green sensation spans roughly 400 nm to 650 nm
The blue sensation spans roughly 400 nm to 550 nm
Colour Image Processing(Cont...)
• Note: In practice, no single wavelength specifies any particular colour.
• In the colour spectrum there are no clear-cut boundaries between any two colours.
• One colour slowly and smoothly merges into another, i.e., there is no sharp transition between colours in the spectrum.
• So we can say that bands of colour give the red, green and blue sensations respectively.
Colour Image Processing(Cont...)
• Mixing the primary colours generates the secondary colours:
 RED + BLUE = Magenta
 GREEN + BLUE = Cyan
 RED + GREEN = Yellow
• Here red, green and blue are the primary colours, and magenta, cyan and yellow are the secondary colours.
• Pigments: A primary colour of pigment is defined by the wavelengths absorbed by the pigment; it reflects the other wavelengths.
Colour Image Processing(Cont...)
• The primary colours of light are the opposite of the primary colours of pigment, i.e., magenta, cyan and yellow are the primary colours of pigment.
• If we mix red, green and blue light in appropriate proportions we get white light; similarly, when we mix magenta, cyan and yellow pigments we get black.
Colour Image Processing(Cont...)
• For hardware, i.e., cameras, printers, display devices and scanners, this concept of primary colour components is used.
• But when we humans perceive a colour, we do not think about how much red, green and blue is mixed in that particular colour.
• The attributes by which we humans differentiate, recognize or distinguish colours are: Brightness, Hue and Saturation.
Colour Image Processing(Cont...)
• Spectrum colours are not diluted, i.e., spectrum colours are fully saturated: no white light or white component is added to them.
• Example: Pink is not a spectrum colour.
Red + white = pink
Here red is fully saturated; pink is not.
• So Hue + Saturation indicates the chromaticity of light, and Brightness gives a sensation of intensity.
Colour Image Processing(Cont...)
• Brightness: Achromatic notion of intensity.
• Hue: Represents the dominant wavelength present in a mixture of colours.
• Saturation: Indicates the purity of a colour. E.g., when we say a colour is red, we may have various shades of red; saturation tells how much white light has been mixed into that particular colour to dilute it.
Colour Image Processing(Cont...)
• The amounts of red, green and blue components needed to form another colour are known as the tristimulus values, (X, Y, Z).
• Chromatic coefficients: x = X/(X+Y+Z) for red, y = Y/(X+Y+Z) for green, z = Z/(X+Y+Z) for blue.
• Here x + y + z = 1.
• So any colour can be specified by its chromatic coefficients, or equivalently by a point on a chromaticity diagram.
Colour Image Processing(Cont...)
• Here z = 1 − (x + y). In the chromaticity diagram, the boundary carries all the spectrum colours, and the point of equal energy corresponds to white.
Colour Image Processing(Cont...)
• Colour Models: A coordinate system within which a specified colour is represented by a single point.
• RGB, CMY, CMYK: hardware oriented.
• HSI (Hue, Saturation, Intensity): application/perception oriented.
• In the HSI model, the I component gives the gray-scale information, while H and S taken together give the chromatic information.
Colour Image Processing(Cont...)
• RGB Colour Model: Here a colour is represented by 3 primary colours, i.e., red, green and blue.
• With 24-bit RGB we can have 2^24 different colour combinations, but in practice a subset of 216 colours (the "safe" colours) is reproduced consistently across systems.
• The RGB colour model is based on a Cartesian coordinate system.
• This is an additive colour model.
• Active displays, such as computer monitors and television sets, emit combinations of red, green and blue light.
Colour Image Processing(Cont...)
• RGB Colour Model: the RGB 24-bit colour cube.
Colour Image Processing(Cont...)
• RGB example: an original image and its red, green and blue bands.
Colour Image Processing(Cont...)
• CMY Colour Model: the secondary colours of light, or primary colours of pigments; used to generate hardcopy output.
(Source: www.hp.com)
Passive displays, such as colour inkjet printers, absorb light instead of emitting it. Combinations of cyan, magenta and yellow inks are used. This is a subtractive colour model.
Colour Image Processing(Cont...)
• Equal proportions of C, M and Y give a muddy black, not a pure black. So, to get pure black with CMY, another component is added, known as the black component, giving the CMYK model.
• In CMYK, "K" is the black component.
Conversion from RGB to CMY (values normalized to [0, 1]):

    [ C ]   [ 1 ]   [ R ]
    [ M ] = [ 1 ] - [ G ]
    [ Y ]   [ 1 ]   [ B ]
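A minimal NumPy sketch of the RGB-to-CMY relation, assuming values normalized to [0, 1]:

```python
import numpy as np

# CMY from RGB: [C, M, Y] = [1, 1, 1] - [R, G, B] (values in [0, 1]).
def rgb_to_cmy(rgb):
    return 1.0 - np.asarray(rgb, dtype=float)

# Pure red reflects red, so its ink mix must absorb green and blue:
print(rgb_to_cmy([1.0, 0.0, 0.0]))   # [0. 1. 1.]
# White needs no ink at all:
print(rgb_to_cmy([1.0, 1.0, 1.0]))   # [0. 0. 0.]
```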
Colour Image Processing(Cont...)
• HSI Colour Model (based on human perception of colours)
• H = the dominant spectrum colour present in a particular colour; a subjective measure of colour.
• S = how much a pure spectrum colour is diluted by mixing white into it; mixing more white with a colour reduces its saturation. Mixing white in different proportions with a colour gives different shades of that colour.
• I = the achromatic notion of brightness, i.e., the brightness or darkness of an object.
Colour Image Processing(Cont...)
• HSI Colour Model
H – dominant wavelength;  S – purity (% white);  I – intensity.
Colour Image Processing(Cont...)
• HSI Colour Model: RGB → HSI conversion.
Colour Image Processing(Cont...)
• Pseudo-color Image Processing
Assign colors to gray values based on a specified
criterion
For human visualization and interpretation of
gray-scale events
Intensity slicing
Gray level to color transformations
Colour Image Processing(Cont...)
• Pseudo-colour Image Processing (cont…)
Intensity slicing
 First consider an intensity image as a 3-D surface (intensity plotted over the xy-plane).
 Place a plane parallel to the xy-plane; it slices the surface into two parts.
 We can assign a different colour to each side of the plane, i.e., any pixel whose intensity level is above the plane is coded with one colour, and any pixel below the plane with the other.
 Levels that lie on the plane itself may be arbitrarily assigned one of the two colours.
Colour Image Processing(Cont...)
Intensity slicing
 Geometric interpretation of the intensity slicing
technique
Colour Image Processing(Cont...)
Intensity slicing
 Let there be 'L' intensity values in total: 0 to (L-1).
 l0 corresponds to black [f(x,y) = 0].
 lL-1 corresponds to white [f(x,y) = L-1].
 Suppose 'P' planes perpendicular to the intensity axis (i.e., parallel to the image plane) are placed at the intensity levels l1, l2, l3, ..., lP.
 Here, 0 < P < L-1.
Colour Image Processing(Cont...)
• Intensity slicing
 The P planes partition the gray scale (intensity) into (P+1) intervals, V1, V2, V3, ..., VP+1.
 The colour assigned to location (x,y) is given by the relation
 f(x,y) = Ck if f(x,y) ∈ Vk
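The slicing rule f(x,y) = Ck for f(x,y) ∈ Vk can be sketched in NumPy; the threshold level and the two RGB colours below are arbitrary illustrative choices:

```python
import numpy as np

# Intensity slicing: P planes split the gray scale into P+1 intervals,
# and each interval V_k is mapped to one colour C_k.
def intensity_slice(gray, planes, colors):
    idx = np.digitize(gray, planes)      # interval index per pixel
    return np.asarray(colors)[idx]       # look up the colour C_k

gray = np.array([[10, 100], [150, 250]], dtype=np.uint8)
# One plane at level 128: below -> blue, at-or-above -> red.
out = intensity_slice(gray, planes=[128], colors=[(0, 0, 255), (255, 0, 0)])
# out has shape (2, 2, 3); pixel (0,0) is blue, pixel (1,1) is red.
```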
Colour Image Processing(Cont...)
• Intensity slicing can be used to:
 Give the ROI (region of interest) one colour and the rest another colour
 Keep the ROI as it is and assign the rest one colour
 Keep the rest as it is and give the ROI one colour
Colour Image Processing(Cont...)
• Pseudo-colouring is also used for gray-to-colour image transformation.
• Gray level to colour transformation
Colour Image Processing(Cont...)
• Gray level to colour transformation
fR(x,y) = f(x,y)
fG(x,y) = 0.33 f(x,y)
fB(x,y) = 0.11 f(x,y)
 Combining these 3 planes we get the pseudo-colour image.
 Application of pseudo CIP: baggage-inspection machines at railways and airports.
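Using the three per-channel transfer functions above, a gray-to-colour (pseudo-colour) mapping can be sketched as:

```python
import numpy as np

# Pseudo-colour via the slide's per-channel transfer functions:
# f_R = f, f_G = 0.33 f, f_B = 0.11 f.
def gray_to_pseudocolor(gray):
    f = np.asarray(gray, dtype=float)
    return np.stack([f, 0.33 * f, 0.11 * f], axis=-1)   # H x W x 3

gray = np.array([[0, 128], [255, 64]], dtype=np.uint8)
color = gray_to_pseudocolor(gray)
# Bright gray pixels come out strongly red-tinted, dark ones stay dark.
```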
Image Enhancement
• Intensity Transformation Functions
• Enhancing an image provides better contrast and more detail compared to the non-enhanced image. Image enhancement has many applications: it is used to enhance medical images, images captured in remote sensing, satellite images, etc.
• The transformation function is given by
s = T(r)
• where r is a pixel value of the input image and s is the corresponding pixel value of the output image. T is a transformation function that maps each value of r to a value of s.
Image Enhancement(Cont…)
• Image enhancement can be done through gray-level transformations, which are discussed below.
• There are three basic gray-level transformations:
• Linear
• Logarithmic
• Power-law
Image Enhancement(Cont…)
• Linear Transformation
 Linear transformation includes the identity and negative transformations.
 The identity transformation is a straight line: each value of the input image is mapped directly to the same value of the output image, so the output image is identical to the input. Hence it is called the identity transformation.
• Negative Transformation
 The second linear transformation is the negative transformation, which is the inverse of the identity transformation. In the negative transformation, each input value is subtracted from L-1 and mapped onto the output image.
Image Enhancement(Cont…)
• Negative Transformation
s = (L – 1) – r
s = 255 – r
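The negative s = (L-1) − r is a one-liner on an 8-bit array; the sample values are illustrative:

```python
import numpy as np

# Image negative for an 8-bit image: s = (L - 1) - r with L = 256.
img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
negative = 255 - img
# 0 -> 255, 100 -> 155, 200 -> 55, 255 -> 0: dark and bright are swapped.
```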
Image Enhancement(Cont…)
• Logarithmic Transformations
 The log transformation is defined by the formula
s = c log(1 + r)
 where s and r are the pixel values of the output and input images and c is a constant. The value 1 is added to each input pixel value because if a pixel has intensity 0, log(0) is undefined; adding 1 makes the minimum argument of the log equal to 1.
Image Enhancement(Cont…)
• Logarithmic Transformations
 The log transformation compresses the dynamic range: low intensities are spread out (increased), which is what we need to extract more information from dark regions. The maximum information is contained near the center (e.g., of a Fourier spectrum).
 Log transformation is mainly applied in the frequency domain.
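A sketch of the log transform, choosing c so the output still spans the full 8-bit range:

```python
import numpy as np

# s = c * log(1 + r), with c = 255 / log(256) so that r = 255 maps to s = 255.
r = np.arange(256, dtype=float)
c = 255.0 / np.log(256.0)
s = np.round(c * np.log1p(r)).astype(np.uint8)   # log1p(r) = log(1 + r)

# Dark values are spread out while bright values are compressed:
print(s[1], s[255])   # 32 255
```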
Image Enhancement(Cont…)
• Logarithmic Transformation
Image Enhancement(Cont…)
• Power-Law Transformations
    s = c r^γ,  c, γ – positive constants
• The symbol γ is called gamma, which is why this transformation is also known as the gamma transformation.
• It curves the grayscale components either to brighten the intensity (when γ < 1) or to darken the intensity (when γ > 1).
Image Enhancement(Cont…)
• Power – Law transformations
Image Enhancement(Cont…)
• Power-Law Transformations
• Varying the value of γ varies the enhancement of the image. Different display devices/monitors have their own gamma correction, which is why they display the same image at different intensities.
• This type of transformation is used to adapt images to different display devices, each with its own gamma. For example, the gamma of a CRT lies between about 1.8 and 2.5, which means images displayed on a CRT appear dark unless corrected.
Image Enhancement(Cont…)
• Power-Law Transformations
 Gamma Correction
 Different cameras and video recorders do not capture luminance linearly, and different display devices (monitor, phone screen, TV) do not display luminance linearly either. So one needs to correct for this; the gamma correction function is used to correct an image's luminance:
    s = c r^γ,  e.g.  s = c r^(1/2.5)
to pre-compensate a display whose gamma is 2.5.
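A sketch of gamma correction for the display gamma of 2.5 mentioned above (c = 1, intensities normalized to [0, 1]; the input level is illustrative):

```python
import numpy as np

# Pre-compensate a display with gamma = 2.5: s = r^(1/2.5).
def gamma_correct(img, gamma=2.5):
    r = np.asarray(img, dtype=float) / 255.0   # normalize to [0, 1]
    s = r ** (1.0 / gamma)                     # brighten (exponent < 1)
    return np.round(s * 255).astype(np.uint8)

# A dark input level is lifted substantially before display:
out = gamma_correct(np.array([64]))
print(out[0] > 64)   # True
```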
Image Enhancement(Cont…)
• Piecewise-Linear Transformation Functions
 Three types:
 Contrast Stretching
 Intensity Level Slicing
 Bit-Plane Slicing
Image Enhancement(Cont…)
• Contrast stretching
 Aims to increase the dynamic range of the gray levels in the image being processed.
 Contrast stretching expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.
 Contrast-stretching transformations increase the contrast between the darks and the lights.
Image Enhancement(Cont…)
• Contrast stretching
Image Enhancement(Cont…)
• Contrast stretching
 The locations of (r1,s1) and (r2,s2) control the shape of
the transformation function.
– If r1= s1 and r2= s2 the transformation is a linear
function and produces no changes.
– If r1=r2, s1=0 and s2=L-1, the transformation becomes
a thresholding function that creates a binary image.
– Intermediate values of (r1,s1) and (r2,s2) produce
various degrees of spread in the gray levels of the
output image, thus affecting its contrast.
– Generally, r1≤r2 and s1≤s2 is assumed.
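The piecewise-linear stretch through (r1, s1) and (r2, s2) can be sketched with np.interp, which builds exactly the three line segments; the control points below are illustrative:

```python
import numpy as np

# Piecewise-linear contrast stretch through (r1, s1) and (r2, s2),
# assuming r1 <= r2 and s1 <= s2 as in the slides.
def contrast_stretch(img, r1, s1, r2, s2, L=256):
    r = np.asarray(img, dtype=float)
    s = np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return np.round(s).astype(np.uint8)

# Near-thresholding case: (r1, s1) = (127, 0), (r2, s2) = (128, 255)
# produces an almost binary image.
print(contrast_stretch(np.array([50, 200]), 127, 0, 128, 255))   # [  0 255]
```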
Image Enhancement(Cont…)
Thresholding function
Image Enhancement(Cont…)
• Intensity-level slicing
 Highlighting a specific range of gray levels in an
image.
 One way is to display a high value for all gray levels
in the range of interest and a low value for all
other gray levels (binary image).
 The second approach is to brighten the desired
range of gray levels but preserve the background
and gray-level tonalities in the image
Image Enhancement(Cont…)
• Intensity Level Slicing
Image Enhancement(Cont…)
• Bit-Plane Slicing
• To highlight the contribution made to the total
image appearance by specific bits.
– i.e. Assuming that each pixel is represented by 8 bits,
the image is composed of 8 1-bit planes.
– Plane 0 contains the least significant bit and plane 7
contains the most significant bit.
– Only the higher order bits (top four) contain visually
significant data. The other bit planes contribute the
more subtle details.
– Plane 7 corresponds exactly with an image thresholded
at gray level 128.
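Bit planes fall out of a shift and a mask; the sample image is illustrative:

```python
import numpy as np

# Bit plane k of an 8-bit image: shift right by k, keep the lowest bit.
def bit_plane(img, k):
    return (img >> k) & 1

img = np.array([[129, 64], [200, 7]], dtype=np.uint8)
plane7 = bit_plane(img, 7)   # most significant bit

# Plane 7 equals thresholding at gray level 128, as noted above:
print(np.array_equal(plane7, (img >= 128).astype(np.uint8)))   # True
```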
Image Enhancement(Cont…)
• Bit-Plane Slicing
Image Enhancement(Cont…)
• Histogram Processing
 Two types: (a) Histogram stretching, (b) Histogram equalization.
 Histogram Stretching
 Contrast is the difference between the maximum and minimum pixel intensity.
 A histogram is a pictorial view of the distribution of pixels, showing the frequency of each pixel value.
Image Enhancement(Cont…)
• The histogram of a digital image with gray values r0, r1, ..., rL-1 is the discrete function

    p(rk) = nk / n

nk: number of pixels with gray value rk
n: total number of pixels in the image

The function p(rk) represents the fraction of the total number of pixels with gray value rk.
The shape of a histogram provides useful information for contrast enhancement.
Image Enhancement(Cont…)
• Histogram Processing
[Figures: histograms of a dark image, a bright image, a low-contrast image and a high-contrast image]
Image Enhancement(Cont…)
• Histogram Stretching
Image Enhancement(Cont…)
• Histogram Stretching (cont…)
• The stretch is s = smin + ((smax − smin)/(rmax − rmin)) (r − rmin).
• In the example above, (smin, smax) = (0, 8) and rmin = 0, rmax = 4 are given.
• s − 0 = ((8 − 0)/(4 − 0)) (r − 0)
• s = (8/4) r
• s = 2r
• Now we have a relation between r and s, so we get the different values of 's' for r from rmin to rmax.
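The same stretch in NumPy, reproducing the s = 2r mapping of the example:

```python
import numpy as np

# Linear histogram stretch: map [rmin, rmax] onto [smin, smax].
def stretch(img, smin, smax):
    r = np.asarray(img, dtype=float)
    rmin, rmax = r.min(), r.max()
    return smin + (smax - smin) * (r - rmin) / (rmax - rmin)

r = np.array([0, 1, 2, 3, 4])
print(stretch(r, 0, 8))   # [0. 2. 4. 6. 8.]
```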
Image Enhancement(Cont…)
• Histogram Stretching (cont…)
Image Enhancement(Cont…)
• Histogram Equalization
– Recalculate the picture gray levels to make the
distribution more equalized
– Used widely in image editing tools and computer
vision algorithms
– Can also be applied to color images
Objective of histogram equalization
• We want to find a transformation s = T(r) that makes the equalized distribution ps(s) a flat line (a constant), whatever the input distribution pr(r) of the original image.

[Figure: an input distribution pr(r), in which some levels are more probable than others, is mapped through s = T(r), 0 ≤ r, s ≤ L-1, to an output distribution ps(s) in which all levels have the same probability.]

We take the transformation

    s = T(r) = (L-1) ∫0..r pr(w) dw                 ...(1)

and want to prove that ps(s) = constant.

• Basic probability theory: since T is monotonic, ps(s) ds = pr(r) dr, so

    ps(s) = pr(r) (dr/ds)                           ...(2)

• Fundamental theorem of calculus:

    d/dx ∫a..x f(t) dt = f(x)                       ...(3)

Differentiating (1) with respect to r using (3):

    ds/dr = dT(r)/dr = (L-1) d/dr ∫0..r pr(w) dw = (L-1) pr(r)

Substituting into (2):

    ps(s) = pr(r) · 1/((L-1) pr(r)) = 1/(L-1) = constant.
Image Enhancement(Cont…)
• Histogram Equalization
• Let rk, k ∈ [0..L-1], be the intensity levels and let p(rk) be the normalized histogram function.
• Histogram equalization applies a transformation s = T(r) to each intensity 'r', where 'r' ranges over 0 to L-1.
• As T(r) is continuous and differentiable,
∫ ps(s) ds = ∫ pr(r) dr = 1;
differentiating both sides with respect to 's' gives the relation used below.
Image Enhancement(Cont…)
• Histogram Equalization (cont…)
 So,  ps(s) = pr(r) (dr/ds)                         ...(1)
 The transformation function T(r) for histogram equalization is:
    s = T(r) = (L-1) ∫0..r pr(w) dw
 Differentiating with respect to 'r':
    ds/dr = d/dr [ (L-1) ∫0..r pr(w) dw ] = (L-1) pr(r)
 From eq. (1) we get:
    ps(s) = pr(r) / ((L-1) pr(r)) = 1/(L-1), which is a constant.
Histogram Equalization : Discrete form for practical use
• From the continuous form
    s = T(r) = (L-1) ∫0..r pr(w) dw
to the discrete form:
    sk = T(rk) = (L-1) Σj=0..k pr(rj) = ((L-1)/MN) Σj=0..k nj ,
    k = 0, 1, 2, ..., L-1
• Recall that to obtain a normalized histogram we set pr(rk) = nk/MN, where MN is the total number of pixels, so that Σ pr(rk) = 1.
Histogram Equalization - Example
• Let f be an image with size 64x64 pixels (MN = 4096) and L = 8, and let f have the intensity distribution shown in the table:

rk   nk     pr(rk) = nk/MN
0    790    0.19
1    1023   0.25
2    850    0.21
3    656    0.16
4    329    0.08
5    245    0.06
6    122    0.03
7    81     0.02

• Applying sk = T(rk) = 7 Σj=0..k pr(rj):
s0 = 7 pr(r0) = 1.33
s1 = 7 (pr(r0) + pr(r1)) = 3.08
s2 = 4.55,  s3 = 5.67,  s4 = 6.23,  s5 = 6.65,  s6 = 6.86,  s7 = 7.00

• Round the values to the nearest integer:
s0 = 1, s1 = 3, s2 = 5, s3 = 6, s4 = 6, s5 = 7, s6 = 7, s7 = 7
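The whole worked example can be checked in a few NumPy lines:

```python
import numpy as np

# Discrete histogram equalization s_k = (L-1) * sum_{j<=k} p(r_j),
# using the 64x64, L = 8 distribution from the example above.
L = 8
nk = np.array([790, 1023, 850, 656, 329, 245, 122, 81])
p = nk / nk.sum()                        # normalized histogram (MN = 4096)
s = np.round((L - 1) * np.cumsum(p))     # equalized, rounded levels

print(s)   # [1. 3. 5. 6. 6. 7. 7. 7.]
```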
Filtering
• Image filtering is used to:
 Remove noise
 Sharpen contrast
 Highlight contours
 Detect edges
 Image filters can be classified as linear or nonlinear.
 Linear filters are also known as convolution filters, as they can be represented using a matrix multiplication.
 Thresholding and image equalisation are examples of
nonlinear operations, as is the median filter.
Filtering(cont…)
• There are two types of processing:
• Point Processing (eg. Histogram equalization)
• Mask Processing
 Two types of filtering methods:
• Smoothing
Linear (Average Filter) and Non-Linear (Median
Filter)
• Sharpening
Laplacian
Gradient
Filtering(Cont…)
• Correlation [1-D & 2-D]
• Convolution [1-D & 2-D]
• In correlation we slide the weight mask over the image as-is to get the output image; to apply convolution we first rotate the weight mask by 180 degrees.
• E.g., 1-D weight: [1 2 3]; after 180-degree rotation: [3 2 1].
• 2-D weight:
    1 2 3
    4 5 6
    7 8 9
  after 180-degree rotation:
    9 8 7
    6 5 4
    3 2 1
Filtering(Cont…)
• 1-D Correlation
• I = [1 2 3 4]
• W = [1 2 3]
• Output (with zero padding at the borders):
    (2·1)+(3·2) = 8
    (1·1)+(2·2)+(3·3) = 14
    (1·2)+(2·3)+(3·4) = 20
    (1·3)+(4·2) = 11
  Output = [8 14 20 11]
• For convolution, just rotate the mask 180 degrees first.
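NumPy reproduces the hand computation; note that np.convolve flips the mask for you, while np.correlate uses it as-is:

```python
import numpy as np

I = np.array([1, 2, 3, 4])
w = np.array([1, 2, 3])

# 'same' mode zero-pads the borders and returns len(I) outputs.
corr = np.correlate(I, w, mode='same')   # mask used as-is
conv = np.convolve(I, w, mode='same')    # mask rotated 180 degrees

print(corr)   # [ 8 14 20 11]  -- matches the worked example
print(conv)   # [ 4 10 16 17]
```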
Filtering(Cont…)
• A filtering method is linear when the output is a
weighted sum of the input pixels. Eg. Average filter
• Methods that do not satisfy the above property are
called non-linear. Eg. Median filter
• Average (or mean) filtering is a method of ‘smoothing’
images by reducing the amount of intensity variation
between neighbouring pixels.
• The average filter works by moving through the image
pixel by pixel, replacing each value with the average value
of neighbouring pixels, including itself.
Filtering(Cont…)
• Average filter mask (2-D): a 3×3 mask with all coefficients equal to 1/9.
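A plain-NumPy sketch of the 3×3 average filter (border handled by edge replication; the tiny test image is illustrative):

```python
import numpy as np

# 3x3 average (box) filter: each output pixel is the mean of its
# 3x3 neighbourhood; the border is handled by edge replication.
def average_filter(img):
    padded = np.pad(img.astype(float), 1, mode='edge')
    H, W = img.shape
    out = np.zeros((H, W))
    for di in range(3):                 # accumulate the 9 shifted copies
        for dj in range(3):
            out += padded[di:di + H, dj:dj + W]
    return out / 9.0

img = np.array([[0, 0, 0], [0, 90, 0], [0, 0, 0]], dtype=np.uint8)
print(average_filter(img)[1, 1])   # 10.0  -- the spike is smeared out
```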
Filtering(Cont…)
• When we apply the average filter, noise is removed but blurring is introduced; to reduce the blurring we use a weighted average filter instead.
Filtering(Cont…)
• Median Filter (non-linear filter)
• Very effective in removing salt-and-pepper (impulsive) noise while preserving image detail.
• Disadvantages: computational complexity; being non-linear, it is harder to analyse.
• The median filter works by moving through the image pixel by pixel, replacing each value with the median value of the neighbouring pixels.
• The pattern of neighbours is called the "window", which slides, pixel by pixel, over the entire image.
• The median is calculated by first sorting all the pixel values from the window into numerical order, and then replacing the pixel being considered with the middle (median) value.
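The sliding-window median can be sketched directly (edge replication at the borders is an assumption; `median_filter` is an illustrative name):

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood
    (edge replication at the borders)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(padded[r:r + k, c:c + k])
    return out

img = np.full((5, 5), 10)
img[2, 2] = 255                    # salt noise
print(median_filter(img)[2, 2])    # 10 -- the outlier is rejected outright
```

Contrast this with the average filter, which would only spread the 255 spike over its neighbours instead of removing it.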
Filtering(Cont…)
• Median filter example (figures on slides 125–130):
• From left to right: the results of a 3 x 3, 5 x 5 and 7 x 7 median filter
Filtering(Cont…)
 Sharpening (high-pass filtering) responds only to gray-level changes in the image, i.e., it is based on differentiation.
• Sharpening is used for edge detection, line detection, and point detection; it also highlights changes.
 Operation of image differentiation:
• Enhances edges and discontinuities (magnitude of the output gray level >> 0)
• De-emphasizes areas with slowly varying gray-level values (output gray level ≈ 0)
 Mathematical basis of filtering for image sharpening:
• First-order and second-order derivatives
• Approximation in the discrete-space domain
• Implementation by mask filtering
Filtering(Cont…)
 Common sharpening filters:
• Gradient (1st-order derivative)
• Laplacian (2nd-order derivative)
• Taking the derivative of an image sharpens it.
• The derivative of an image (i.e., a 2-D function) can be computed using the gradient.
Filtering(Cont…)
 Gradient (rotation-variant, i.e., non-isotropic)
• The x-derivative mask is sensitive to vertical edges; the y-derivative mask is sensitive to horizontal edges.
Filtering(Cont…)
 Gradient
Kernels used in Prewitt edge detection:
Gx = [-1 0 1; -1 0 1; -1 0 1]    Gy = [-1 -1 -1; 0 0 0; 1 1 1]
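A sketch of gradient-based edge detection with the Prewitt kernels (the `correlate2d` helper is illustrative, with edge replication assumed at the borders):

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])   # responds to vertical edges
PREWITT_Y = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])   # responds to horizontal edges

def correlate2d(img, kernel):
    """Apply a 3x3 kernel by correlation with edge replication."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * kernel)
    return out

step = np.zeros((5, 6))
step[:, 3:] = 1.0                          # vertical step edge between columns 2 and 3
gx = correlate2d(step, PREWITT_X)
gy = correlate2d(step, PREWITT_Y)
mag = np.sqrt(gx ** 2 + gy ** 2)           # gradient magnitude
print(mag[2, 2], mag[2, 0])                # 3.0 0.0 -- response only at the edge
```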
Filtering(Cont…)
• Laplacian
∇²f = ∂²f/∂x² + ∂²f/∂y²
• The original (basic) Laplacian mask is
0  1  0
1 -4  1
0  1  0
and the sharpened image is g(x,y) = f(x,y) + C·∇²f(x,y), with C = +1 or C = -1 depending on the sign of the mask's centre coefficient.
• Laplacian (rotation invariant or isotropic): the extended Laplacian mask
1  1  1
1 -8  1
1  1  1
also covers the diagonal neighbours, so it increases sharpness and provides good results.
Image Transforms
• Many times, image processing tasks are best performed in a domain other than the spatial domain.
• Key steps:
(1) Transform the image.
(2) Carry out the task(s) in the transformed domain.
(3) Apply the inverse transform to return to the spatial domain.
Math Review - Complex numbers
• Real numbers: 1, -5.2, …
• Complex numbers: 4.2 + 3.7i, 9.4447 – 6.7i, -5.2 (= -5.2 + 0i), i = √(-1)
• In EE, i is often denoted by j.
Math Review - Complex numbers
• General form: Z = a + bi, where Re(Z) = a is the real part and Im(Z) = b is the imaginary part
• Amplitude: A = |Z| = √(a² + b²)
• Phase: φ = ∠Z = tan⁻¹(b/a)
• Polar coordinates: a = A cos φ and b = A sin φ, so Z = A(cos φ + i sin φ)
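Python's complex type computes both quantities directly:

```python
import cmath

z = 4.2 + 3.7j
A = abs(z)              # amplitude |Z| = sqrt(a^2 + b^2)
phi = cmath.phase(z)    # phase, computed as atan2(b, a), in radians
print(A, phi)

# Converting back from polar coordinates recovers the original number
assert abs(cmath.rect(A, phi) - z) < 1e-12
```

`cmath.phase` uses atan2(b, a), which, unlike a bare tan⁻¹(b/a), resolves the correct quadrant.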
Math Review – Complex Numbers and Cosine Waves
• A cosine wave has three properties:
– Frequency
– Amplitude
– Phase
• A complex number has two properties:
– Amplitude
– Phase
• So complex numbers can represent cosine waves at varying frequencies:
– Frequency 1: Z1 = 5 + 2i
– Frequency 2: Z2 = -3 + 4i
– Frequency 3: Z3 = 1.3 – 1.6i
Simple but great idea!
Fourier Transforms & its Properties
• Jean Baptiste Joseph Fourier (1768-1830)
• Had a crazy idea (1807): any periodic function can be rewritten as a weighted sum of sines and cosines of different frequencies.
• Don't believe it?
– Neither did Lagrange, Laplace, Poisson and other bigwigs.
– Not translated into English until 1878!
• But it's true!
– It is called the Fourier series.
– Possibly the greatest tool used in engineering.
Fourier Transforms & its Properties
• In image processing:
– Instead of the time domain we have the spatial domain (normal image space).
– The frequency domain is the space in which each value at a position F represents the amount that the intensity values in image I vary over a specific distance related to F.
Fourier Transforms & its Properties
• Fourier transforms & inverse Fourier transforms (equations and examples shown as figures on slides 144–147).
Fourier Transforms & its Properties
• As we deal with 2-D discrete images, we need the 2-D discrete Fourier transform (DFT) and its inverse (IDFT).
• If the image is represented as a square array, i.e., M = N, the F.T. and I.F.T. take a symmetric form in which both the forward and inverse transforms carry a 1/N normalization factor.
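The transform pair itself appeared only as an image on the slides; one standard way of writing the 2-D DFT pair is (normalization conventions vary between textbooks — some put the full 1/MN factor on the forward transform, some on the inverse, and for M = N it is often split as 1/N on each):

```latex
F(u,v) = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,
         e^{-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)}

f(x,y) = \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} F(u,v)\,
         e^{\,j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)}
```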
Fourier Transforms & its Properties
• Separability property: the 2-D DFT can be computed as a sequence of 1-D DFTs, first along the rows and then along the columns (equations shown as figures on slides 151–153).
Fourier Transforms & its Properties
• Periodicity: the DFT and its inverse are periodic with period N:
F(u, v) = F(u + N, v) = F(u, v + N) = F(u + N, v + N)
• Scaling: if a signal is multiplied by a scalar quantity 'a', then its Fourier transform is also multiplied by the same scalar quantity 'a'.
Fourier Transforms & its Properties
• Distributivity: the Fourier transform is distributive over addition,
F{f1(x,y) + f2(x,y)} = F{f1(x,y)} + F{f2(x,y)}
but not over multiplication:
F{f1(x,y) · f2(x,y)} ≠ F{f1(x,y)} · F{f2(x,y)}
• Average: evaluating F(u,v) at u = 0, v = 0 removes all the complex exponentials, so F(0,0) is the sum of all pixel values (up to the chosen normalization constant). The average gray level is therefore
f̄ = (1/MN) Σ Σ f(x,y) = F(0,0)/(MN) for the unnormalized DFT.
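The average property can be checked numerically with NumPy's unnormalized FFT, where F(0,0) is the plain pixel sum:

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)
F = np.fft.fft2(img)               # unnormalized DFT: F(0,0) = sum of all pixels
avg = F[0, 0].real / img.size      # divide by MN to get the mean gray level
print(avg, img.mean())             # both print the mean gray level, 7.5
```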
Frequency Domain Filters
• Low-pass filter: it allows the low-frequency range of the signal to pass to the output (useful for noise suppression).
• High-pass filter: it allows the high-frequency range to pass to the output (useful for edge detection).
• D(u,v) is the distance of the point (u,v) from the origin of the frequency rectangle.
• D0 is the cutoff frequency: the ideal low-pass filter passes all frequencies with D(u,v) <= D0 to the output and blocks the rest.
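An ideal low-pass filter can be sketched in a few lines (`ideal_lowpass` is an illustrative name; the spectrum is shifted so that D(u,v) is measured from the centre of the frequency rectangle):

```python
import numpy as np

def ideal_lowpass(shape, D0):
    """H(u,v) = 1 where D(u,v) <= D0, else 0, with D measured from the
    centre of the shifted frequency rectangle."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return (D <= D0).astype(float)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
F = np.fft.fftshift(np.fft.fft2(img))                 # centre the spectrum
H = ideal_lowpass(img.shape, D0=10)
smoothed = np.fft.ifft2(np.fft.ifftshift(F * H)).real # back to the spatial domain
```

The sharp cutoff of the ideal filter is what produces ringing; Butterworth and Gaussian filters avoid it by rolling off gradually.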
Frequency Domain Filters
(Figures: results of ideal and Butterworth low-pass filtering.)
Frequency Domain Filters
• In the above example, for the same cutoff frequency the blurring is more severe with the ideal low-pass filter than with the Butterworth filter, and as the cutoff frequency increases, the ideal low-pass filter produces more undesired lines (ringing) than the Butterworth filter.
Image Restoration
• Image restoration and image enhancement share a common goal: to improve an image for human perception.
• Image enhancement is mainly a subjective process, in which individual opinions are involved in the process design.
• Image restoration is mostly an objective process which:
• utilizes a priori knowledge of the degradation phenomenon to recover the image;
• models the degradation and then inverts it to recover the original image.
• The objective of restoration is to obtain an image estimate which is as close as possible to the original input image.
Image Restoration
If H is a linear, position-invariant process (filter), the degraded image is given in the spatial domain by
g(x,y) = f(x,y) * h(x,y) + η(x,y)
whose equivalent frequency-domain representation is
G(u,v) = F(u,v) H(u,v) + N(u,v)
where h(x,y) is the impulse response of the system that causes the image distortion and η(x,y) is additive noise.
• When there is no blurring (H is the identity), the model reduces to
g(x,y) = f(x,y) + η(x,y), i.e., G(u,v) = F(u,v) + N(u,v)
Image Restoration
• Homomorphic Filter
• In some images, the quality has been reduced because of non-uniform illumination.
• Homomorphic filtering can be used to perform illumination correction.
 We can view an image f(x,y) as a product of two components:
 f(x,y) = i(x,y) · r(x,y), where
 i(x,y) = intensity of the incident light (illumination), and
 r(x,y) = reflectivity of the surface at the corresponding image point.
 The above equation is known as the illumination-reflectance model.
Image Restoration
• The illumination-reflectance model can be used to address the problem of improving the quality of an image that has been acquired under poor illumination conditions.
• For many images, the illumination is the primary contributor to the dynamic range and varies slowly in space, while the reflectance component r(x,y) represents the details of objects and varies rapidly in space.
Image Restoration
• To handle the illumination and reflectance components separately, the logarithm of the input function f(x,y) is taken, because f(x,y) is the product of i(x,y) and r(x,y). The log separates the components as illustrated below:
ln[f(x,y)] = ln[i(x,y) · r(x,y)]
ln[f(x,y)] = ln[i(x,y)] + ln[r(x,y)]
• Taking the Fourier transform of the above equation:
F(u,v) = FI(u,v) + FR(u,v)
where FI(u,v) and FR(u,v) are the Fourier transforms of the (log) illumination and reflectance components respectively.
Image Restoration
• The desired filter function H(u,v) then acts separately on the illumination and reflectance components:
F(u,v)·H(u,v) = FI(u,v)·H(u,v) + FR(u,v)·H(u,v)
• To visualize the image, the inverse Fourier transform is applied:
F⁻¹[F(u,v)·H(u,v)] = F⁻¹[FI(u,v)·H(u,v)] + F⁻¹[FR(u,v)·H(u,v)]
• The desired enhanced image is finally obtained by taking the exponential of the result.
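The whole pipeline can be sketched as follows; the Gaussian-based high-emphasis transfer function and the parameter values are common choices, not taken from the slides, and `homomorphic` is a hypothetical helper name:

```python
import numpy as np

def homomorphic(img, gamma_L=0.5, gamma_H=2.0, c=1.0, D0=30.0):
    """log -> FFT -> high-emphasis filter H(u,v) -> IFFT -> exp.
    gamma_L < 1 attenuates the slowly varying illumination,
    gamma_H > 1 boosts the rapidly varying reflectance detail."""
    M, N = img.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # Assumed filter form: a Gaussian-based high-emphasis transfer function
    H = (gamma_H - gamma_L) * (1.0 - np.exp(-c * D2 / D0 ** 2)) + gamma_L
    Z = np.fft.fftshift(np.fft.fft2(np.log1p(img.astype(float))))
    out = np.fft.ifft2(np.fft.ifftshift(H * Z)).real
    return np.expm1(out)          # undo the logarithm

rng = np.random.default_rng(0)
corrected = homomorphic(rng.random((16, 16)) + 0.1)
```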
Inverse Filter
From the degradation model:
G(u,v) = F(u,v) H(u,v) + N(u,v)
After we obtain H(u,v), we can estimate F(u,v) by the inverse filter:
F̂(u,v) = G(u,v) / H(u,v) = F(u,v) + N(u,v) / H(u,v)
The noise term is enhanced when H(u,v) is small, so in practice the inverse filter is not popularly used.
Inverse Filter: Example
Degradation function (atmospheric turbulence model):
H(u,v) = e^(-0.0025 (u² + v²)^(5/6))
(Figures: original image; image blurred by turbulence; result of applying the full inverse filter; results of applying the filter truncated at D0 = 40, 70 and 85.)
Wiener Filter: Minimum Mean Square Error Filter
Objective: minimize the mean square error
e² = E{ (f − f̂)² }
Wiener filter formula:
F̂(u,v) = [ H*(u,v) / ( |H(u,v)|² + Sη(u,v)/Sf(u,v) ) ] G(u,v)
        = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + Sη(u,v)/Sf(u,v) ) ] G(u,v)
where
H(u,v) = degradation function, H*(u,v) = its complex conjugate, |H(u,v)|² = H*(u,v) H(u,v)
Sη(u,v) = power spectrum of the noise
Sf(u,v) = power spectrum of the undegraded image
Approximation of Wiener Filter
The ratio Sη(u,v)/Sf(u,v) is difficult to estimate, so it is replaced by a constant K:
F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + K ) ] G(u,v)
Practically, K is chosen manually to obtain the best visual result!
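The approximated formula is a single elementwise expression in the frequency domain; note that (1/H)·|H|²/(|H|²+K) simplifies to H*/(|H|²+K). A minimal NumPy sketch (the blur transfer function here is synthetic, chosen strictly positive just for the demonstration):

```python
import numpy as np

def wiener_deconvolve(G, H, K=0.01):
    """F_hat = [1/H * |H|^2 / (|H|^2 + K)] G = H* G / (|H|^2 + K),
    applied elementwise in the frequency domain."""
    return np.conj(H) * G / (np.abs(H) ** 2 + K)

# Synthetic check: blur a random image with a known, nowhere-zero H,
# then restore it with a small K.
rng = np.random.default_rng(1)
f = rng.random((32, 32))
u = np.arange(32)[:, None]
v = np.arange(32)[None, :]
H = 1.0 / (1.0 + 0.01 * (u ** 2 + v ** 2))      # mild, strictly positive blur
G = np.fft.fft2(f) * H
f_hat = np.fft.ifft2(wiener_deconvolve(G, H, K=1e-9)).real
```

With noise present, a larger K trades restoration sharpness for noise suppression.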
Wiener Filter: Example
(Figures: original image; image blurred by turbulence; result of the inverse filter with D0 = 70; result of the Wiener filter.)
Wiener Filter
• It is better than the inverse filter.
• It incorporates both the degradation function and the statistical characteristics of the noise (mean, spectrum, etc.) into the restoration process.
• Here we consider the image and the noise as random functions.
• The objective is to find an estimate f̂ of the uncorrupted image f such that the mean square error between them is minimized.
• Assumptions:
• The image and the noise are uncorrelated.
• One of them has zero mean.
• The gray levels in the estimate are a linear function of the degraded image.
Image Compression
(Figures on slides 195–207.)
Huffman Coding
• Example source: symbols (a2, a1, a3, a5, a4) with probabilities (0.25, 0.25, 0.2, 0.15, 0.15).
• Decoding example: 010100111100 = a3 a1 a2 a2 a6
• Arithmetic coding exercise: consider a five-symbol sequence {a1, a2, a3, a3, a4} from a four-symbol source. Generate the arithmetic code for the same.
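A Huffman code for the five-symbol source listed above can be built with a small sketch (the heap entries carry a tie-breaking counter so that dictionaries are never compared):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for a {symbol: probability} source by
    repeatedly merging the two least-probable nodes."""
    heap = [[p, i, {s: ''}] for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for s in lo[2]:
            lo[2][s] = '0' + lo[2][s]   # prepend a bit on each merge
        for s in hi[2]:
            hi[2][s] = '1' + hi[2][s]
        heapq.heappush(heap, [lo[0] + hi[0], count, {**lo[2], **hi[2]}])
        count += 1
    return heap[0][2]

probs = {'a1': 0.25, 'a2': 0.25, 'a3': 0.2, 'a4': 0.15, 'a5': 0.15}
codes = huffman_code(probs)
avg_len = sum(probs[s] * len(c) for s, c in codes.items())
print(round(avg_len, 2))   # 2.3 bits/symbol, close to the source entropy (~2.29 bits)
```

The resulting code is prefix-free, so a bit stream like the one decoded above can be parsed unambiguously left to right.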
Image Compression
(Figures on slides 212–224, including a block diagram with an entropy-encoder stage.)
Image Segmentation & Representation
OUTLINE
• Image segmentation
– e.g., edge-based, region-based
• Image representation (boundary representation)
– e.g., chain code, polygonal approximation
• Image description (boundary and regional descriptors)
Image Segmentation
• Segmentation is used to subdivide an image into its constituent parts or objects.
• This step determines the eventual success or failure of image analysis.
• Generally, segmentation is carried out only until the objects of interest are isolated, e.g., face detection.
• The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse.
Classification of the Segmentation techniques
Image segmentation techniques are based on either:
• Discontinuity — e.g., point detection, line detection, edge detection
• Similarity — e.g., thresholding, region growing, region splitting & merging
edge-based segmentation(1)
• There are three basic types of gray-level discontinuities in a digital image: points, lines, and edges.
• The most common way to look for discontinuities is to run a mask through the image.
• We say that a point, line, or edge has been detected at the location on which the mask is centered if |R| ≥ T, where T is a nonnegative threshold and the mask response is
R = w1·z1 + w2·z2 + … + w9·z9
(wi are the 3×3 mask coefficients and zi the gray levels under the mask).
edge-based segmentation(2)
• Point detection — a typical point detection mask:
-1 -1 -1
-1  8 -1
-1 -1 -1
• Line detection — a typical (horizontal) line detection mask:
-1 -1 -1
 2  2  2
-1 -1 -1
edge-based segmentation(3)
• Edge detection: gradient operation
∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
|∇f| = mag(∇f) = (Gx² + Gy²)^(1/2)   (gradient magnitude)
α(x,y) = tan⁻¹(Gy/Gx)                (gradient direction)
edge-based segmentation(4)
• Edge detection: Laplacian operation
∇²f = ∂²f/∂x² + ∂²f/∂y²
• Laplacian of a Gaussian (LoG), with r² = x² + y²:
∇²h(r) = −[(r² − σ²)/σ⁴] e^(−r²/2σ²)
Region Based Segmentation
Region Growing
• Region growing techniques start with one pixel of a potential region and try to grow it by adding adjacent pixels until the pixels being compared are too dissimilar.
• The first pixel selected can be just the first unlabeled pixel in the image, or a set of seed pixels can be chosen from the image.
• Usually a statistical test is used to decide which pixels can be added to a region.
• Region growing technique:
• Assign a seed point.
• Assign a threshold value (the allowed difference from the seed's pixel value).
• Compare the seed point with the pixel values around it, adding those within the threshold.
• E.g., with threshold < 3, the region grown from the seed contains exactly those pixels whose gray level differs from the seed by less than 3 (worked example shown as a figure on the slides).
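The procedure above can be sketched as a breadth-first search over 4-neighbours (a minimal illustration; `region_grow` is a hypothetical helper name):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, threshold):
    """Grow a region from `seed`: add 4-neighbours whose gray level differs
    from the seed pixel by less than `threshold`."""
    seed_val = img[seed]
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not region[nr, nc]
                    and abs(int(img[nr, nc]) - int(seed_val)) < threshold):
                region[nr, nc] = True
                q.append((nr, nc))
    return region

img = np.array([[1, 1, 8],
                [1, 2, 8],
                [2, 2, 9]])
print(region_grow(img, (0, 0), threshold=3).astype(int))  # marks the 1s and 2s only
```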
• Region Splitting and Merging (threshold <= 3):
• Split the image into equal parts (quadrants).
• If (maximum pixel value − minimum pixel value) in a region does not satisfy the threshold constraint, split that region again; adjacent regions that satisfy it are merged.
Boundary Representation
• Image regions (including segments) can be represented by either the border or the pixels of the region. These can be viewed as external or internal characteristics, respectively.
• Chain codes: represent a boundary by a connected sequence of directed straight-line segments (figures on the slides).
• Chain codes can be based on either 4-connectedness or 8-connectedness.
• The first difference of the chain code is obtained by counting, for each pair of adjacent code elements, the number of direction changes (in a counterclockwise direction) separating them.
– For example, the first difference of the 4-direction chain code 10103322 is 3133030.
• Assuming the code represents a closed path, rotation normalization can be achieved by circularly shifting the code so that the list of numbers forms the smallest possible integer.
• Size normalization can be achieved by adjusting the size of the resampling grid.
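The first difference (and the worked example 10103322 → 3133030) can be computed as:

```python
def first_difference(code, directions=4):
    """Count counterclockwise direction changes between adjacent elements
    of a chain code with the given number of directions."""
    return [(code[i + 1] - code[i]) % directions for i in range(len(code) - 1)]

print(first_difference([1, 0, 1, 0, 3, 3, 2, 2]))  # [3, 1, 3, 3, 0, 3, 0]
```

For a closed boundary, the wrap-around pair (last element, first element) is also included before applying rotation normalization.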
Boundary Representation
Polygonal Approximations
• Polygonal approximations: to represent a boundary by straight line
segments, and a closed path becomes a polygon.
• The number of straight line segments used determines the accuracy of the
approximation.
• Only the minimum required number of sides necessary to preserve the
needed shape information should be used (Minimum perimeter polygons).
• A larger number of sides will only add noise to the model.
Boundary Representation
Polygonal Approximations
• Minimum perimeter polygons: (Merging and splitting)
– Merging and splitting are often used together to ensure that
vertices appear where they would naturally in the boundary.
– A least squares criterion to a straight line is used to stop the
processing.
Hough Transform
• The Hough transform is a method for detecting lines or curves specified by a parametric function.
• If the parameters are p1, p2, …, pn, then the Hough procedure uses an n-dimensional accumulator array in which it accumulates votes for the correct parameters of the lines or curves found in the image.
• For a straight line y = mx + b, each image point votes for every (m, b) accumulator cell consistent with it; the cell with the most votes gives the detected line.
Q. Given 3 points, use the Hough transform to draw a line joining these points: (1,1), (2,2) & (3,3).
Q. Given 5 points, use the Hough transform to draw a line joining the points (1,4), (2,3), (3,1), (4,1), (5,0). (RTU-2016)
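For the first exercise, a brute-force (m, b) accumulator suffices (the candidate parameter ranges are arbitrary choices for illustration):

```python
import numpy as np

points = [(1, 1), (2, 2), (3, 3)]
m_vals = np.arange(-2, 3)           # candidate slopes
b_vals = np.arange(-5, 6)           # candidate intercepts
acc = np.zeros((len(m_vals), len(b_vals)), dtype=int)

for x, y in points:
    for i, m in enumerate(m_vals):
        b = y - m * x               # each point votes along a line in (m, b) space
        j = np.where(b_vals == b)[0]
        if j.size:
            acc[i, j[0]] += 1

i, j = np.unravel_index(acc.argmax(), acc.shape)
print(m_vals[i], b_vals[j])         # 1 0  -> the line y = x
```

All three points vote for (m, b) = (1, 0), so that cell collects 3 votes while every other cell gets at most one.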
Boundary Descriptors
• There are several simple geometric measures that can be useful for describing a boundary.
– The length of a boundary: the number of pixels along a boundary gives a rough approximation of its length.
– Curvature: the rate of change of slope.
• Measuring curvature accurately at a point of a digital boundary is difficult.
• The difference between the slopes of adjacent boundary segments is used as a descriptor of curvature at the point of intersection of the segments.
Boundary Descriptors
Shape Numbers
• The shape number of a boundary is defined as the first difference of smallest magnitude.
• The order n of a shape number is defined as the number of digits in its representation.
(Worked examples shown as figures on the slides.)
Boundary Descriptors
Fourier Descriptors
• This is a way of using the Fourier transform to
analyze the shape of a boundary.
– The x-y coordinates of the boundary are treated as the real
and imaginary parts of a complex number.
– Then the list of coordinates is Fourier transformed using
the DFT (chapter 4).
– The Fourier coefficients are called the Fourier descriptors.
– The basic shape of the region is determined by the first
several coefficients, which represent lower frequencies.
– Higher frequency terms provide information on the fine
detail of the boundary.
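A minimal NumPy sketch: the boundary of a small square encoded as complex numbers, transformed, and reconstructed:

```python
import numpy as np

# Boundary coordinates (x, y) of a small square, encoded as x + iy
boundary = np.array([0 + 0j, 1 + 0j, 2 + 0j, 2 + 1j,
                     2 + 2j, 1 + 2j, 0 + 2j, 0 + 1j])
descriptors = np.fft.fft(boundary)        # the Fourier descriptors

# Keeping all coefficients, the inverse DFT recovers the boundary exactly;
# zeroing the high-frequency ones instead would yield a smoothed shape.
recon = np.fft.ifft(descriptors)
print(np.allclose(recon, boundary))       # True
```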
Regional Descriptors
• Some simple descriptors
– The area of a region: the number of pixels in the
region
– The perimeter of a region: the length of its
boundary
– The compactness of a region: (perimeter)2/area
– The mean and median of the gray levels
– The minimum and maximum gray-level values
– The number of pixels with values above and below
the mean
Regional Descriptors
Example
Regional Descriptors
Topological Descriptors
Topological property 1:
the number of holes (H)
Topological property 2:
the number of connected
components (C)
Regional Descriptors
Topological Descriptors
Topological property 3:
Euler number: the number of connected components minus the number of holes,
E = C − H
(e.g., a region with one connected component and one hole has E = 0; with one component and two holes, E = −1)
Topological property 4: the largest connected component.

More Related Content

Similar to DIP-CHAPTERs

chAPTER1CV.pptx is abouter computer vision in artificial intelligence
chAPTER1CV.pptx is abouter computer vision in artificial intelligencechAPTER1CV.pptx is abouter computer vision in artificial intelligence
chAPTER1CV.pptx is abouter computer vision in artificial intelligence
shesnasuneer
 
computervision1.pptx its about computer vision
computervision1.pptx its about computer visioncomputervision1.pptx its about computer vision
computervision1.pptx its about computer vision
shesnasuneer
 
Week06 bme429-cbir
Week06 bme429-cbirWeek06 bme429-cbir
Week06 bme429-cbir
Ikram Moalla
 
1 [Autosaved].pptx
1 [Autosaved].pptx1 [Autosaved].pptx
1 [Autosaved].pptx
SsdSsd5
 
Computer Vision - Image Formation.pdf
Computer Vision - Image Formation.pdfComputer Vision - Image Formation.pdf
Computer Vision - Image Formation.pdf
AmmarahMajeed
 

Similar to DIP-CHAPTERs (20)

matdid950092.pdf
matdid950092.pdfmatdid950092.pdf
matdid950092.pdf
 
DIP-Questions.pdf
DIP-Questions.pdfDIP-Questions.pdf
DIP-Questions.pdf
 
Ch2
Ch2Ch2
Ch2
 
chAPTER1CV.pptx is abouter computer vision in artificial intelligence
chAPTER1CV.pptx is abouter computer vision in artificial intelligencechAPTER1CV.pptx is abouter computer vision in artificial intelligence
chAPTER1CV.pptx is abouter computer vision in artificial intelligence
 
computervision1.pptx its about computer vision
computervision1.pptx its about computer visioncomputervision1.pptx its about computer vision
computervision1.pptx its about computer vision
 
Chap_1_Digital_Image_Fundamentals_DD (2).pdf
Chap_1_Digital_Image_Fundamentals_DD (2).pdfChap_1_Digital_Image_Fundamentals_DD (2).pdf
Chap_1_Digital_Image_Fundamentals_DD (2).pdf
 
IT6005 digital image processing question bank
IT6005   digital image processing question bankIT6005   digital image processing question bank
IT6005 digital image processing question bank
 
Week06 bme429-cbir
Week06 bme429-cbirWeek06 bme429-cbir
Week06 bme429-cbir
 
1 [Autosaved].pptx
1 [Autosaved].pptx1 [Autosaved].pptx
1 [Autosaved].pptx
 
Lec3: Pre-Processing Medical Images
Lec3: Pre-Processing Medical ImagesLec3: Pre-Processing Medical Images
Lec3: Pre-Processing Medical Images
 
PPT s12-machine vision-s2
PPT s12-machine vision-s2PPT s12-machine vision-s2
PPT s12-machine vision-s2
 
computervision1.pdf it is about computer vision
computervision1.pdf it is about computer visioncomputervision1.pdf it is about computer vision
computervision1.pdf it is about computer vision
 
Chapter 1 and 2 gonzalez and woods
Chapter 1 and 2 gonzalez and woodsChapter 1 and 2 gonzalez and woods
Chapter 1 and 2 gonzalez and woods
 
IMAGE RETRIEVAL USING QUADRATIC DISTANCE BASED ON COLOR FEATURE AND PYRAMID S...
IMAGE RETRIEVAL USING QUADRATIC DISTANCE BASED ON COLOR FEATURE AND PYRAMID S...IMAGE RETRIEVAL USING QUADRATIC DISTANCE BASED ON COLOR FEATURE AND PYRAMID S...
IMAGE RETRIEVAL USING QUADRATIC DISTANCE BASED ON COLOR FEATURE AND PYRAMID S...
 
It 603
It 603It 603
It 603
 
It 603
It 603It 603
It 603
 
Shadow Detection Using MatLAB
Shadow Detection Using MatLABShadow Detection Using MatLAB
Shadow Detection Using MatLAB
 
It 603
It 603It 603
It 603
 
Computer Vision - Image Formation.pdf
Computer Vision - Image Formation.pdfComputer Vision - Image Formation.pdf
Computer Vision - Image Formation.pdf
 
Extended fuzzy c means clustering algorithm in segmentation of noisy images
Extended fuzzy c means clustering algorithm in segmentation of noisy imagesExtended fuzzy c means clustering algorithm in segmentation of noisy images
Extended fuzzy c means clustering algorithm in segmentation of noisy images
 

Recently uploaded

SPLICE Working Group: Reusable Code Examples
SPLICE Working Group:Reusable Code ExamplesSPLICE Working Group:Reusable Code Examples
SPLICE Working Group: Reusable Code Examples
Peter Brusilovsky
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
EADTU
 

Recently uploaded (20)

FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdfFICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
 
UChicago CMSC 23320 - The Best Commit Messages of 2024
UChicago CMSC 23320 - The Best Commit Messages of 2024UChicago CMSC 23320 - The Best Commit Messages of 2024
UChicago CMSC 23320 - The Best Commit Messages of 2024
 
8 Tips for Effective Working Capital Management
8 Tips for Effective Working Capital Management8 Tips for Effective Working Capital Management
8 Tips for Effective Working Capital Management
 
Mattingly "AI & Prompt Design: Named Entity Recognition"
Mattingly "AI & Prompt Design: Named Entity Recognition"Mattingly "AI & Prompt Design: Named Entity Recognition"
Mattingly "AI & Prompt Design: Named Entity Recognition"
 
ANTI PARKISON DRUGS.pptx
ANTI         PARKISON          DRUGS.pptxANTI         PARKISON          DRUGS.pptx
ANTI PARKISON DRUGS.pptx
 
Major project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesMajor project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategies
 
How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17
 
Basic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of TransportBasic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of Transport
 
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
 
Book Review of Run For Your Life Powerpoint
Book Review of Run For Your Life PowerpointBook Review of Run For Your Life Powerpoint
Book Review of Run For Your Life Powerpoint
 
Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...
 
Including Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdfIncluding Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdf
 
The Story of Village Palampur Class 9 Free Study Material PDF
The Story of Village Palampur Class 9 Free Study Material PDFThe Story of Village Palampur Class 9 Free Study Material PDF
The Story of Village Palampur Class 9 Free Study Material PDF
 
SPLICE Working Group: Reusable Code Examples
SPLICE Working Group:Reusable Code ExamplesSPLICE Working Group:Reusable Code Examples
SPLICE Working Group: Reusable Code Examples
 
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.ppt
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
 
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptx
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptxAnalyzing and resolving a communication crisis in Dhaka textiles LTD.pptx
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptx
 
Supporting Newcomer Multilingual Learners
Supporting Newcomer  Multilingual LearnersSupporting Newcomer  Multilingual Learners
Supporting Newcomer Multilingual Learners
 
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
 

DIP-CHAPTERs

  • 2. Image Processing Books • Gonzalez, R. C. and Woods, R. E., "Digital Image Processing", Prentice Hall. • Jain, A. K., "Fundamentals of Digital Image Processing", PHI Learning, 1 st Ed. • Bernd, J., "Digital Image Processing", Springer, 6 th Ed. • Burger, W. and Burge, M. J., "Principles of Digital Image Processing", Springer • Scherzer, O., " Handbook of Mathematical Methods in Imaging", Springer Thursday, January 5, 2023 2
  • 3. Why we need Image Processing? • Improvement of pictorial information for human perception • Image processing for autonomus machine applications • Efficient storage and transmission Thursday, January 5, 2023 3
  • 4. What is digital image processing? • An image may be defined as a two dimensional function f(x,y), where ‘x’ and ‘y’ are spatial(plane) coordinates and the amplitude of ‘f’ at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point. • When x,y, and the amplitude values of ‘f’ are all finite, descrete quantities we call the image a digital image. • The field of digital image processing refers to processing digital images by means of digital computers. Thursday, January 5, 2023 4
  • 5. What is digital image processing? (Cont…) Thursday, January 5, 2023 5
  • 6. Image Processing Applications • Automobile driver assistance – Lane departure warning – Adaptive cruise control – Obstacle warning • Digital Photography – Image enhancement – Compression – Color manipulation – Image editing – Digital cameras • Sports analysis – Sports refereeing and commentary – 3D visualization and tracking of sports actions
  • 7. Image Processing Applications (Cont…) • Film and Video – Editing – Special effects • Image Database – Content-based image retrieval – Visual search of products – Face recognition • Industrial Automation and Inspection – Vision-guided robotics – Inspection systems • Medical and Biomedical – Surgical assistance – Sensor fusion – Vision-based diagnosis • Astronomy – Astronomical image enhancement – Chemical/spectral analysis
  • 8. Image Processing Applications (Cont...) • Aerial Photography – Image enhancement – Missile guidance – Geological mapping • Robotics – Autonomous vehicles • Security and Safety – Biometric verification (face, iris) – Surveillance (fences, swimming pools) • Military – Tracking and localizing – Detection – Missile guidance • Traffic and Road Monitoring – Traffic monitoring – Adaptive traffic lights
  • 9. Brief History of IP • In the 1920s, submarine cables were used to transmit digitized newspaper pictures between London & New York using the Bartlane cable picture transmission system. • Specialized printing equipment (e.g., telegraphic printers) was used to code the picture for cable transmission and reproduce it at the receiving end. • In 1921, the printing procedure was changed to photographic reproduction from tapes perforated at the telegraph receiving terminals. • This improved both tonal quality & resolution.
  • 10. Brief History of IP (Cont…)
  • 11. Brief History of IP (Cont…) • The Bartlane system was capable of coding 5 distinct brightness levels. This was increased to 15 levels by 1929. • Improvement of processing techniques continued for the next 35 years. • In 1964, computer processing techniques were used at the Jet Propulsion Laboratory to improve the pictures of the moon transmitted by Ranger 7. • This was the basis of modern image processing techniques.
  • 12. Image Processing Steps
  • 13. Components of IP System
  • 15. Image Sensing and Acquisition
  • 16. Image Sensing and Acquisition (Cont…) • Image acquisition using a single sensor
  • 17. Image Sensing and Acquisition (Cont…) • Using sensor strips
  • 18. Image Representation An image is a 2-D light intensity function f(x,y): f(x,y) = r(x,y) · i(x,y), where r(x,y) = reflectivity of the surface at the corresponding image point and i(x,y) = intensity of the incident light. A digital image f(x,y) is discretized both in spatial coordinates and in brightness. It can be considered as a matrix whose row and column indices specify a point in the image & whose element value gives the gray level at that point, known as a pixel or pel.
  • 19. Image Representation (Cont..) Image representation in matrix form:
  f(x, y) =
  | f(0,0)     f(0,1)     ...  f(0,N-1)   |
  | f(1,0)     f(1,1)     ...  f(1,N-1)   |
  | ...        ...        ...  ...        |
  | f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) |
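The matrix form above can be sketched directly in code; a minimal illustration (the 3 x 3 pixel values below are made up for the example):

```python
# A digital image f(x, y) stored as an M x N matrix of gray levels:
# row/column indices locate a point, the element value is its gray level (pixel).
image = [
    [ 12,  50,  90],   # f(0,0) f(0,1) f(0,2)
    [200, 255,  30],   # f(1,0) f(1,1) f(1,2)
    [  0, 128,  64],   # f(2,0) f(2,1) f(2,2)
]

M = len(image)       # number of rows
N = len(image[0])    # number of columns
pixel = image[1][2]  # gray level f(1, 2)
```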
  • 22. Image Representation (Cont..) f(x, y) = i(x, y) · r(x, y), where f(x, y): intensity at the point (x, y); i(x, y): illumination at the point (x, y) (the amount of source illumination incident on the scene); r(x, y): reflectance/transmissivity at the point (x, y) (the amount of illumination reflected/transmitted by the object); with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.
  • 23. Image Representation (Cont..) • By the theory of real numbers, between any two given points there are an infinite number of points. • By this theory, an image would have to be represented by an infinite number of points, and each such image point may take one of infinitely many possible intensity/color values, needing an infinite number of bits. Obviously such a representation is not possible in any digital computer.
  • 24. Image Sampling and Quantization • From the above slides we know that we need some other way to represent an image in digital format. • So we consider a discrete set of points known as a grid, and in each rectangular grid cell we consider the intensity of a particular point. This process is known as sampling. • Image representation by a 2-D finite matrix – Sampling. • Each matrix element represented by one of a finite set of discrete values – Quantization.
  • 25. Image Sampling and Quantization
  • 35. Colour Image Processing • Why do we need CIP when we get information from a black and white image itself? 1. Colour is a very powerful descriptor & using the colour information we can extract the objects of interest from an image very easily, which is not so easy in some cases using a black & white or simple gray level image. 2. Human eyes can distinguish between thousands of colours & colour shades, whereas in a black and white or gray scale image we can distinguish only a few dozen different gray levels.
  • 36. Colour Image Processing (Cont…) • The colour that humans perceive in an object = the light reflected from the object: illumination source → scene reflection → human eye.
  • 37. Colour Image Processing (Cont...) • In CIP there are 2 major areas: 1. FULL CIP: images acquired by a full colour TV camera or by a full colour scanner; here all the colours you perceive are present in the images. 2. PSEUDO CIP: the problem of assigning certain colours to ranges of gray levels. Pseudo CIP is mostly used for human interpretation, because it is very difficult to distinguish between two gray-level ranges whose intensity values are very near to each other.
  • 38. Colour Image Processing (Cont...) • Problem with CIP: interpretation of colour by the human eye is a psychophysiological problem, and we do not yet fully understand the mechanism by which we really interpret a colour.
  • 39. Colour Image Processing (Cont...) • In 1666 Isaac Newton discovered the colour spectrum using an optical prism.
  • 40. Colour Image Processing (Cont...) • We perceive the colour of an object depending on the nature of the light reflected by its surface. • It is the spectrum of light (spectrum of energy) in the visible range (400 nm to 700 nm) that allows us to perceive colour.
  • 41. Colour Image Processing (Cont...) • Attributes of Light: Achromatic Light: light which has no colour component, i.e., the only attribute which describes it is its intensity. Chromatic Light: contains a colour component. • 3 quantities that describe the quality of light: Radiance, Luminance, Brightness.
  • 42. Colour Image Processing (Cont...) • Radiance: total amount of energy which comes out of a light source (unit: watts). • Luminance: amount of energy that is perceived by an observer (unit: lumens). • Brightness: a subjective descriptor; practically we cannot measure brightness. We have 3 primary colours: Red, Green, Blue.
  • 43. Colour Image Processing (Cont...) • Newton discovered 7 different colours, but only 3 colours, i.e., red, green and blue, are the primary colours. Why? Because by mixing these 3 colours in some proportion we can get all other colours. There are around 6–7 million cone cells in our eyes which are responsible for colour sensation. Around 65% of cone cells are sensitive to red, around 33% to green, and around 2% to blue.
  • 44. Colour Image Processing (Cont...) • According to the CIE standard: Red has wavelength 700 nm, Green 546.1 nm, Blue 435.8 nm. But practically: Red is sensitive to 450 nm to 700 nm, Green to 400 nm to 650 nm, Blue to 400 nm to 550 nm.
  • 45. Colour Image Processing (Cont...) • Note: in practice no single wavelength can specify any particular colour. • In the spectrum, too, there are no clear-cut boundaries between any two colours. • One colour slowly and smoothly merges into another, i.e., there is no clear-cut boundary in the transitions between spectrum colours. • So we can say that bands of wavelengths give the red, green and blue colour sensations respectively.
  • 46. Colour Image Processing (Cont...) • Mixing the primary colours generates the secondary colours, i.e.,  RED + BLUE = Magenta  GREEN + BLUE = Cyan  RED + GREEN = Yellow • Here red, green and blue are the primary colours and magenta, cyan and yellow are the secondary colours. • Pigments: a primary colour of a pigment is defined as one which absorbs a primary colour of light and reflects the other wavelengths.
  • 47. Colour Image Processing (Cont...) • The primary colours of pigments are the secondary colours of light, i.e., magenta, cyan and yellow are the primary colours of pigments. • If we mix red, green and blue light in appropriate proportion we get white light, and similarly when we mix magenta, cyan and yellow pigments we get black.
  • 48. Colour Image Processing (Cont...) • For hardware, i.e., cameras, printers, display devices, scanners, this concept of primary colour components is used. • But when we humans perceive a colour we do not think about how much red, green and blue is mixed in that particular colour. • So the attributes by which humans differentiate, recognize or distinguish colours are: Brightness, Hue and Saturation.
  • 49. Colour Image Processing (Cont...) • Spectrum colours are not diluted, i.e., spectrum colours are fully saturated. It means no white light or white component is added to them. • Example: Pink is not a spectrum colour. Red + white = pink. Here red is fully saturated. • So, Hue + Saturation indicates the chromaticity of light and Brightness gives a sensation of intensity.
  • 50. Colour Image Processing (Cont...) • Brightness: achromatic notion of intensity. • Hue: represents the dominant wavelength present in a mixture of colours. • Saturation: e.g., when we say a colour is red, we may have various shades of red. Saturation indicates the purity of the red, i.e., how much white light has been mixed with that particular colour to dilute it.
  • 51. Colour Image Processing (Cont...) • The amounts of red, green and blue needed to form any particular colour are known as the tristimulus values, (X, Y, Z). • Chromatic (trichromatic) coefficients: for red x = X/(X+Y+Z), for green y = Y/(X+Y+Z), for blue z = Z/(X+Y+Z). • Here x + y + z = 1. • So any colour can be specified by its chromatic coefficients, or by a point in a chromaticity diagram.
  • 52. Colour Image Processing (Cont...) • Here z = 1 − (x + y). In the chromaticity diagram, around the boundary we have all the spectrum colours, and the point of equal energy is white.
  • 53. Colour Image Processing (Cont...) • Colour Models: a coordinate system within which a specified colour is represented by a single point. • RGB, CMY, CMYK: hardware oriented. • HSI (Hue, Saturation and Intensity): application oriented / perception oriented. • In the HSI model, the I part gives the gray scale information; H & S taken together give the chromatic information.
  • 54. Colour Image Processing (Cont...) • RGB Colour Model: here a colour is represented by 3 primary colour components, i.e., red, green and blue. • In the 24-bit RGB colour model we can have 2^24 (about 16.7 million) different colour combinations, although in practice a subset of 216 "safe" colours is often used for reliable reproduction. • The RGB colour model is based on a Cartesian coordinate system. • This is an additive colour model. • Active displays, such as computer monitors and television sets, emit combinations of red, green and blue light.
  • 55. Colour Image Processing (Cont...) • RGB Colour Model
  • 56. Colour Image Processing (Cont...) • RGB Colour Model • The RGB 24-bit colour cube is shown below
  • 57. Colour Image Processing (Cont...) • RGB example: (figure: original image and its red, green and blue bands)
  • 58. Colour Image Processing (Cont...) • CMY Colour Model: the secondary colours of light, or primary colours of pigments; used to generate hardcopy output. Passive displays, such as colour inkjet printers, absorb light instead of emitting it. Combinations of cyan, magenta and yellow inks are used. This is a subtractive colour model. Source: www.hp.com
  • 59. Colour Image Processing (Cont...) • Equal proportions of CMY give a muddy black colour, i.e., not a pure black. So, to get pure black with CMY, another component is specified, known as the black component, giving the CMYK model. • In CMYK, "K" is the black component. • Conversion: [C, M, Y] = [1, 1, 1] − [R, G, B] (with R, G, B normalized to [0, 1]).
  • 60. Colour Image Processing (Cont...) • HSI Colour Model (based on human perception of colours) • H = the dominant colour present in a particular colour; a subjective measure of colour. • S = how much a pure spectrum colour is diluted by mixing white into it, i.e., mixing more "white" with a colour reduces its saturation. If we mix white in different proportions with a colour we get different shades of that colour. • I = the achromatic notion of brightness, as in a black and white image, i.e., the brightness or darkness of an object.
  • 61. Colour Image Processing (Cont...) • HSI Colour Model: H – dominant wavelength; S – purity (% white); I – intensity
  • 62. Colour Image Processing (Cont...) • HSI Colour Model: RGB -> HSI model
  • 63. Colour Image Processing (Cont...) • HSI Colour Model
  • 64. Colour Image Processing (Cont...) • Pseudo-colour Image Processing: assign colours to gray values based on a specified criterion, for human visualization and interpretation of gray-scale events. Two approaches: intensity slicing, and gray level to colour transformations.
  • 65. Colour Image Processing (Cont...) • Pseudo-colour Image Processing (cont…) Intensity slicing  First consider an intensity image to be a 3-D surface.  Place a plane parallel to the XY plane (it slices the intensity surface into two parts).  We can assign a different colour on each side of the plane, i.e., any pixel whose intensity level is above the plane is coded with one colour and any pixel below the plane is coded with the other.  Levels that lie on the plane itself may be arbitrarily assigned one of the two colours.
  • 66. Colour Image Processing (Cont...) Intensity slicing  Geometric interpretation of the intensity slicing technique
  • 67. Colour Image Processing (Cont...) Intensity slicing  Let there be a total of 'L' intensity values: 0 to (L−1).  L0 corresponds to black [f(x, y) = 0].  LL−1 corresponds to white [f(x, y) = L−1].  Suppose 'P' planes perpendicular to the intensity axis (i.e., parallel to the image plane) are placed at the intensity values L1, L2, L3, …, LP, where 0 < P < L−1.
  • 68. Colour Image Processing (Cont...) • Intensity slicing  The P planes partition the gray scale (intensity) into (P+1) intervals, V1, V2, V3, …, VP+1.  The colour assigned to location (x, y) is given by the relation f(x, y) = Ck if f(x, y) ∈ Vk
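The slicing rule above can be sketched in a few lines; the plane positions and colour names below are arbitrary choices for illustration, not values from the slides:

```python
def intensity_slice(gray, planes, colors):
    """Assign colour C_k to a gray level that falls in interval V_k.

    `planes` are the P slicing levels L1 < L2 < ... < LP;
    `colors` are the P + 1 colours C_1 .. C_(P+1).
    A level lying exactly on a plane gets the lower interval's colour
    (one of the two arbitrary options the slides allow)."""
    k = sum(1 for p in planes if gray > p)  # index of the interval containing gray
    return colors[k]

planes = [85, 170]                  # two planes -> three intervals over [0, 255]
colors = ["blue", "green", "red"]   # hypothetical colour assignment
```

For example, `intensity_slice(40, planes, colors)` falls in V1 and returns "blue".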
  • 69. Colour Image Processing (Cont...) • Intensity slicing  Give the ROI (region of interest) one colour and the rest another colour  Keep the ROI as it is and assign the rest one colour  Keep the rest as it is and give the ROI one colour
  • 70. Colour Image Processing (Cont...) • Pseudo-colouring is also used for gray to colour image transformation. • Gray level to colour transformation
  • 71. Colour Image Processing (Cont...) • Gray level to colour transformation: fR(x,y) = f(x,y), fG(x,y) = 0.33 f(x,y), fB(x,y) = 0.11 f(x,y)  Combining these 3 planes we get the pseudo-colour image.  Application of Pseudo CIP: baggage-inspection machines used at railways and airports.
  • 72. Image Enhancement • Intensity Transformation Functions • Enhancing an image provides better contrast and a more detailed image as compared to the non-enhanced image. Image enhancement has many applications: it is used to enhance medical images, images captured in remote sensing, images from satellites, etc. • The transformation function is given below: s = T(r) • where r is a pixel value of the input image and s is the corresponding pixel value of the output image. T is a transformation function that maps each value of r to a value of s.
  • 73. Image Enhancement (Cont…) • Image enhancement can be done through the gray level transformations discussed below. • There are three basic gray level transformations: • Linear • Logarithmic • Power-law
  • 74. Image Enhancement (Cont…) • Linear Transformation  Linear transformation includes the simple identity and negative transformations.  The identity transformation is shown by a straight line. In this transformation, each value of the input image is directly mapped to the same value in the output image, so the output image is identical to the input; hence it is called the identity transformation. • Negative Transformation  The second linear transformation is the negative transformation, which is the inverse of the identity transformation. In the negative transformation, each value of the input image is subtracted from L−1 and mapped onto the output image.
  • 76. Image Enhancement (Cont…) • Negative Transformation s = (L − 1) − r; for an 8-bit image, s = 255 − r
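The negative transformation is a one-liner; a minimal sketch for an 8-bit image (L = 256):

```python
L = 256  # number of gray levels in an 8-bit image

def negative(r):
    """s = (L - 1) - r: dark pixels become bright and vice versa."""
    return (L - 1) - r

def negative_image(image):
    """Apply the negative transformation to an image stored as a list of rows."""
    return [[negative(r) for r in row] for row in image]
```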
  • 77. Image Enhancement (Cont…) • Logarithmic Transformations  The log transformation can be defined by the formula s = c log(r + 1), where s and r are the pixel values of the output and input image and c is a constant. The value 1 is added to each pixel value of the input image because if there is a pixel intensity of 0 in the image, log(0) is undefined (it tends to −∞). So 1 is added to make the minimum argument at least 1.
  • 78. Image Enhancement (Cont…) • Logarithmic Transformations  The log transformation expands the dynamic range of the dark (low-intensity) pixels while compressing the range of the bright ones, i.e., the intensities of the dark pixels, where we require more information, are increased. In a Fourier spectrum the largest values are concentrated at the centre, which is why  the log transformation is mainly applied in the frequency domain.
  • 79. Image Enhancement (Cont…) • Logarithmic Transformation
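A sketch of the log transformation; choosing c so that the maximum input level maps to the maximum output level is a common convention, assumed here rather than stated on the slides:

```python
import math

L = 256
c = (L - 1) / math.log(1 + (L - 1))  # scale so r = 255 maps to s = 255

def log_transform(r):
    """s = c * log(1 + r): expands dark levels, compresses bright ones."""
    return round(c * math.log(1 + r))
```

Note how mid-range dark values are pushed up: `log_transform(50)` lands well above 50, while values near 255 barely move.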
  • 80. Image Enhancement (Cont…) • Power-Law Transformations • s = c·r^γ, with c, γ positive constants. The symbol γ is called gamma, due to which this transformation is also known as the gamma transformation. The curve maps the grayscale components either to brighten the intensity (when γ < 1) or darken the intensity (when γ > 1).
  • 81. Image Enhancement (Cont…) • Power-Law Transformations
  • 82. Image Enhancement (Cont…) • Power-Law Transformations • Variation in the value of γ varies the enhancement of the image. Different display devices / monitors have their own gamma correction; that is why they display images at different intensities. • This type of transformation is used for enhancing images for different types of display devices, whose gammas differ. For example, the gamma of a CRT lies between 1.8 and 2.5, which means the image displayed on a CRT is dark.
  • 83. Image Enhancement (Cont…) • Power-Law Transformations  Gamma Correction  Different cameras or video recording devices do not capture luminance correctly (they are not linear), and different display devices (monitor, phone screen, TV) do not display luminance correctly either. So one needs to correct them; this is what the gamma correction function does. Gamma correction is used to correct an image's luminance: s = c·r^γ, e.g., s = c·r^(1/2.5) to pre-compensate a display with γ = 2.5.
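Gamma correction can be sketched on normalized intensities (c = 1 here; the default γ = 1/2.5 pre-compensates the hypothetical display gamma of 2.5 mentioned above):

```python
def gamma_correct(r, gamma=1 / 2.5, c=1.0, L=256):
    """s = c * r^gamma, computed on intensities normalized to [0, 1].
    gamma < 1 brightens mid-tones; gamma > 1 darkens them."""
    norm = r / (L - 1)
    return round(c * (norm ** gamma) * (L - 1))
```

The endpoints 0 and L−1 are fixed points of the curve; only the mid-tones move.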
  • 86. Image Enhancement (Cont…) • Piecewise-Linear Transformation Functions  Three types:  Contrast Stretching  Intensity Level Slicing  Bit-Plane Slicing
  • 87. Image Enhancement (Cont…) • Contrast stretching  Aims to increase the dynamic range of the gray levels in the image being processed.  Contrast stretching is a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.  Contrast-stretching transformations increase the contrast between the darks and the lights.
  • 88. Image Enhancement (Cont…) • Contrast stretching
  • 89. Image Enhancement (Cont…) • Contrast stretching  The locations of (r1,s1) and (r2,s2) control the shape of the transformation function. – If r1 = s1 and r2 = s2 the transformation is a linear function and produces no changes. – If r1 = r2, s1 = 0 and s2 = L−1, the transformation becomes a thresholding function that creates a binary image. – Intermediate values of (r1,s1) and (r2,s2) produce various degrees of spread in the gray levels of the output image, thus affecting its contrast. – Generally, r1 ≤ r2 and s1 ≤ s2 is assumed.
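The piecewise-linear stretch through (r1,s1) and (r2,s2) can be sketched as below; this sketch assumes r1 < r2, so it excludes the thresholding special case r1 = r2:

```python
def contrast_stretch(r, r1, s1, r2, s2, L=256):
    """Piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1)."""
    if r < r1:
        return s1 * r / r1 if r1 > 0 else 0   # first segment
    if r <= r2:
        return s1 + (s2 - s1) * (r - r1) / (r2 - r1)  # middle segment
    return s2 + (L - 1 - s2) * (r - r2) / (L - 1 - r2)  # last segment
```

With, say, (r1,s1) = (100,50) and (r2,s2) = (150,200), the narrow input band [100,150] is spread over the wide output band [50,200], increasing contrast there.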
  • 90. Image Enhancement (Cont…) Thresholding function
  • 91. Image Enhancement (Cont…) • Intensity-level slicing  Highlighting a specific range of gray levels in an image.  One way is to display a high value for all gray levels in the range of interest and a low value for all other gray levels (binary image).  The second approach is to brighten the desired range of gray levels but preserve the background and gray-level tonalities of the image.
  • 92. Image Enhancement (Cont…) • Intensity Level Slicing
  • 93. Image Enhancement (Cont…) • Bit-Plane Slicing • To highlight the contribution made to the total image appearance by specific bits. – i.e., assuming that each pixel is represented by 8 bits, the image is composed of eight 1-bit planes. – Plane 0 contains the least significant bit and plane 7 contains the most significant bit. – Only the higher order bits (top four) contain visually significant data; the other bit planes contribute the more subtle details. – Plane 7 corresponds exactly to an image thresholded at gray level 128.
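Bit-plane extraction is a couple of bit operations; a minimal sketch (the 2 x 2 test image is made up):

```python
def bit_plane(image, plane):
    """Extract the 1-bit plane `plane` (0 = LSB .. 7 = MSB) of an 8-bit image."""
    return [[(pix >> plane) & 1 for pix in row] for row in image]

img = [[200,  15],
       [128, 255]]
```

Plane 7 of `img` is 1 exactly where the pixel is >= 128, i.e., the image thresholded at gray level 128.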
  • 94. Image Enhancement (Cont…) • Bit-Plane Slicing
  • 95. Image Enhancement (Cont…) • Histogram Processing  Two types: (a) Histogram stretching (b) Histogram equalization  Histogram Stretching  Contrast is the difference between the maximum and minimum pixel intensity.  A histogram is a pictorial view representing the distribution of pixels, i.e., the frequency of each pixel value.
  • 96. Image Enhancement (Cont…) • The histogram of a digital image with gray values r0, r1, …, rL−1 is the discrete function p(rk) = nk / n, where nk is the number of pixels with gray value rk and n is the total number of pixels in the image. The function p(rk) represents the fraction of the total number of pixels with gray value rk. The shape of a histogram provides useful information for contrast enhancement.
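Computing the normalized histogram p(r_k) = n_k / n is direct; a small sketch:

```python
from collections import Counter

def normalized_histogram(image, L):
    """Return [p(r_0), ..., p(r_{L-1})] with p(r_k) = n_k / n."""
    pixels = [p for row in image for p in row]  # flatten the image
    n = len(pixels)                             # total number of pixels
    counts = Counter(pixels)                    # n_k for each gray value
    return [counts.get(k, 0) / n for k in range(L)]
```

By construction the entries sum to 1, since every pixel is counted exactly once.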
  • 97. Image Enhancement (Cont…) • Histogram Processing: dark image vs. bright image
  • 98. Image Enhancement (Cont…) • Histogram Processing: low contrast image vs. high contrast image
  • 99. Image Enhancement (Cont…) • Histogram Stretching
  • 100. Image Enhancement (Cont…) • Histogram Stretching (cont…) • In the above example smin = 0 and smax = 8, and rmin = 0, rmax = 4 are given. • s − 0 = ((8 − 0) / (4 − 0)) · (r − 0) • s = (8/4) r • s = 2r • Now we have a relation between r and s, so we can get the different values of 's' for the given rmin to rmax.
  • 101. Image Enhancement (Cont…) • Histogram Stretching (cont…)
  • 102. Image Enhancement (Cont…) • Histogram Equalization – Remap the picture's gray levels to make the distribution more nearly uniform – Used widely in image editing tools and computer vision algorithms – Can also be applied to colour images
  • 103. Objective of histogram equalization • We want to find a transformation s = T(r) so that the output PDF ps(s) is a flat (uniform) line over [0, L−1]: input gray levels r with an arbitrary distribution pr(r), where some levels are more probable than others, are mapped to output levels s that all occur with the same probability. (Figure: input histogram pr(r), transformation curve T(r), and the equalized output histogram ps(s).)
  • 104. We want to prove ps(s) = constant • Basic probability theory: ps(s) ds = pr(r) dr, so ps(s) = pr(r) |dr/ds| … (1) • Differentiating both sides of s = T(r) gives ds/dr … (2) • Fundamental theorem of calculus: d/dx ∫a..x f(t) dt = f(x) … (3) • Since s = T(r) = (L−1) ∫0..r pr(w) dw, from (3): ds/dr = dT(r)/dr = (L−1) pr(r) • Exercise: continue with formulas (1), (2) and (3) to show ps(s) = 1/(L−1), a constant.
  • 105. Image Enhancement (Cont…) • Histogram Equalization • Let rk, k ∈ [0 .. L−1] be the intensity levels and let p(rk) be the normalized histogram function. • Histogram equalization applies the transformation s = T(r), where r ranges over 0 to L−1. • As T(r) is continuous & differentiable, ∫ ps(s) ds = ∫ pr(r) dr = 1; differentiating w.r.t. 's' we get:
  • 106. Image Enhancement (Cont…) • Histogram Equalization (cont…)  So, eq. (1): ps(s) = pr(r) (dr/ds)  The transformation function T(r) for histogram equalization is: s = T(r) = (L−1) ∫0..r pr(w) dw  Differentiating w.r.t. 'r': ds/dr = dT(r)/dr = (L−1) d/dr ∫0..r pr(w) dw = (L−1) pr(r)  From eq. (1) we get ps(s) = pr(r) / [(L−1) pr(r)] = 1/(L−1), which is a constant.
  • 107. Histogram Equalization: discrete form for practical use • From the continuous form s = T(r) = (L−1) ∫0..r pr(w) dw to the discrete form: sk = T(rk) = (L−1) Σj=0..k pr(rj), k = 0, 1, 2, …, L−1. • Recall that to obtain a normalized histogram we set pr(rk) = nk / (MN), so sk = ((L−1)/(MN)) Σj=0..k nj.
  • 108. Histogram Equalization – Example • Let f be an image of size 64 x 64 pixels with L = 8, and let f have the intensity distribution shown in the table: rk = 0, 1, 2, 3, 4, 5, 6, 7; nk = 790, 1023, 850, 656, 329, 245, 122, 81; pr(rk) = nk/MN = 0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02. • s0 = 7 pr(r0) = 7 × 0.19 = 1.33; s1 = 7 (pr(r0) + pr(r1)) = 7 × 0.44 = 3.08; similarly s2 = 4.55, s3 = 5.67, s4 = 6.23, s5 = 6.65, s6 = 6.86, s7 = 7.00. • Round the values to the nearest integer: 1, 3, 5, 6, 6, 7, 7, 7.
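The example can be checked in a few lines using the slide's distribution:

```python
L = 8
p = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]  # p_r(r_k) from the table

# s_k = round((L - 1) * sum_{j <= k} p_r(r_j))
s, cum = [], 0.0
for pk in p:
    cum += pk                      # cumulative distribution up to r_k
    s.append(round((L - 1) * cum))
```

This reproduces the rounded mapping 0→1, 1→3, 2→5, 3→6, 4→6, 5→7, 6→7, 7→7 from the slide.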
  • 110. Histogram Equalization – Example
  • 111. Filtering • Image filtering is used to:  Remove noise  Sharpen contrast  Highlight contours  Detect edges  Image filters can be classified as linear or nonlinear.  Linear filters are also known as convolution filters as they can be represented using a matrix multiplication.  Thresholding and image equalisation are examples of nonlinear operations, as is the median filter.
  • 112. Filtering (cont…) • There are two types of processing: • Point processing (e.g., histogram equalization) • Mask processing  Two types of filtering methods: • Smoothing: linear (average filter) and non-linear (median filter) • Sharpening: Laplacian, Gradient
  • 113. Filtering (Cont…) (figure: input image, mask and output image)
  • 114. Filtering (Cont…) • Correlation [1-D & 2-D] • Convolution [1-D & 2-D] • In correlation we apply the weights directly to get the output image; for convolution we first rotate the weight mask 180 degrees. • E.g., 1-D weight [1 2 3] → after 180-degree rotation: [3 2 1]; 2-D weight [[1 2 3], [4 5 6], [7 8 9]] → after 180-degree rotation: [[9 8 7], [6 5 4], [3 2 1]].
  • 115. Filtering (Cont…) • 1-D Correlation • I = [1 2 3 4], W = [1 2 3] (W centred on each pixel, zero padding at the borders) • Output = [(2·1)+(3·2), (1·1)+(2·2)+(3·3), (1·2)+(2·3)+(3·4), (1·3)+(2·4)] = [8, 14, 20, 11] • For convolution, just rotate the mask 180 degrees.
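A sketch of 1-D correlation and convolution with zero padding, reproducing the example above:

```python
def correlate1d(signal, w):
    """1-D correlation: slide w over the signal (zero-padded at the borders)."""
    k = len(w) // 2
    padded = [0] * k + signal + [0] * k
    return [sum(w[j] * padded[i + j] for j in range(len(w)))
            for i in range(len(signal))]

def convolve1d(signal, w):
    """Convolution = correlation with the mask rotated 180 degrees."""
    return correlate1d(signal, w[::-1])
```

`correlate1d([1, 2, 3, 4], [1, 2, 3])` gives [8, 14, 20, 11], matching the worked output on the slide.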
  • 116. Filtering (Cont…) • A filtering method is linear when the output is a weighted sum of the input pixels (e.g., the average filter). • Methods that do not satisfy this property are called non-linear (e.g., the median filter). • Average (or mean) filtering is a method of 'smoothing' images by reducing the amount of intensity variation between neighbouring pixels. • The average filter works by moving through the image pixel by pixel, replacing each value with the average value of the neighbouring pixels, including itself.
  • 117. Filtering (Cont…) • Average filter mask (2-D):
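A 3 x 3 average filter sketch; leaving border pixels unchanged is one of several common border strategies, chosen here for brevity:

```python
def mean_filter3(image):
    """Replace each interior pixel with the average of its 3 x 3 neighbourhood."""
    M, N = len(image), len(image[0])
    out = [row[:] for row in image]          # border pixels copied unchanged
    for i in range(1, M - 1):
        for j in range(1, N - 1):
            window = [image[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sum(window) // 9     # integer gray levels
    return out
```

Note how a single outlier pixel is spread over its neighbourhood rather than removed, which is the blurring effect discussed on the next slide.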
  • 123. Filtering (Cont…) • When we apply the average filter, noise is removed but blurring is introduced; to reduce the blurring we use a weighted average filter.
  • 124. Filtering (Cont…) • Median Filter (non-linear filter) • Very effective at removing salt-and-pepper (impulsive) noise while preserving image detail • Disadvantages: computational complexity; being non-linear, it is harder to analyse • The median filter works by moving through the image pixel by pixel, replacing each value with the median value of the neighbouring pixels. • The pattern of neighbours is called the "window", which slides, pixel by pixel, over the entire image. • The median is calculated by first sorting all the pixel values from the window into numerical order, and then replacing the pixel being considered with the middle (median) value.
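A 3 x 3 median filter sketch (border pixels are simply copied for brevity):

```python
import statistics

def median_filter3(image):
    """Replace each interior pixel with the median of its 3 x 3 window."""
    M, N = len(image), len(image[0])
    out = [row[:] for row in image]   # border pixels copied unchanged
    for i in range(1, M - 1):
        for j in range(1, N - 1):
            window = [image[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = statistics.median(window)
    return out
```

A single "salt" pixel of 255 in a flat region of 10s is replaced by 10, since the outlier sorts to the end of the window and never reaches the middle position.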
  • 125. Filtering (Cont…) • Median Filter Example:
  • 130. Filtering (Cont…) • From left to right: the results of a 3 x 3, 5 x 5 and 7 x 7 median filter
  • 131. Filtering (Cont…)  Sharpening (high pass filtering) is performed by noting only the gray level changes in the image, that is, by differentiation. • Sharpening is used for edge detection, line detection and point detection, and it also highlights changes.  Operation of image differentiation: • Enhance edges and discontinuities (magnitude of output gray level >> 0) • De-emphasize areas with slowly varying gray-level values (output gray level ≈ 0)  Mathematical basis of filtering for image sharpening: • First-order and second-order derivatives • Approximation in the discrete-space domain • Implementation by mask filtering
  • 132. Filtering (Cont…)  Common sharpening filters: • Gradient (1st order derivative) • Laplacian (2nd order derivative) • Taking the derivative of an image results in sharpening the image. • The derivative of an image (i.e., a 2-D function) can be computed using the gradient.
  • 133. Filtering (Cont…)  Gradient (rotation variant, i.e., non-isotropic): one mask is sensitive to vertical edges, the other to horizontal edges.
  • 134. Filtering (Cont…)  Gradient: kernels used in Prewitt edge detection
  • 135. Filtering(Cont…) • Laplacian Thursday, January 5, 2023 135 Original Mask , C = +1 or C= -1
  • 136. Filtering (Cont…) • Laplacian (rotation invariant or isotropic) (b) Extended Laplacian mask: it also covers the diagonal neighbours, so it increases sharpness and provides better results.
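Laplacian sharpening, g = f + c·∇²f with the standard 4-neighbour mask, can be sketched as follows (the mask and the sign convention for c follow the slides; the flat test image is illustrative):

```python
import numpy as np

def laplacian_sharpen(img, c=-1):
    """Sharpen an image: g = f + c * laplacian(f).

    With this centre-negative 4-neighbour mask, c = -1 adds the
    edge detail back onto the original image."""
    lap = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]], dtype=float)
    f = img.astype(float)
    padded = np.pad(f, 1, mode="edge")
    out = np.zeros_like(f)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * lap)
    return np.clip(f + c * out, 0, 255)

flat = np.full((5, 5), 100.0)
sharpened = laplacian_sharpen(flat)
```

On a flat region the Laplacian response is zero, so the image passes through unchanged; only gray-level discontinuities are boosted.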
  • 137. Image Transforms • Many times, image processing tasks are best performed in a domain other than the spatial domain. • Key steps (1) Transform the image (2) Carry out the task(s) in the transformed domain. (3) Apply the inverse transform to return to the spatial domain.
  • 138. Math Review – Complex Numbers • Real numbers: 1, −5.2, etc. • Complex numbers: 4.2 + 3.7i, 9.4447 − 6.7i, −5.2 (= −5.2 + 0i), where i = √(−1) • In EE, i is often denoted j
  • 139. Math Review – Complex Numbers • Complex numbers: 4.2 + 3.7i, 9.4447 − 6.7i, −5.2 (= −5.2 + 0i) • General form: Z = a + bi, with real and imaginary parts Re(Z) = a, Im(Z) = b • Amplitude: A = |Z| = √(a² + b²) • Phase: φ = ∠Z = tan⁻¹(b/a)
  • 140. Math Review – Complex Numbers • Polar coordinates: Z = a + bi • Amplitude: A = √(a² + b²) • Phase: φ = tan⁻¹(b/a)
  • 141. Math Review – Complex Numbers and Cosine Waves • A cosine wave has three properties – Frequency – Amplitude – Phase • A complex number has two properties – Amplitude – Phase • Complex numbers can therefore represent cosine waves at varying frequencies – Frequency 1: Z1 = 5 + 2i – Frequency 2: Z2 = −3 + 4i – Frequency 3: Z3 = 1.3 − 1.6i Simple but great idea!
  • 142. Fourier Transforms & its Properties • Jean Baptiste Joseph Fourier (1768-1830) Thursday, January 5, 2023 142 • Had crazy idea (1807): • Any periodic function can be rewritten as a weighted sum of Sines and Cosines of different frequencies. • Don’t believe it? – Neither did Lagrange, Laplace, Poisson and other big wigs – Not translated into English until 1878! • But it’s true! – called Fourier Series – Possibly the greatest tool used in Engineering
  • 143. Fourier Transforms & its Properties • In image processing: – Instead of the time domain: the spatial domain (normal image space) – Frequency domain: a space in which each value at position F represents the amount by which the intensity values in image I vary over a specific distance related to F
  • 144. Fourier Transforms & its Properties • Fourier Transforms & Inverse Fourier Transforms Thursday, January 5, 2023 144
  • 148. Fourier Transforms & its Properties • Since we deal with 2-D discrete images, we need the 2-D discrete Fourier Transform.
  • 149. Fourier Transforms & its Properties • Inverse F.T Thursday, January 5, 2023 149
  • 150. Fourier Transforms & its Properties • If the image is a square array, i.e., M = N, then the F.T. and inverse F.T. are given by:
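For reference, the square-array (M = N) transform pair, in the symmetric 1/N normalization used by Gonzalez and Woods, can be written as:

```latex
F(u,v) = \frac{1}{N}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\, e^{-j2\pi(ux+vy)/N}

f(x,y) = \frac{1}{N}\sum_{u=0}^{N-1}\sum_{v=0}^{N-1} F(u,v)\, e^{\,j2\pi(ux+vy)/N}
```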
  • 152. Fourier Transforms & its Properties • Separability property Thursday, January 5, 2023 152
  • 154. Fourier Transforms & its Properties • Periodicity: the DFT and its inverse are periodic with period N.
  • 155. Fourier Transforms & its Properties • Scaling: if a signal is multiplied by a scalar quantity ‘a’, then its Fourier transform is also multiplied by the same scalar ‘a’.
  • 156. Fourier Transforms & its Properties • Distributivity: the DFT is distributive over addition, but not over multiplication.
  • 157. Fourier Transforms & its Properties • Average: F(u,v) at u = 0, v = 0 equals the average value of f(x,y) (up to the normalization constant of the DFT).
  • 162. Frequency Domain Filters • Low-pass filter: allows the low-frequency range of the signal to pass to the output (useful for noise suppression). • High-pass filter: allows the high-frequency range to pass to the output (useful for edge detection). • D(u,v) is the distance of (u,v) from the origin of the frequency rectangle. • D₀ is the cutoff: all frequencies with D(u,v) ≤ D₀ are passed to the output, and the rest are blocked.
  • 165. Frequency Domain Filters • In the above example, for the same cutoff frequency the blurring is stronger in the ideal low-pass filter than in the Butterworth filter, and the ideal low-pass filter also introduces ringing artifacts (undesired lines) that the Butterworth filter largely avoids.
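The ideal low-pass filter can be sketched with numpy's FFT: transform the image, zero every frequency farther than D₀ from the (centred) origin, and transform back. The flat test image is illustrative:

```python
import numpy as np

def ideal_lowpass(img, d0):
    """Ideal low-pass filter: keep frequencies with D(u,v) <= d0."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))            # centre the spectrum
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from origin
    H = (D <= d0).astype(float)                      # sharp pass-band mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

flat = np.full((8, 8), 50.0)
out = ideal_lowpass(flat, d0=2)
```

A constant image has only a DC component, which lies inside any pass band, so it survives the filter unchanged; the sharp cutoff of H is also what causes the ringing mentioned above on real images.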
  • 168. Image Restoration • Image restoration and image enhancement share a common goal: to improve an image for human perception. • Image enhancement is mainly a subjective process in which individuals’ opinions shape the process design. • Image restoration is mostly an objective process which: • utilizes a priori knowledge of the degradation phenomenon to recover the image; • models the degradation and then inverts it to recover the original image. • The objective of restoration is to obtain an image estimate that is as close as possible to the original input image.
  • 170. Image Restoration If H is a linear, position-invariant process (filter), the degraded image is given in the spatial domain by: g(x,y) = f(x,y)*h(x,y) + η(x,y) whose equivalent frequency-domain representation is: G(u,v) = F(u,v)H(u,v) + N(u,v) where h(x,y) is the system that causes the image distortion and η(x,y) is additive noise.
  • 171. Image Restoration When the degradation is due to noise only (H is the identity): g(x,y) = f(x,y) + η(x,y), G(u,v) = F(u,v) + N(u,v)
  • 185. Image Restoration • Homomorphic Filter • In some images, the quality is reduced because of non-uniform illumination. • Homomorphic filtering can be used to perform illumination correction.  We can view an image f(x,y) as a product of two components: f(x,y) = i(x,y) · r(x,y), where  r(x,y) = reflectivity of the surface at the corresponding image point  i(x,y) = intensity of the incident light  This equation is known as the illumination-reflectance model.
  • 186. Image Restoration • The illumination-reflectance model can be used to address the problem of improving the quality of an image that has been acquired under poor illumination conditions. • For many images, the illumination is the primary contributor to the dynamic range and varies slowly in space, while the reflectance component r(x,y) represents the details of objects and varies rapidly in space.
  • 187. Image Restoration • To handle the illumination and reflectance components separately, the logarithm of the input f(x,y) is taken, because f(x,y) is the product of i(x,y) and r(x,y). The log of f(x,y) separates the components: ln[f(x,y)] = ln[i(x,y) · r(x,y)] = ln[i(x,y)] + ln[r(x,y)] • Taking the Fourier transform of this equation: F(u,v) = FI(u,v) + FR(u,v), where FI(u,v) and FR(u,v) are the Fourier transforms of the (log) illumination and reflectance components respectively.
  • 188. Image Restoration • Then the desired filter function H(u,v) can be applied separately to the illumination and reflectance components: F(u,v)·H(u,v) = FI(u,v)·H(u,v) + FR(u,v)·H(u,v) • To visualize the image, the inverse Fourier transform is applied: F⁻¹[F(u,v)·H(u,v)] = F⁻¹[FI(u,v)·H(u,v)] + F⁻¹[FR(u,v)·H(u,v)] • The desired enhanced image is finally obtained by applying the exponential operation.
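The full pipeline — log, FFT, filter, inverse FFT, exponential — can be sketched as below. The Gaussian high-emphasis shape of H(u,v) and the gamma_l, gamma_h, d0 values are illustrative assumptions, not specified on the slides:

```python
import numpy as np

def homomorphic(img, gamma_l=0.5, gamma_h=2.0, d0=10.0):
    """Homomorphic filtering: ln -> FFT -> filter -> IFFT -> exp."""
    z = np.log1p(img.astype(float))                  # ln f = ln i + ln r
    Z = np.fft.fftshift(np.fft.fft2(z))
    M, N = img.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # Gaussian high-emphasis filter: attenuates the slowly varying
    # illumination (low frequencies, gain gamma_l) and boosts the
    # rapidly varying reflectance (high frequencies, gain gamma_h).
    H = gamma_l + (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * d0 ** 2)))
    out = np.real(np.fft.ifft2(np.fft.ifftshift(Z * H)))
    return np.expm1(out)                             # undo the logarithm

result = homomorphic(np.ones((8, 8)))
```

On a constant image only the DC term survives, which H scales by gamma_l, so the output is the constant np.expm1(gamma_l · ln 2).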
  • 189. Inverse Filter From the degradation model: G(u,v) = F(u,v)H(u,v) + N(u,v) After we obtain H(u,v), we can estimate F(u,v) by the inverse filter: F̂(u,v) = G(u,v)/H(u,v) = F(u,v) + N(u,v)/H(u,v) The noise term is amplified wherever H(u,v) is small, so in practice the inverse filter is rarely used.
  • 190. Inverse Filter: Example H(u,v) = e^(−0.0025(u² + v²)^(5/6)) Original image; blurred image due to turbulence; result of applying the full filter; results of applying the filter with D0 = 70, D0 = 40 and D0 = 85.
  • 191. Wiener Filter: Minimum Mean Square Error Filter Objective: minimize the mean square error e² = E{(f − f̂)²} Wiener filter formula: F̂(u,v) = [ H*(u,v)·Sf(u,v) / ( Sf(u,v)·|H(u,v)|² + Sη(u,v) ) ]·G(u,v) = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + Sη(u,v)/Sf(u,v) ) ]·G(u,v) where H(u,v) = degradation function, Sη(u,v) = power spectrum of the noise, Sf(u,v) = power spectrum of the undegraded image
  • 192. Approximation of Wiener Filter Wiener filter formula: F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + Sη(u,v)/Sf(u,v) ) ]·G(u,v) The ratio Sη(u,v)/Sf(u,v) is difficult to estimate, so it is replaced by a constant K: F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + K ) ]·G(u,v) Practically, K is chosen manually to obtain the best visual result!
  • 193. Wiener Filter: Example Original image Result of the inverse filter with D0=70 Result of the Wiener filter Blurred image Due to Turbulence
  • 194. Wiener Filter • It is better than the inverse filter. • It incorporates both the degradation function and the statistical characteristics of the noise (mean, spectrum, etc.) into the restoration process. • Here we consider the image and the noise as random processes. • The objective is to find an estimate f̂ of the uncorrupted image f such that the mean square error between them is minimized. • Assumptions: • the image and the noise are uncorrelated; • at least one of them has zero mean; • gray levels in the estimate are a linear function of the degraded image.
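The approximated formula above reduces algebraically to conj(H)/(|H|² + K), which can be applied directly in the frequency domain. The toy transfer function H and test image below are illustrative, not from the slides:

```python
import numpy as np

def wiener_deconvolve(G, H, K=0.01):
    """Approximate Wiener filter: F_hat = conj(H) / (|H|^2 + K) * G.

    This equals (1/H) * |H|^2 / (|H|^2 + K); K stands in for the
    noise-to-signal power ratio and is tuned by hand."""
    return np.conj(H) / (np.abs(H) ** 2 + K) * G

img = np.outer(np.arange(8.0), np.ones(8))   # simple gradient image
F = np.fft.fft2(img)
H = np.full((8, 8), 0.5, dtype=complex)      # toy blur transfer function
G = F * H                                    # degraded spectrum (no noise)
F_hat = wiener_deconvolve(G, H, K=1e-8)
restored = np.real(np.fft.ifft2(F_hat))
```

With no noise and a tiny K the filter behaves like the inverse filter and recovers the image almost exactly; with real noise, a larger K trades some restoration accuracy for noise suppression.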
  • 209. Thursday, January 5, 2023 209 010100111100 = a3a1a2a2a6
  • 212. • Consider a five-symbol sequence {a1, a2, a3, a3, a4} from a source with symbol probabilities (a2, a1, a3, a5, a4) = (0.25, 0.25, 0.2, 0.15, 0.15). Generate the arithmetic code for the sequence. Thursday, January 5, 2023 212
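The interval-narrowing step of arithmetic coding can be sketched as below. The ordering of the cumulative intervals (a1, a2, a3, a4, a5) is an assumption here, since the slide lists the probabilities in a different order; any fixed ordering agreed between encoder and decoder works:

```python
def arithmetic_encode(sequence, probs):
    """Narrow [low, high) by each symbol's cumulative-probability slice."""
    # Build cumulative intervals in the listed symbol order (an assumption).
    cum, start = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (start, start + p)
        start += p
    low, high = 0.0, 1.0
    for sym in sequence:
        lo, hi = cum[sym]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return low, high   # any number in [low, high) encodes the sequence

# Probabilities from the slide's table, reordered a1..a5 for illustration:
probs = {"a1": 0.25, "a2": 0.25, "a3": 0.2, "a4": 0.15, "a5": 0.15}
low, high = arithmetic_encode(["a1", "a2", "a3", "a3", "a4"], probs)
```

The final interval width equals the product of the symbol probabilities, so more probable sequences get wider intervals and hence shorter codes.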
  • 221. Entropy encoder
  • 226. OUTLINE • Image segmentation – ex: edge-based, region-based • Image representation (boundary representation) – ex: chain code, polygonal approximation • Image description (boundary descriptors) – ex: boundary-based, regional-based
  • 228. Image segmentation (cont…) • Segmentation is used to subdivide an image into its constituent parts or objects. • This step determines the eventual success or failure of image analysis. • Generally, segmentation is carried out only until the objects of interest have been isolated, e.g., face detection. • The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse.
  • 229. Classification of the Segmentation techniques Image Segmentation Discontinuity Similarity e.g. - Point Detection - Line Detection - Edge Detection e.g. - Thresholding - Region Growing - Region splitting & merging
  • 230. edge-based segmentation(1) • There are three basic types of gray-level discontinuities in a digital image: points, lines, and edges. • The most common way to look for discontinuities is to run a mask through the image. • We say that a point, line, or edge has been detected at the location on which the mask is centered if |R| ≥ T, where T is a nonnegative threshold and R = w1·z1 + w2·z2 + … + w9·z9 is the mask response (the wi are the mask coefficients and the zi the gray levels under the mask).
  • 231. edge-based segmentation(2) • Point detection a point detection mask • Line detection a line detection mask 231
  • 232. edge-based segmentation(3) • Edge detection: gradient operation ∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ ∇f ≈ mag(∇f) = (Gx² + Gy²)^(1/2) α(x,y) = tan⁻¹(Gy/Gx)
  • 233. edge-based segmentation(4) • Edge detection: Laplacian operation ∇²f = ∂²f/∂x² + ∂²f/∂y² Laplacian of Gaussian: ∇²h(r) = −[(r² − σ²)/σ⁴]·e^(−r²/2σ²)
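The gradient-magnitude operation can be sketched with the Sobel kernels (a close relative of the Prewitt masks shown earlier); the vertical step-edge image is illustrative:

```python
import numpy as np

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| via Sobel kernels."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # responds to vertical edges
    ky = kx.T                                   # responds to horizontal edges
    f = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = f[y:y + 3, x:x + 3]
            gx[y, x] = np.sum(win * kx)
            gy[y, x] = np.sum(win * ky)
    return np.abs(gx) + np.abs(gy)

# A vertical step edge: the response peaks along the edge columns
step = np.hstack([np.zeros((5, 3)), np.full((5, 3), 100.0)])
mag = sobel_magnitude(step)
```

The |Gx| + |Gy| form is a common cheaper approximation of (Gx² + Gy²)^(1/2); flat regions give zero response, and the step edge gives a strong one.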
  • 234. Region Based Segmentation Region Growing Region-growing techniques start with one pixel of a potential region and try to grow it by adding adjacent pixels until the pixels being compared become too dissimilar. • The first pixel selected can be just the first unlabeled pixel in the image, or a set of seed pixels can be chosen from the image. • Usually a statistical test is used to decide which pixels can be added to a region.
  • 235. • Region Growing technique • Choose a seed point. • Choose a threshold value (the maximum allowed difference from the seed's pixel value). • Compare the seed point with the pixel values around it; neighbours within the threshold join the region.
  • 236. • E.g., threshold < 3. The result of applying region growing to the example image:
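The region-growing procedure above can be sketched with a simple 4-connected flood from the seed; the small image, seed and threshold here are illustrative, not the slide's example:

```python
def region_grow(img, seed, threshold):
    """4-connected region growing from a seed pixel.

    A neighbour joins the region when |pixel - seed value| <= threshold."""
    rows, cols = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    stack = [seed]
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < rows and 0 <= nx < cols and (ny, nx) not in region:
                if abs(img[ny][nx] - seed_val) <= threshold:
                    region.add((ny, nx))
                    stack.append((ny, nx))
    return region

img = [[1, 1, 8],
       [1, 2, 8],
       [7, 8, 8]]
region = region_grow(img, seed=(0, 0), threshold=2)
```

Starting from the top-left pixel (value 1) with threshold 2, the grown region is the 2×2 block of values 1 and 2; the 7s and 8s are too dissimilar to join.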
  • 237. • Region Splitting and Merging • Threshold ≤ 3 • Split the image into equal parts. • If (maximum pixel value − minimum pixel value) in a region does not satisfy the threshold constraint, then split that region again.
  • 241. Boundary Representation • Image regions (including segments) can be represented by either the border or the pixels of the region. These can be viewed as external or internal characteristics, respectively. • Chain codes
  • 243. Boundary Representation Chain Codes • Chain codes can be based on either 4-connectedness or 8-connectedness. • The first difference of the chain code: – This difference is obtained by counting the number of direction changes (in a counterclockwise direction) – For example, the first difference of the 4-direction chain code 10103322 is 3133030. • Assuming the first-difference code represents a closed path, rotation normalization can be achieved by circularly shifting the digits of the code so that the list of numbers forms the smallest possible integer. • Size normalization can be achieved by adjusting the size of the resampling grid.
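The first-difference computation can be sketched in a few lines; it reproduces the slide's 10103322 → 3133030 example for a 4-direction code:

```python
def first_difference(code, directions=4):
    """Count counterclockwise direction changes between consecutive
    chain-code digits: (next - current) mod directions."""
    digits = [int(c) for c in code]
    return "".join(str((b - a) % directions)
                   for a, b in zip(digits, digits[1:]))

diff = first_difference("10103322")   # -> "3133030"
```

Because the first difference depends only on direction *changes*, it is invariant to rotating the boundary by multiples of 90° — which is exactly why it is preferred over the raw chain code.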
  • 246. Boundary Representation Polygonal Approximations • Polygonal approximations: to represent a boundary by straight line segments, and a closed path becomes a polygon. • The number of straight line segments used determines the accuracy of the approximation. • Only the minimum required number of sides necessary to preserve the needed shape information should be used (Minimum perimeter polygons). • A larger number of sides will only add noise to the model.
  • 247. Boundary Representation Polygonal Approximations • Minimum perimeter polygons: (merging and splitting) – Merging and splitting are often used together to ensure that vertices appear where they would naturally in the boundary. – A least-squares straight-line fit is used as the criterion to decide when to stop the processing.
  • 248. 248 Hough Transform • The Hough transform is a method for detecting lines or curves specified by a parametric function. • If the parameters are p1, p2, … pn, then the Hough procedure uses an n-dimensional accumulator array in which it accumulates votes for the correct parameters of the lines or curves found on the image. y = mx + b image m b accumulator
  • 249. Q. Given 3 points, use the Hough transform to draw a line joining these points: (1,1), (2,2) and (3,3). Thursday, January 5, 2023 249
  • 250. Thursday, January 5, 2023 250 Question. Given 5 points, use Hough transform to draw a line joining the points (1,4) , (2,3), (3,1), (4,1), (5,0). (RTU-2016)
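The questions above can be sketched with a small accumulator in (m, b) space: each point votes for every line y = mx + b passing through it, and the cell with the most votes is the detected line. The grid resolution below is an illustrative choice (practical implementations use (ρ, θ) space to handle vertical lines):

```python
import numpy as np

def hough_lines(points, m_vals, b_vals):
    """Vote in (m, b) space: each point votes for every line through it."""
    acc = np.zeros((len(m_vals), len(b_vals)), dtype=int)
    for x, y in points:
        for i, m in enumerate(m_vals):
            b = y - m * x                   # intercept implied by this slope
            j = np.argmin(np.abs(b_vals - b))
            if abs(b_vals[j] - b) < 1e-9:   # count only exact grid hits
                acc[i, j] += 1
    return acc

m_vals = np.arange(-2.0, 2.5, 0.5)
b_vals = np.arange(-4.0, 4.5, 0.5)
acc = hough_lines([(1, 1), (2, 2), (3, 3)], m_vals, b_vals)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
```

All three collinear points vote for the same cell (m = 1, b = 0), i.e., the line y = x, which therefore accumulates the maximum of 3 votes.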
  • 251. Boundary Descriptors • There are several simple geometric measures that can be useful for describing a boundary. – The length of a boundary: the number of pixels along a boundary gives a rough approximation of its length. – Curvature: the rate of change of slope • Measuring curvature accurately at a point of a digital boundary is difficult. • The difference between the slopes of adjacent boundary segments is used as a descriptor of curvature at the point of intersection of the segments.
  • 252. Boundary Descriptors Shape Numbers • The shape number of a boundary is defined as the first difference of smallest magnitude. • The order n of a shape number is defined as the number of digits in its representation.
  • 255. Boundary Descriptors Fourier Descriptors • This is a way of using the Fourier transform to analyze the shape of a boundary. – The x-y coordinates of the boundary are treated as the real and imaginary parts of a complex number. – Then the list of coordinates is Fourier transformed using the DFT (chapter 4). – The Fourier coefficients are called the Fourier descriptors. – The basic shape of the region is determined by the first several coefficients, which represent lower frequencies. – Higher frequency terms provide information on the fine detail of the boundary.
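The descriptor construction above can be sketched with numpy's FFT; the unit-square boundary is an illustrative example:

```python
import numpy as np

def fourier_descriptors(boundary):
    """Treat boundary points (x, y) as complex numbers x + jy, then DFT."""
    s = np.array([x + 1j * y for x, y in boundary])
    return np.fft.fft(s)

def reconstruct(descriptors, keep):
    """Rebuild the boundary from the `keep` lowest-frequency coefficients
    (a crude low-pass truncation: higher terms carry the fine detail)."""
    a = np.zeros_like(descriptors)
    a[:keep] = descriptors[:keep]
    s = np.fft.ifft(a)
    return np.stack([s.real, s.imag], axis=1)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
desc = fourier_descriptors(square)
full = reconstruct(desc, keep=4)   # keeping all coefficients is lossless
```

Keeping all four coefficients round-trips the boundary exactly; keeping fewer would smooth it, illustrating how the first several descriptors capture the basic shape.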
  • 257. Regional Descriptors • Some simple descriptors – The area of a region: the number of pixels in the region – The perimeter of a region: the length of its boundary – The compactness of a region: (perimeter)2/area – The mean and median of the gray levels – The minimum and maximum gray-level values – The number of pixels with values above and below the mean
  • 259. Regional Descriptors Topological Descriptors Topological property 1: the number of holes (H) Topological property 2: the number of connected components (C)
  • 260. Regional Descriptors Topological Descriptors Topological property 3: Euler number: the number of connected components minus the number of holes, E = C − H (examples: E = 0, E = −1)