2. CONTENTS
• Some Basic Intensity Transformation Functions
• Histogram Processing
• Fundamentals of Spatial Filtering
• Smoothing Spatial Filters
• Sharpening Spatial Filters
• Frequency Domain Filtering
3. INTRODUCTION
• Image enhancement is the process of manipulating an image so that the result is more suitable than the original for a specific application.
• Image enhancement methods are classified into two types:
1. Spatial Domain Methods
2. Frequency Domain Methods
• Spatial domain (or plane) refers to the image plane itself; processing in this domain involves direct manipulation of the pixels of the image.
• Frequency domain methods modify the Fourier transform of the image.
4. (Contd)
• There is no general theory of image enhancement.
• When an image is processed for visual interpretation, the viewers are the ultimate judges.
• When an image is processed for machine perception, the evaluation task is somewhat easier.
5. Spatial Domain
• The term spatial refers to the aggregate of pixels composing an image.
• Spatial domain methods are procedures that operate directly on these pixels.
• Spatial domain processes are denoted by the equation
g(x, y) = T(f(x, y))
• T is an operator defined over a neighborhood of (x, y).
• T can also operate on a set of input images, for example performing pixel-by-pixel addition of K images for noise reduction.
• The usual way of defining a neighborhood about a point (x, y) is to use a square or rectangular sub-image area centered at (x, y).
7. • The figure shows the center of the sub-image being moved from pixel to pixel, starting at the top left corner. The operator is applied at each location (x, y) to yield the output g at that location.
• The simplest form of T occurs when the neighborhood is of size 1×1; then the value of g depends only on the value of f at (x, y), and T becomes an intensity (gray-level) transformation function of the form
s = T(r)
• Here r and s are variables denoting the gray level of f(x, y) and g(x, y), respectively.
9. • If T(r) has the form shown in fig a, the output image will have higher contrast than the original: levels below m are darkened and levels above m are brightened. This is known as contrast stretching.
• T(r) of the form shown in fig b creates a two-level (binary) image. In this kind of enhancement the output at any point depends only on the intensity at that point, so the technique is known as point processing.
• Larger neighborhoods allow considerably more flexibility. The principle is to use a function of the values of f in a neighborhood of (x, y) to determine the value of g at (x, y).
10. • One of the methods used in spatial domain filtering is the use of masks (filters).
• A mask is a small 2-D array whose values determine the nature of the process.
• Enhancement of an image using a mask or filter is referred to as masking or filtering.
11. Some Basic Gray Level Transformations
• These are the simplest of all enhancement techniques.
• The values of pixels before and after processing are denoted by r and s, respectively.
• The values are related by the expression s = T(r), where T is an operation that maps a value of r into a value of s.
• Since we are dealing with digital quantities, the values of the transformation function are stored in a one-dimensional array, and the mappings from r to s are implemented via table lookups.
• There are basically three types of transformations:
• linear, logarithmic, and power-law transformations.
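The table-lookup idea can be sketched in NumPy. The 2×2 test image and the choice of a thresholding T (the two-level mapping mentioned for fig b) are illustrative assumptions, not part of the slides:

```python
import numpy as np

L = 256  # number of gray levels in an 8-bit image
m = 128  # hypothetical threshold

# Store the transformation T in a one-dimensional array (the lookup table);
# here T is the two-level (thresholding) mapping: below m -> 0, else L-1.
T = np.where(np.arange(L) < m, 0, L - 1).astype(np.uint8)

# Hypothetical 2x2 test image.
img = np.array([[10, 200],
                [128, 90]], dtype=np.uint8)

# The mapping from r to s is implemented as a table lookup.
s = T[img]
```

Indexing the 256-entry table with the image array applies T to every pixel at once, which is why the lookup-table formulation is standard for point transformations.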
12. Linear Transformation
• This class includes the negative and identity transformations.
• The negative of an image with gray levels in the range [0, L-1] is obtained by the negative transformation, shown in the next figure:
s = L - 1 - r
14. • Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative.
• This type of enhancement is particularly suited for enhancing white or gray detail embedded in the dark regions of an image.
β’ Ref: Next Figure
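A minimal NumPy sketch of the negative transformation s = L-1-r; the small test image is hypothetical:

```python
import numpy as np

L = 256
# Hypothetical 2x3 test image.
img = np.array([[0, 10, 250],
                [128, 64, 255]], dtype=np.uint8)

# s = (L - 1) - r : the image negative.
neg = ((L - 1) - img.astype(np.int32)).astype(np.uint8)
```

Applying the transformation twice recovers the original image, as expected of a photographic negative.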
15. (Figure: image negative example)
16. Log Transformations
• The transformation is represented by the equation s = c log(1 + r)
• where c is a constant and r ≥ 0.
• The shape of the log curve shows that this transformation maps a narrow range of dark input levels into a wider range of output levels.
• During log transformation, the dark pixel values in an image are expanded relative to the higher pixel values.
• The higher pixel values are compressed by the log transformation.
• This reduces the contrast of the brighter regions.
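The log transformation can be sketched as follows; choosing c so that the largest input level maps to L-1 is a common convention and an assumption here, as is the test image:

```python
import numpy as np

L = 256
img = np.array([[0, 1, 10],
                [100, 200, 255]], dtype=np.uint8)   # hypothetical image

# s = c * log(1 + r); c is chosen (an assumption) so that the
# largest input level, 255, maps to L - 1.
c = (L - 1) / np.log(1 + 255)
s = np.round(c * np.log1p(img.astype(np.float64))).astype(np.uint8)
```

The dark input levels spread over a wider output range while the bright levels are squeezed together, matching the behavior described above.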
17. Power-Law Transformation
• There are two kinds of transformations:
• the nth-power transformation and
• the nth-root (1/n power) transformation.
• The transformation is given by the equation
s = c r^γ
• Because of the symbol γ (gamma), this transformation is also known as the gamma transformation.
• Varying the value of gamma varies the enhancement; the nonlinear intensity response of a display device is compensated by gamma correction.
• Different display devices have their own gamma correction factors.
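A sketch of the gamma transformation, computed on intensities normalized to [0, 1]; the image and the two gamma values are illustrative assumptions:

```python
import numpy as np

L = 256
img = np.array([[0, 64, 128, 255]], dtype=np.uint8)   # hypothetical image

def gamma_transform(img, c=1.0, gamma=1.0):
    # s = c * r^gamma, computed on intensities normalized to [0, 1].
    r = img.astype(np.float64) / (L - 1)
    s = np.clip(c * np.power(r, gamma), 0, 1)
    return np.round(s * (L - 1)).astype(np.uint8)

dark_boost = gamma_transform(img, gamma=0.4)   # gamma < 1 brightens mid-tones
bright_cut = gamma_transform(img, gamma=2.5)   # gamma > 1 darkens mid-tones
```

Endpoints 0 and L-1 are fixed points of the mapping; only the intermediate levels move, in opposite directions for gamma below and above 1.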
19. Piecewise Linear Transformation Functions
• These are complementary to the methods discussed earlier.
• Their advantage is that the form of the function can be arbitrarily complex.
❖ Contrast Stretching:
• One of the simplest piecewise linear transformations.
• A low-contrast image can result from poor illumination, lack of dynamic range in the image sensor, or even a wrong setting of the lens aperture.
• The idea behind contrast stretching is to increase the dynamic range of the gray levels.
20. (Figure: contrast-stretching transformation function)
21. • The previous figure shows the transformation used for contrast stretching.
• The locations of the points (r1, s1) and (r2, s2) control the shape of the transformation.
• If r1 = s1 and r2 = s2, the transformation is linear and produces no change in the gray levels of the output image.
• If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that produces a binary image.
• Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray levels of the image. In general r1 ≤ r2 and s1 ≤ s2 is assumed, so that the function is single valued and monotonically increasing, which prevents intensity artifacts in the image.
22. • Fig 3.10(b) shows an 8-bit image with low contrast.
• Fig 3.10(c) shows the result of contrast stretching obtained with (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1).
• Thus, in the output image, the contrast is stretched from the original range to the maximum range (0 to L-1).
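The piecewise linear mapping can be sketched as below; the low-contrast test image is hypothetical, and the helper assumes r1 < r2:

```python
import numpy as np

L = 256

def contrast_stretch(img, r1, s1, r2, s2):
    # Piecewise linear mapping through (r1, s1) and (r2, s2); assumes r1 < r2.
    r = img.astype(np.float64)
    out = np.empty_like(r)
    low, mid, high = r <= r1, (r > r1) & (r <= r2), r > r2
    out[low] = (s1 / r1) * r[low] if r1 > 0 else s1
    out[mid] = (s2 - s1) / (r2 - r1) * (r[mid] - r1) + s1
    out[high] = ((L - 1 - s2) / (L - 1 - r2) * (r[high] - r2) + s2
                 if r2 < L - 1 else s2)
    return np.round(out).astype(np.uint8)

# Stretch a hypothetical low-contrast image with
# (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1).
img = np.array([[100, 120],
                [140, 160]], dtype=np.uint8)
out = contrast_stretch(img, r1=100, s1=0, r2=160, s2=255)
```

The narrow input range [100, 160] is mapped linearly onto the full range [0, 255].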
23. Gray Level Slicing (Clipping)
• Highlighting a specific range of gray levels in an image is often desired.
• Examples: enhancing masses of water in satellite imagery, enhancing flaws in X-ray imaging.
• There are several ways of doing slicing, but most methods are variations of two basic themes:
1. Display a high value for all gray levels in the range of interest and a low value for all others (s = L-1 and s = 0).
2. Brighten the desired range of gray levels but preserve the background and gray-level tonalities of the image (s = L-1 and s = r).
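Both slicing themes can be sketched with a boolean mask; the test image and the range of interest are hypothetical:

```python
import numpy as np

L = 256
img = np.array([[10, 150],
                [180, 240]], dtype=np.uint8)   # hypothetical image
lo, hi = 140, 200                              # hypothetical range of interest
roi = (img >= lo) & (img <= hi)

# Theme 1: high value (L-1) inside the range of interest, 0 elsewhere.
binary = np.where(roi, L - 1, 0).astype(np.uint8)

# Theme 2: brighten the range of interest, preserve the rest (s = r).
brightened = np.where(roi, L - 1, img).astype(np.uint8)
```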
25. Bit Plane Slicing
• The gray level of each pixel in a digital image is stored as one or more bits in a computer.
• For an 8-bit image, 0 is encoded as 00000000 and 255 is represented as 11111111.
• Bit plane slicing decomposes the image into its individual bit planes, highlighting the contribution each bit makes to the total image appearance.
26. Let's Take an Example
• Input matrix:
1 2 3
4 5 0
3 2 5
• First find the maximum pixel value in the matrix: 5.
• To represent 5, 3 bits are necessary.
• In binary:
001 010 011
100 101 000
011 010 101
• Now decompose the binary matrix into n binary (bit-plane) matrices, where n is the number of bits.
• Bit plane 2 (MSB):
0 0 0
1 1 0
0 0 1
• Bit plane 1:
0 1 1
0 0 0
1 1 0
• Bit plane 0 (LSB):
1 0 1
0 1 0
1 0 1
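The decomposition above can be sketched with shift-and-mask operations, reusing the slide's 3×3 example:

```python
import numpy as np

# The 3x3 example matrix from the slide.
img = np.array([[1, 2, 3],
                [4, 5, 0],
                [3, 2, 5]])

n_bits = int(img.max()).bit_length()          # max value 5 -> 3 bits

# Bit plane k is extracted by shifting right k places and masking with 1.
# planes[0] is the most significant plane, planes[-1] the least.
planes = [(img >> k) & 1 for k in range(n_bits - 1, -1, -1)]

# The original image is recovered by weighting each plane by 2^k.
recon = sum(p << k for p, k in zip(planes, range(n_bits - 1, -1, -1)))
```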
29. Histogram Processing
• The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image having gray level rk.
• Common practice is to normalize the histogram by dividing each of its values by the total number of pixels in the image.
• The normalized histogram is given by p(rk) = nk / n, where k ranges from 0 to L-1.
• p(rk) gives the probability of occurrence of gray level rk.
30. (Cont)
• The sum of all components of a normalized histogram is 1.
• Histograms are the basis for numerous spatial domain techniques.
• Histogram manipulation is used for image enhancement.
• Histograms are very popular for real-time image processing.
• Histograms are used for enhancement, statistics, compression and segmentation.
• Consider fig 3.15, which consists of 4 images having low-contrast, high-contrast, dark and bright characteristics.
32. (Cont)
• The horizontal axis of each histogram represents the gray-level values rk.
• The vertical axis represents h(rk) = nk, or p(rk) if the histogram is normalized.
• Note that in the dark image the histogram components are concentrated at the low end of the gray scale.
• Similarly, the histogram components of the bright image are biased toward the high end of the gray scale.
33. Histogram Equalization
• It's a technique for adjusting image intensities to enhance the contrast of images.
• Consider an image of size mr × mc (rows × columns) having gray levels r in the range [0, L-1].
• Initially we assume that r has been normalized to the range [0, 1], where 0 represents black and 1 represents white; later we consider a discrete formulation that allows pixel values in the range [0, L-1].
• For a value r, consider a transformation s = T(r), where 0 ≤ r ≤ 1.
• Assume that T(r) satisfies the following conditions:
1. T(r) is single valued and monotonically increasing in the interval 0 ≤ r ≤ 1;
2. 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
34. • The requirement in (1) that T(r) be single valued ensures that the inverse transformation exists, and the monotonicity condition preserves the increasing order from black to white in the output image.
• The second condition ensures that the output levels are in the same range as the input levels.
• The inverse transformation back to the r values is of the form r = T⁻¹(s), where 0 ≤ s ≤ 1.
36. • The gray levels in an image may be viewed as random variables in the interval [0, 1].
• The PDF (probability density function) is one of the fundamental descriptors of a random variable.
• Let pr(r) and ps(s) denote the PDFs of the random variables r and s.
• If pr(r) and T(r) are known, and T⁻¹(s) satisfies condition (1), then ps(s) is obtained from
ps(s) = pr(r) |dr/ds| ………(1)
• A transformation function of particular importance has the form
s = T(r) = ∫ (0 to r) pr(w) dw ………(1.1)
where w is a dummy variable of integration.
37. • The right-hand side of the previous equation is the CDF (cumulative distribution function) of r.
• Given the transformation T(r), we find ps(s) by applying equation 1.
• We know from basic calculus that the derivative of a definite integral with respect to its upper limit is simply the integrand evaluated at that limit:
ds/dr = dT(r)/dr = d/dr [ ∫ (0 to r) pr(w) dw ] = pr(r) ………(2)
• Substituting dr/ds = 1/pr(r) into equation 1 gives ps(s) = 1 for 0 ≤ s ≤ 1.
38. • Because ps(s) is a probability density function, it must be zero outside the interval [0, 1].
• In this case, because its integral over all values is 1, ps(s) is a uniform probability density function.
• For discrete values we deal with probabilities and summations. The probability of occurrence of gray level rk in an image is given by pr(rk) = nk / n, where k ranges from 0 to L-1.
• The discrete version of transformation 1.1 is given by
sk = T(rk) = Σ (j=0 to k) pr(rj) = Σ (j=0 to k) nj / n, for k = 0, 1, …, L-1 ………(4)
39. • Thus the processed image is obtained by mapping each pixel with level rk in the input image to a pixel with level sk in the output image.
• The transformation given by equation 4 is known as histogram equalization or histogram linearization.
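The discrete procedure can be sketched in NumPy. The tiny image, the choice L = 8, and the final scaling of the cumulative sum by L-1 to get integer output levels are assumptions made for illustration:

```python
import numpy as np

L = 8   # a small number of levels keeps the example easy to follow
img = np.array([[0, 1, 1],
                [2, 2, 2],
                [7, 7, 1]])   # hypothetical 3x3 image

n = img.size
hist = np.bincount(img.ravel(), minlength=L)   # h(r_k) = n_k
p = hist / n                                   # p(r_k) = n_k / n
T = np.cumsum(p)                               # s_k = sum of p(r_j), j <= k
s = np.round((L - 1) * T).astype(int)          # scale to integer levels
equalized = s[img]                             # apply as a table lookup
```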
40. Histogram Matching
• In image processing, histogram matching or histogram specification is the transformation of an image so that its histogram matches a specified histogram.
• Consider a grayscale input image X. It has a probability density function pr(r), where r is a gray value and pr(r) is the probability of that value.
• Now consider a desired output probability density function pz(z). A transformation of pr(r) is needed to convert it to pz(z).
• Each PDF can easily be mapped to its cumulative distribution function by
T(rk) = Σ (j=0 to k) pr(rj)
G(zk) = Σ (j=0 to k) pz(zj)
• The idea is to map each r value in X to the z value that has the same cumulative probability.
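A sketch of that mapping; picking the z whose G(z) is nearest to T(r) is one common way to realize "the same probability" for discrete levels, and the image and flat target histogram are assumptions:

```python
import numpy as np

L = 8

def match_histogram(img, target_hist):
    # T(r_k): CDF of the input image.
    p_r = np.bincount(img.ravel(), minlength=L) / img.size
    T = np.cumsum(p_r)
    # G(z_k): CDF of the specified histogram.
    p_z = np.asarray(target_hist, dtype=np.float64)
    G = np.cumsum(p_z / p_z.sum())
    # Map each r to the z whose cumulative probability G(z) is
    # closest to T(r).
    mapping = np.array([int(np.argmin(np.abs(G - T[r]))) for r in range(L)])
    return mapping[img]

img = np.array([[0, 0, 1],
                [1, 2, 2]])        # hypothetical image
flat = np.ones(L)                  # hypothetical target: a flat histogram
out = match_histogram(img, flat)
```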
41. Enhancement using Arithmetic and Logical
Operations.
• Image arithmetic applies one of the standard arithmetic operations or a logical operator to two or more images.
• The operators are applied in a pixel-by-pixel way, i.e. the value of a pixel in the output image depends only on the values of the corresponding pixels in the input images.
• Hence, the images must be of the same size.
• Although image arithmetic is the simplest form of image processing, it has a wide range of applications. A main advantage of arithmetic operators is that the process is very simple and therefore fast.
42. • ADD: c = a + b
• SUB: c = a - b
• MUL: c = a * b
• DIV: c = a / b
• LOG: c = log(a)
• EXP: c = exp(a)
• SQRT: c = sqrt(a)
• TRIG: c = sin/cos/tan(a)
• INVERT: c = (2^B - 1) - a, where B is the number of bits per pixel
43. Image Enhancement in Frequency
Domain
• Preliminary Concepts:
• Complex numbers: C = R + jI, where R and I are real numbers and j = √(-1) is the imaginary unit.
• Fourier series
• Convolution
• Impulses and their sifting property
• Fourier transforms.
44. The 2D-DFT and IDFT
• F(u, v) = Σ (x=0 to M-1) Σ (y=0 to N-1) f(x, y) e^(-j2π(ux/M + vy/N))
• where f(x, y) is a digital image of size M × N.
• As in the 1-D DFT, the above equation has to be evaluated for values of the discrete variables u and v ranging from 0 to M-1 and 0 to N-1, respectively.
• Given the transform F(u, v), we can obtain f(x, y) by using the IDFT:
• f(x, y) = (1/MN) Σ (u=0 to M-1) Σ (v=0 to N-1) F(u, v) e^(j2π(ux/M + vy/N))
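The definition can be evaluated directly to check it against a library FFT; the 2×2 image is hypothetical, and real code should of course use a fast transform:

```python
import numpy as np

def dft2(f):
    # Direct evaluation of the 2-D DFT definition, for illustration only;
    # production code should use a fast transform such as np.fft.fft2.
    M, N = f.shape
    x = np.arange(M).reshape(M, 1)
    y = np.arange(N).reshape(1, N)
    F = np.zeros((M, N), dtype=complex)
    for u in range(M):
        for v in range(N):
            F[u, v] = np.sum(f * np.exp(-2j * np.pi * (u * x / M + v * y / N)))
    return F

f = np.array([[1.0, 2.0],
              [3.0, 4.0]])         # hypothetical 2x2 image
F = dft2(f)
f_back = np.fft.ifft2(F)           # the IDFT recovers f(x, y)
```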
45. Properties of 2D-DFT
❖ Relationship between Spatial and Frequency Intervals
• Suppose a continuous function f(t, z) is sampled to form a digital image f(x, y) consisting of M × N samples taken in the t and z directions.
• Let ΔT and ΔZ denote the separations between samples.
• Then the separations between the corresponding discrete, frequency domain variables are given by
Δu = 1/(MΔT) and Δv = 1/(NΔZ)
46. ❖ Translation and Rotation
• The translation properties state that
f(x, y) e^(j2π(u0x/M + v0y/N)) ⇔ F(u - u0, v - v0)
and
f(x - x0, y - y0) ⇔ F(u, v) e^(-j2π(x0u/M + y0v/N))
❖ Periodicity:
• The 2D-DFT and IDFT are infinitely periodic in the u and v directions:
F(u, v) = F(u + k1M, v) = F(u, v + k2N) = F(u + k1M, v + k2N)
and
f(x, y) = f(x + k1M, y) = f(x, y + k2N) = f(x + k1M, y + k2N), where k1 and k2 are integers.
47. • Symmetry Properties:
w(x, y) = we(x, y) + wo(x, y)
where the even and odd parts are defined as
we(x, y) = [w(x, y) + w(-x, -y)] / 2
and
wo(x, y) = [w(x, y) - w(-x, -y)] / 2
• we(x, y) = we(-x, -y) - symmetric
• wo(x, y) = -wo(-x, -y) - antisymmetric.
48. Basics of filtering in frequency domain
• Consider the equation of the 2D-DFT: each term of F(u, v) contains all values of f(x, y).
• Thus it is usually impossible to make direct associations between specific components of an image and its transform, but some general statements can be made.
• Filtering techniques in the frequency domain are based on modifying the Fourier transform to achieve a specific objective.
49. Frequency Domain filtering
fundamentals
• Filtering in the frequency domain consists of modifying the Fourier transform of an image and then computing the inverse transform to obtain the processed result.
• Thus, given a digital image f(x, y) of size M × N, the basic filtering equation in which we are interested has the form
g(x, y) = IDFT[H(u, v) F(u, v)]
• where H(u, v) is the filter function and F(u, v) is the DFT of the input image.
• g(x, y) is the filtered image.
• The product of H(u, v) and F(u, v) is formed using array (element-wise) multiplication.
• The filter function modifies the transform of the image to yield the processed output.
50. • Specifying H(u, v) is simplified considerably by using functions that are symmetric about their center, which requires F(u, v) to be centered also.
• This is accomplished by multiplying the input image by (-1)^(x+y) before computing its transform.
• One of the simplest filters we can construct is a filter H(u, v) that is 0 at the center and 1 elsewhere.
• This filter rejects the DC term and passes all other terms of F(u, v) when we form the product H(u, v)F(u, v).
• The DC term is responsible for the average intensity, so setting it to zero reduces the average intensity of the output image to zero.
52. • Low frequencies in the transform are related to slowly varying intensities; high frequencies, on the other hand, are caused by sharp transitions.
• So we expect that a filter which attenuates the high frequencies while passing the low frequencies would blur an image, while a filter with the opposite property would enhance sharp detail but cause a reduction in the overall contrast of the image.
54. • The equation g(x, y) = IDFT[H(u, v) F(u, v)] involves the product of two functions in the frequency domain, which by the convolution theorem corresponds to convolution in the spatial domain.
• If the functions in question are not padded, we can expect wraparound error.
• Padding the image with a border of zeros creates a uniform boundary around the periodic sequence and avoids this error.
55. Steps for filtering in the frequency domain
1. Given an input image f(x, y) of size M × N, obtain the padding parameters P and Q; typically we use P = 2M and Q = 2N.
2. Form the padded image fp(x, y) of size P × Q by appending the necessary number of zeros.
3. Multiply fp(x, y) by (-1)^(x+y) to center its transform.
4. Compute the DFT, F(u, v), of the image from step 3.
5. Generate a real, symmetric filter function H(u, v) of size P × Q with center at coordinates (P/2, Q/2).
6. Form the product G(u, v) = H(u, v)F(u, v) using array multiplication.
7. Obtain the processed image: gp(x, y) = real{IDFT[G(u, v)]} (-1)^(x+y).
8. Obtain the final processed result g(x, y) by extracting the M × N region of gp(x, y).
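The eight steps can be sketched directly in NumPy; the all-pass test filter and random test image are assumptions used only to verify the pipeline:

```python
import numpy as np

def freq_filter(f, make_H):
    M, N = f.shape
    P, Q = 2 * M, 2 * N                        # step 1: padding parameters
    fp = np.zeros((P, Q))
    fp[:M, :N] = f                             # step 2: zero-padded image
    x = np.arange(P).reshape(P, 1)
    y = np.arange(Q).reshape(1, Q)
    fp = fp * (-1.0) ** (x + y)                # step 3: center the transform
    F = np.fft.fft2(fp)                        # step 4: DFT
    H = make_H(P, Q)                           # step 5: centered P x Q filter
    G = H * F                                  # step 6: array multiplication
    gp = np.real(np.fft.ifft2(G)) * (-1.0) ** (x + y)   # step 7
    return gp[:M, :N]                          # step 8: extract M x N region

# Sanity check: an all-pass filter (H = 1 everywhere) must return the input.
f = np.random.default_rng(0).random((8, 8))
g = freq_filter(f, lambda P, Q: np.ones((P, Q)))
```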
57. Correspondence between filtering in the
spatial and frequency domains
• The link between filtering in the spatial domain and the frequency domain is the convolution theorem:
Spatial Domain          Frequency Domain
f(x, y) * h(x, y)   ⇔   F(u, v) H(u, v)
δ(x, y) * h(x, y)   ⇔   DFT{δ(x, y)} H(u, v)
• Hence h(x, y) ⇔ H(u, v): the spatial filter is the inverse DFT of the frequency domain filter.
58. Image Smoothing using frequency
domain filters
• Edges and other sharp intensity transitions, such as noise, contribute to the high-frequency content of an image.
• So smoothing (blurring) is achieved by attenuating the high-frequency content; a filter that does this is known as a lowpass filter.
• Here we consider 3 types of LPFs:
1. Ideal lowpass filter (sharp)
2. Butterworth
3. Gaussian (smooth)
59. Ideal low pass filters
• A 2-D lowpass filter that passes without attenuation all frequencies within a circle of radius D0 from the origin, and cuts off all frequencies outside this circle, is known as an ideal lowpass filter (ILPF).
• H(u, v) = 1 if D(u, v) ≤ D0
  H(u, v) = 0 if D(u, v) > D0
where D0 is a positive constant and D(u, v) is the distance between a point (u, v) and the center of the frequency rectangle.
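The ILPF transfer function can be sketched as a distance test over the (centered) frequency rectangle; the 8×8 size and D0 = 2 are illustrative:

```python
import numpy as np

def ideal_lowpass(P, Q, D0):
    # D(u, v): distance from each point to the center of the
    # P x Q frequency rectangle.
    u = np.arange(P).reshape(P, 1)
    v = np.arange(Q).reshape(1, Q)
    D = np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)
    # H = 1 inside the circle of radius D0, 0 outside.
    return (D <= D0).astype(np.float64)

H = ideal_lowpass(8, 8, D0=2.0)
```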
61. • Ideal lowpass filters are radially symmetric about the origin, which means that the filter is completely defined by a radial cross section, as shown in fig c.
• For an ILPF cross section, the point of transition between H(u, v) = 1 and H(u, v) = 0 is called the cutoff frequency. In the previous figure, D0 is the cutoff frequency.
• The filter simply cuts off all high-frequency components at a distance greater than D0 from the origin of the transform; changing the distance changes the behavior of the filter.
• One way to establish standard cutoff frequency loci is to compute circles that enclose a specified fraction of the total image power PT:
PT = Σ (u=0 to M-1) Σ (v=0 to N-1) P(u, v)
where P(u, v) = |F(u, v)|² = R²(u, v) + I²(u, v)
62. • If the DFT has been centered, a circle of radius D0 with origin at the center of the frequency rectangle encloses α percent of the power, where
α = 100 [ Σu Σv P(u, v) / PT ]
• The summation is taken over the values of (u, v) that lie inside the circle or on its boundary.
64. Butterworth low pass filter
• It is a type of signal processing filter designed to have as flat a frequency response as possible in the passband.
• It is also referred to as a maximally flat magnitude filter.
• The transfer function of a Butterworth lowpass filter (BLPF) of order n with cutoff frequency at distance D0 is given by
H(u, v) = 1 / [1 + (D(u, v)/D0)^(2n)]
66. • Unlike the ILPF, the BLPF transfer function does not have a sharp discontinuity giving a clear cutoff between passed and filtered frequencies.
67. • The BLPF of order 1 has neither ringing nor negative values in the spatial domain.
• The BLPF of order 2 shows mild ringing and small negative values, but these are far less pronounced than in the ILPF.
• A Butterworth filter of order 20 exhibits characteristics similar to those of the ideal lowpass filter.
69. Gaussian lowpass filters
• Gaussian lowpass filters (GLPFs) have the transfer function
H(u, v) = e^(-D²(u, v)/2D0²)
• where D0 is the cutoff frequency. When D(u, v) = D0, the GLPF is down to 0.607 of its maximum value.
• A spatial Gaussian filter, obtained by computing the IDFT of H(u, v), has no ringing effect.
71. Image Sharpening using frequency
domain filters
• Image sharpening in the frequency domain can be achieved by highpass filters, which attenuate the low-frequency components without disturbing the high-frequency components.
• A highpass filter is obtained from a lowpass filter using the equation
Hhp(u, v) = 1 - Hlp(u, v)
• We consider three types of highpass filters:
1. Ideal highpass filters
2. Butterworth highpass filters
3. Gaussian highpass filters
72. Ideal High Pass filters
• The 2-D ideal highpass filter (IHPF) is defined by
H(u, v) = 0 if D(u, v) ≤ D0
H(u, v) = 1 if D(u, v) > D0
• where D0 is the cutoff frequency and D(u, v) is the distance from the center of the frequency rectangle.
• The IHPF is the opposite of the ILPF.
• The IHPF exhibits the same ringing effect as the ILPF.
76. Butterworth high pass filters
• A 2-D Butterworth highpass filter (BHPF) of order n and cutoff frequency D0 is given by
H(u, v) = 1 / [1 + (D0/D(u, v))^(2n)]
• Butterworth highpass filters behave more smoothly than IHPFs.
77. Gaussian High pass filters
• The transfer function of the Gaussian highpass filter (GHPF) with cutoff frequency locus at distance D0 is given by
H(u, v) = 1 - e^(-D²(u, v)/2D0²)
78. The Laplacian in the frequency domain
• The Laplacian is one of the classical image enhancement techniques.
• It can be implemented in both the spatial domain and the frequency domain; here we discuss the frequency domain representation:
H(u, v) = -4π²(u² + v²)
• With respect to the center of the frequency rectangle,
H(u, v) = -4π²[(u - M/2)² + (v - N/2)²] = -4π² D²(u, v)
• The Laplacian image is then obtained by
∇²f(x, y) = IDFT{H(u, v) F(u, v)}
79. • Enhancement is obtained using the equation
g(x, y) = f(x, y) + c ∇²f(x, y) ………(2)
• Here c = -1 because H(u, v) is negative.
• The above equation can be written in the frequency domain as
g(x, y) = IDFT{F(u, v) - H(u, v) F(u, v)}
        = IDFT{[1 - H(u, v)] F(u, v)}
        = IDFT{[1 + 4π² D²(u, v)] F(u, v)}
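A minimal sketch of Laplacian sharpening; building H on the normalized frequencies returned by np.fft.fftfreq (rather than integer frequency units) is an implementation assumption that changes only the scale of H, not its shape:

```python
import numpy as np

def laplacian_sharpen(f):
    # Frequency-domain Laplacian: H(u,v) = -4*pi^2 * D^2(u,v).
    # np.fft.fftfreq gives normalized frequencies (u/M, v/N), so the scale
    # of D^2 differs from integer-frequency units, but the shape is the same.
    M, N = f.shape
    u = np.fft.fftfreq(M).reshape(M, 1)
    v = np.fft.fftfreq(N).reshape(1, N)
    H = -4 * np.pi ** 2 * (u ** 2 + v ** 2)
    lap = np.real(np.fft.ifft2(H * np.fft.fft2(f)))   # Laplacian image
    return f - lap        # g = f + c * laplacian, with c = -1

# A constant image has zero Laplacian, so sharpening leaves it unchanged.
flat = np.full((8, 8), 5.0)
g_flat = laplacian_sharpen(flat)
```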
81. Unsharp Masking, Highboost filtering and High-
Frequency-Emphasis Filtering
• gmask(x, y) = f(x, y) - flp(x, y)
• where flp(x, y) = IDFT[Hlp(u, v) F(u, v)]
• Here Hlp is a lowpass filter and F(u, v) is the Fourier transform of the image.
• g(x, y) = f(x, y) + k gmask(x, y): this expression defines unsharp masking when k = 1 and highboost filtering when k > 1.
• Equivalently, in the frequency domain:
g(x, y) = IDFT{[1 + k (1 - Hlp(u, v))] F(u, v)}
g(x, y) = IDFT{[1 + k Hhp(u, v)] F(u, v)}
• The expression contained in the square brackets is known as a high-frequency-emphasis filter.
82. Homomorphic filtering
• Homomorphic filtering is a generalized technique for image enhancement and image correction.
• It simultaneously normalizes the brightness across an image and increases contrast.
• An image can be represented as the product of illumination and reflectance:
f(x, y) = i(x, y) r(x, y)
• Taking the logarithm separates the two components:
z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)
• Taking the DFT: Z(u, v) = Fi(u, v) + Fr(u, v)
83. (Cont)
• We can filter Z(u, v) using H(u, v):
S(u, v) = H(u, v) Z(u, v) = H(u, v) Fi(u, v) + H(u, v) Fr(u, v)
• The filtered result in the spatial domain is
s(x, y) = IDFT{S(u, v)} = IDFT{H(u, v) Fi(u, v)} + IDFT{H(u, v) Fr(u, v)}
s(x, y) = i'(x, y) + r'(x, y)
• Since z(x, y) was formed by taking the natural logarithm, we reverse that process by taking the exponential of the filtered result to form the output image: g(x, y) = e^(s(x, y)).
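The whole chain can be sketched as below. The Gaussian-shaped H, its gains gamma_l and gamma_h, and the value of D0 are illustrative assumptions, not prescribed by the slides:

```python
import numpy as np

def homomorphic(f, gamma_l=0.5, gamma_h=2.0, D0=0.1):
    # z = ln f -> Z = DFT(z) -> S = H * Z -> s = IDFT(S) -> g = exp(s).
    # H is a Gaussian-shaped curve applying gain gamma_l (< 1) to low
    # frequencies (illumination) and gamma_h (> 1) to high frequencies
    # (reflectance); all parameter values here are illustrative.
    M, N = f.shape
    z = np.log(f)                                  # f must be strictly positive
    u = np.fft.fftfreq(M).reshape(M, 1)
    v = np.fft.fftfreq(N).reshape(1, N)
    D2 = u ** 2 + v ** 2
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * D0 ** 2))) + gamma_l
    s = np.real(np.fft.ifft2(H * np.fft.fft2(z)))
    return np.exp(s)                               # reverse the logarithm

# For a constant image only the DC term survives, and the DC gain is
# gamma_l, so the output is f ** gamma_l (here sqrt(2) for f = 2).
g = homomorphic(np.full((8, 8), 2.0))
```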
85. Selective Filtering
• The filters discussed previously operate over the entire frequency rectangle.
• There are applications in which only a selected band of frequencies, or small regions of the frequency rectangle, must be processed.
• Filters that operate on a selected band of frequencies are known as bandreject or bandpass filters.
• Filters that operate on small regions are known as notch filters.
• This kind of processing is known as selective filtering.
87. • Bandpass filters are obtained from bandreject filters by the equation
HBP(u, v) = 1 - HBR(u, v)
88. Notch filters
• Notch filters are the most useful of the selective filters.
• A notch filter rejects frequencies in a predefined neighborhood about a center frequency.
• Zero-phase filters must be symmetric about the origin, so a notch with center at (u0, v0) must have a corresponding notch at (-u0, -v0).
• Notch reject filters are constructed as products of highpass filters whose centers have been translated to the centers of the notches:
HNR(u, v) = Π (k=1 to Q) Hk(u, v) H-k(u, v)
• Notch pass filters are obtained from notch reject filters in the same way as discussed earlier:
HNP(u, v) = 1 - HNR(u, v)
90. References
• Digital Image Processing, Third Edition, by R.C. Gonzalez and R.E. Woods
• www.imageprocessingplace.com
• www.slideshare.net