2. What is an Image
An image is a representation of some property of a physical entity.
The property can be represented as a function f(x, y, z) of three variables.
A 2D image is obtained by:
perspective projection through a pin-hole camera,
assuming that the objects are very far from the imaging system (e.g., z → ∞), thereby giving f(x, y) = f(x, y, z).
When the independent variables x, y and the function value f are discretized, we get a Digital Image.
IT472 - DIP: Lecture 2 2/23
10. Let i(x, y) be the illumination at a point (x, y) and r(x, y) be the reflectance at the same point; then the image f(x, y) at that point is given by f(x, y) = i(x, y) r(x, y).
From physics, 0 < i(x, y) < ∞ and 0 < r(x, y) < 1, and hence 0 < f(x, y) < ∞.
The image-capturing device is matched to the illumination source used, e.g., infrared source - infrared detector, X-ray source - X-ray film, visible light - CCD array detectors.
Summary
In the end, we get a mathematical object f(x, y) to work with that represents the aspect of the real object that we are interested in.
IT472 - DIP: Lecture 2 4/23
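The illumination–reflectance model above can be sketched numerically; the arrays and values here are hypothetical, chosen only to satisfy the stated bounds on i and r.

```python
import numpy as np

# Illustrative sketch of f(x, y) = i(x, y) * r(x, y):
# a smooth illumination field multiplying a constant reflectance.
h, w = 4, 4
i_field = np.linspace(100.0, 200.0, h * w).reshape(h, w)  # 0 < i < inf
r_field = np.full((h, w), 0.5)                            # 0 < r < 1
f = i_field * r_field                                     # pointwise product

# The bounds on i and r carry over to f.
assert np.all(f > 0)
print(f[0, 0], f[-1, -1])  # 50.0 100.0
```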
14. What sort of objects are images?
Since we want to process, operate on, and play with images, we should first characterize what sort of objects images are and what it should be possible to do with them.
Should it be possible to apply filters to images (say, using convolution)?
If yes, then what operations should be allowed on images?
Addition and scalar multiplication → Vector Spaces!
What sort of vector space? Differentiable functions? Continuous functions? Finite bandwidth?
NO!
IT472 - DIP: Lecture 2 5/23
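The two vector-space operations named above — addition and scalar multiplication — are defined pointwise on images, which is exactly what makes linear filtering meaningful. A minimal sketch (the tiny arrays and the trivial "filter" are illustrative, not from the slides):

```python
import numpy as np

# Images as vectors: addition and scalar multiplication act pointwise.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[0.5, 0.5], [0.5, 0.5]])

s = a + b        # vector addition
t = 2.0 * a      # scalar multiplication

# A filter is linear iff filter(alpha*a + b) == alpha*filter(a) + filter(b).
# Here "blur" is a trivially linear placeholder filter for illustration.
blur = lambda img: 0.5 * img
lhs = blur(2.0 * a + b)
rhs = 2.0 * blur(a) + blur(b)
assert np.allclose(lhs, rhs)
```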
20. Vector space of images
Images are defined on a set with finite area, i.e., images are functions with compact support.
The image values must be finite at all points, so the energy ||f||² = ∫∫_{supp(f)} f²(x, y) dx dy has to be finite.
Vector space of images
Images belong to the vector space of square-integrable 2-D functions with compact support Ω. This vector space is denoted L²(Ω).
IT472 - DIP: Lecture 2 6/23
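The finite-energy condition can be checked numerically with a Riemann sum; this sketch uses an assumed test function sin(πx)·sin(πy) on Ω = [0, 1]², whose energy integral is exactly 1/4.

```python
import numpy as np

# Discrete approximation of the energy ||f||^2 = ∫∫_Ω f^2(x, y) dx dy
# by a Riemann sum on a sampled grid.
dx = dy = 0.01
xs = np.arange(0.0, 1.0, dx)
ys = np.arange(0.0, 1.0, dy)
X, Y = np.meshgrid(xs, ys)
f = np.sin(np.pi * X) * np.sin(np.pi * Y)  # square-integrable on Ω = [0,1]^2

energy = np.sum(f**2) * dx * dy            # ≈ ∫∫_Ω f^2 dx dy = 0.25
print(round(energy, 6))  # 0.25
```

Discretizing the domain like this is also the first step toward the sampled images discussed next: a digital image is a finite grid of samples, so its "energy" is just a finite sum.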
24. Image sensors
Figure: Line of sensors
Figure: Single Sensor
Figure: Circular Sensor
Figure: Array of sensors
IT472 - DIP: Lecture 2 7/23
25. Sampling & Quantization
Although theoretically 0 < f(x, y) < ∞, in practice Lmin ≤ f(x, y) ≤ Lmax, where Lmin > 0 and Lmax < ∞ depend on the sensor ratings.
For gray-scale digital images, typically we use Lmin = 0 to represent black and Lmax = L − 1 to represent white.
Sampling and quantizing the image gives a digital image, which can be represented as an m × n matrix, say A, each element of which is called a pixel (picture element).
IT472 - DIP: Lecture 2 8/23
29. L is typically a power of 2, L = 2^k; L levels require k bits per pixel.
For a typical image of size 1024 × 1024 pixels with L = 256 (k = 8), we will need 1024 × 1024 × 8 bits ≈ 8 Mbits, i.e., about 1 MB of memory.
Compare this with the file size of an image on your computer.
IT472 - DIP: Lecture 2 9/23
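The storage arithmetic above in a few lines (a back-of-the-envelope sketch, not a file-format calculation — real files add headers and compression):

```python
# Storage needed for an uncompressed L-level image of size width x height.
width, height, L = 1024, 1024, 256
k = L.bit_length() - 1               # bits per pixel: 256 = 2^8, so k = 8
bits = width * height * k            # total bits
mbytes = bits // (8 * 1024 * 1024)   # convert bits -> mebibytes
print(k, mbytes)  # 8 1
```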
32. Spatial Resolution
The resolution of an imaging system determines the smallest discernible detail and is technically defined as the largest number of discernible lines per unit distance.
(1) Is the number of pixels enough to define resolution?
Not always! It also depends on (2) the pixel size. Commonly found sensors have individual pixels 2–8 microns in length/width.
Are smaller sensor pixels always better?
NO! Since an image is produced from the number of photons (a discrete random variable with a Poisson distribution) incident on each sensor element, bigger sensor elements are more reliable, i.e., have a higher SNR, than smaller ones.
IT472 - DIP: Lecture 2 10/23
37. For color images, sensors are arranged in a (3) mosaic pattern (e.g., the Bayer pattern).
It also depends on (4) the spatial resolution of the lens.
To summarize, a camera with 10 megapixels is said to have better resolution than a 3-megapixel camera, assuming similar lenses and sensors and that the images are taken at the same distance.
IT472 - DIP: Lecture 2 11/23
40. Imaging system
We can assume that the imaging system is linear and position-invariant (shift-invariant).
A meaningful conclusion about the spatial resolution can be obtained by looking at the impulse response of the imaging system.
What is an impulse/impulse response for a camera?
IT472 - DIP: Lecture 2 12/23
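For a linear shift-invariant imaging system, the impulse is a single bright point and the impulse response is the point spread function (PSF); imaging the point returns the PSF itself. A sketch with an assumed, hypothetical 3×3 PSF and a naive convolution loop:

```python
import numpy as np

def image_with_psf(scene, psf):
    """Naive 2-D convolution with zero padding (for illustration only)."""
    sh, sw = scene.shape
    ph, pw = psf.shape
    out = np.zeros((sh, sw))
    pad = np.pad(scene, ((ph // 2,) * 2, (pw // 2,) * 2))
    for y in range(sh):
        for x in range(sw):
            # Flip the kernel: convolution, not correlation.
            out[y, x] = np.sum(pad[y:y + ph, x:x + pw] * psf[::-1, ::-1])
    return out

# Hypothetical PSF of the system (any blur kernel would do here).
psf = np.array([[0.0, 0.25, 0.0],
                [0.25, 0.0, 0.25],
                [0.0, 0.25, 0.0]])
impulse = np.zeros((5, 5))
impulse[2, 2] = 1.0  # a single bright point: the "impulse" for a camera

blurred = image_with_psf(impulse, psf)
assert np.allclose(blurred[1:4, 1:4], psf)  # imaging an impulse yields the PSF
```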
44. Spatial resolution
Print technology: dots per inch (dpi); computer screens: pixels per inch (ppi).
Difference: a collection of dots forms one pixel.
IT472 - DIP: Lecture 2 13/23
47. Intensity resolution
Smallest discernible change in the intensity level.
IT472 - DIP: Lecture 2 14/23
50. Topological concepts
Neighbors of a pixel p = (x, y):
4-neighborhood: N4(p) = {(x+1, y), (x−1, y), (x, y+1), (x, y−1)}.
Diagonal neighborhood: ND(p) = {(x+1, y+1), (x−1, y+1), (x+1, y−1), (x−1, y−1)}.
8-neighborhood: N8(p) = N4(p) ∪ ND(p).
IT472 - DIP: Lecture 2 16/23
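The three neighborhood definitions translate directly into set-valued functions (a plain-Python sketch, with the same names as the slides):

```python
# N4, ND, and N8 of a pixel p = (x, y), as sets of coordinate pairs.
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x - 1, y + 1), (x + 1, y - 1), (x - 1, y - 1)}

def n8(p):
    return n4(p) | nd(p)   # N8(p) = N4(p) ∪ ND(p)

assert len(n8((5, 5))) == 8
print(sorted(n4((0, 0))))
```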
54. Topological concepts
Adjacency: used to define a relation between pixels of an image.
Let V be the set of gray levels used to define the relation. Examples: V = {0, . . . , 10}, V = {0}.
4-adjacency: two pixels p and q with values in V are 4-adjacent if q ∈ N4(p).
8-adjacency: two pixels p and q with values in V are 8-adjacent if q ∈ N8(p).
m-adjacency: two pixels p and q with values in V are m-adjacent if:
q ∈ N4(p), or
q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are in V.
(m-adjacency removes the path ambiguities that can arise with 8-adjacency.)
IT472 - DIP: Lecture 2 17/23
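The adjacency rules can be written as predicates. This is a sketch, following the standard definition in which m-adjacency checks the intersection N4(p) ∩ N4(q); the representation of the image as a dict from coordinates to gray levels is an assumption for illustration.

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x - 1, y + 1), (x + 1, y - 1), (x - 1, y - 1)}

def m_adjacent(p, q, img, V):
    """img: dict mapping pixel coordinates to gray levels; V: levels that count."""
    if img.get(p) not in V or img.get(q) not in V:
        return False
    if q in n4(p):
        return True
    common = n4(p) & n4(q)  # shared 4-neighbors (intersection)
    return q in nd(p) and not any(img.get(c) in V for c in common)

# Classic ambiguity case: the diagonal link is ruled out because the
# shared 4-neighbor (1, 0) has its value in V.
img = {(0, 0): 1, (1, 0): 1, (1, 1): 1}
V = {1}
assert m_adjacent((0, 0), (1, 0), img, V)      # 4-adjacent
assert not m_adjacent((0, 0), (1, 1), img, V)  # diagonal blocked by (1, 0)
```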
61. Topological concepts
Path: a path from pixel p = (x, y) to pixel q = (s, t) is a sequence of distinct pixels with coordinates (x0 = x, y0 = y), (x1, y1), . . . , (xn = s, yn = t) such that the pixels (xi−1, yi−1) and (xi, yi) are adjacent for all 1 ≤ i ≤ n. If the first and last pixels are the same, we have a closed path.
Connectedness: for a given subset S of pixels in an image, p, q ∈ S are said to be connected in S if there exists a path between them consisting only of pixels from S.
Connected component: for p ∈ S, the set of all pixels connected to p is a connected component of S.
Connected set: if S has only one connected component, it is called a connected set. A connected set in an image is often called a region.
IT472 - DIP: Lecture 2 18/23
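Counting connected components, as asked in the application slides below, follows directly from these definitions. A sketch using 4-adjacency and breadth-first flood fill on a binary image (V is implicitly {1}; the grid is a made-up example):

```python
from collections import deque

def count_components(grid):
    """Count 4-connected components of foreground (value 1) pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0
    for sy in range(rows):
        for sx in range(cols):
            if grid[sy][sx] != 1 or (sy, sx) in seen:
                continue
            count += 1                      # new component discovered
            queue = deque([(sy, sx)])       # BFS flood fill from here
            seen.add((sy, sx))
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and grid[ny][nx] == 1 and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
    return count

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
assert count_components(grid) == 3
```

Switching the neighbor tuple to all eight offsets would give 8-connected components instead; the choice of adjacency changes the count.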
65. Application
Figure: Count the number of components in the image
IT472 - DIP: Lecture 2 19/23
66. Application
Figure: Convert it into a binary image
IT472 - DIP: Lecture 2 20/23
67. Application
Figure: Do some morphological processing on the image. Let V = {1}.
Find the connected sets in the image
IT472 - DIP: Lecture 2 21/23