. 1
Muhammad Tahir Mumtaz
M.Phil in Computer Science
(Computer Vision)
University of Central Punjab Lahore, Punjab, Pakistan.
M.Phil in Computer Science
(Software Engineering)
Pir Mehr Ali Shah Arid
Agriculture University Rawalpindi Islamabad, Punjab,
Pakistan
Ph.D. in Computer Science (in progress)
(Deep Learning)
UniSZA, Terengganu, Malaysia.
. 3
Today’s Topics
►Signal
►Analog and Digital Data & Signals
►Periodic & Aperiodic Signals
►Digital Signal Processing
►Digital Image Processing
. 5
. 6
Analog
►ANALOG
 Refers to something that is Continuous
►CONTINUOUS
 A set of specific points of data and all
possible points between them
Digital
►DIGITAL
 Refers to something that is Discrete
►DISCRETE
 A set of specific points of data with no
points in between
Analog and Digital Data
►Analog Data
 Human Voice
►Digital Data
 Data stored in the memory of a
computer
Analog and Digital Signals
Periodic and Aperiodic Signals
Periodic
Signals
(Analog or Digital)
Aperiodic
Periodic Signals
A signal is called Periodic if it completes
a pattern within a measurable time
frame called a Period and then
repeats that pattern over identical
subsequent Periods
Periodic Signal Example
Aperiodic Signals
An Aperiodic or Non-Periodic signal is
one that changes constantly
without exhibiting (showing) a pattern
or cycle that repeats over time
Aperiodic Signals
Digital Signal Processing
►Digital signal processing (DSP) refers
to various techniques for improving the
accuracy and reliability of digital
communications.
►This can involve multiple mathematical
operations such as compression,
decompression, filtering, equalization,
modulation and demodulation to generate a
signal of superior quality.
►e.g., audio, video, image, etc.
. 17
Analog-to-Digital Converter
(ADC)
►In electronics, ADC stands for Analog-to-Digital Converter.
►It is a device that converts continuous-
time and continuous-amplitude analog
signals (such as from sensors) into
discrete-time and discrete-amplitude
digital signals for processing by digital
systems
. 18
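To make the sampling and quantization steps concrete, here is a minimal NumPy sketch (not from the slides; the 1 kHz sine, 8 kHz sample rate, and 3-bit resolution are illustrative assumptions):

```python
import numpy as np

# Assumed parameters (illustrative only): 1 kHz sine, 8 kHz sample rate, 3-bit ADC.
fs = 8000          # samples per second (discrete time)
bits = 3           # quantizer resolution (discrete amplitude)
t = np.arange(0, 0.002, 1 / fs)        # 2 ms of discrete sampling instants
analog = np.sin(2 * np.pi * 1000 * t)  # "continuous" signal evaluated at the sample times

# Quantize amplitudes in [-1, 1] to 2**bits discrete levels.
levels = 2 ** bits
codes = np.round((analog + 1) / 2 * (levels - 1)).astype(int)  # digital codes 0..7
reconstructed = codes / (levels - 1) * 2 - 1                   # what a DAC would output

print(codes)       # discrete-time, discrete-amplitude representation of the signal
```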
Digital-to-Analog Converter
(DAC):
►In the context of audio and technology, a
DAC, or Digital-to-Analog Converter, is a
device that converts digital audio signals
(ones and zeros) into analog signals
(continuous electrical voltage) that can be
played through headphones or speakers
. 19
Audio Amp
►An audio amplifier, often referred to
simply as an "amp," is a device used to
increase the amplitude (volume) of audio
signals.
. 20
. 21
Applications of DSP
► Digital image processing
► Computer graphics
► Photo manipulation
► Biomedicine
► Speech processing
► Speech recognition
► Data transmission
► Audio signal processing
► Audio data compression e.g. MP3
► Video data compression
► Radar
► Sonar
► Financial signal processing
► Economic forecasting
► Seismology
► Weather forecasting
. 22
. 23
. 24
Disadvantages
►Search at least three and write into your
notes. Or on this slide…
. 25
. 26
Image processing
►Image processing is the process of
transforming an image into a digital form
and performing certain operations to get
some useful information from it.
►The image processing system usually
treats all images as 2D signals when
applying certain predetermined signal
processing methods.
. 27
Image
►An image is a visual representation of
something.
►In information technology, the term has
several usages: 1) An image is
a picture that has been created or copied
and stored in electronic form. An
image can be described in terms of
vector graphics or raster graphics.
►Image – A two-dimensional signal that
can be observed by the human visual system
. 28
How are images represented
in the computer?
Image processing is used for two somewhat different
purposes:
• improving the visual appearance of images (pictorial
information) to a human viewer, and
• Preparing (processing) images for measurement of
the features and structures present
. 30
. 31
. 32
Introduction
► What is Digital Image Processing?
Digital Image
— a two-dimensional function f(x, y), where x and y are spatial coordinates and
the value of f at any point (x, y) is called the intensity or gray level at that point
Digital Image Processing
— processing digital images by means of a computer; it covers low-, mid-, and high-level
processes
low-level: inputs and outputs are images
mid-level: outputs are attributes extracted from input images
high-level: recognition of an ensemble of individual objects
Pixel
— the elements of a digital image
Color Image
►A color image is just three functions
pasted together. We can write this as a
“vector-valued” function:
. 33
$$f(x, y) = \begin{bmatrix} r(x, y) \\ g(x, y) \\ b(x, y) \end{bmatrix}$$
. 34
Origins of Digital Image
Processing
Sent by submarine cable
between London and
New York, the
transportation time was
reduced to less than
three hours from more
than a week
. 35
Origins of Digital Image
Processing
. 36
. 37
. 38
. 39
. 40
. 41
. 42
. 43
. 44
. 45
. 46
Sources for Images
► Electromagnetic (EM) energy spectrum
► Acoustic
► Ultrasonic
► Electronic
► Artificial images produced by computer
47
Electromagnetic (EM) energy spectrum
Major uses
Gamma-ray imaging: nuclear medicine and astronomical observations
X-rays: medical diagnostics, industry, and astronomy, etc.
Ultraviolet: industrial inspection, microscopy, lasers, biological imaging,
and astronomical observations
Visible and infrared bands: light microscopy, astronomy, remote sensing, industry,
and law enforcement
Microwave band: radar
Radio band: medicine (such as MRI) and astronomy
. 48
. 49
Examples: Gama-Ray Imaging
. 50
Examples: X-Ray Imaging
. 51
Examples: Ultraviolet Imaging
. 52
Examples: Light Microscopy Imaging
. 53
Examples: Visual and Infrared Imaging
. 54
Examples: Infrared Satellite Imaging
USA 1993
USA 2003
. 55
Examples: Infrared Satellite Imaging
. 56
Examples: Automated Visual Inspection
. 57
Examples: Automated Visual Inspection
The area in
which the
imaging system
detected the
plate
Results of
automated
reading of the
plate content
by the system
. 58
Example of Radar Image
. 59
Examples: MRI (Radio Band)
. 60
Examples: Ultrasound Imaging
. 61
Light and EM Spectrum
. 62
Light and EM Spectrum
► The colors that humans perceive in an object
are determined by the nature of the light
reflected from the object.
. 63
. 64
Image Acquisition
. 65
. 66
. 67
. 68
. 69
. 70
. 71
. 72
. 73
. 74
. 75
. 76
. 77
. 78
. 79
. 80
. 81
. 82
. 83
. 84
. 85
. 86
A Simple Image Formation Model
$$f(x, y) = i(x, y)\, r(x, y)$$

f(x, y): intensity at the point (x, y)
i(x, y): illumination at the point (x, y)
(the amount of source illumination incident on the scene)
r(x, y): reflectance/transmissivity at the point (x, y)
(the amount of illumination reflected/transmitted by the object)

where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1
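A minimal sketch of the model above, using an invented illumination gradient and checkerboard reflectance purely for illustration:

```python
import numpy as np

M, N = 4, 4
# Illumination i(x, y): a horizontal gradient, 0 < i < infinity.
i = np.linspace(50.0, 200.0, N).reshape(1, N).repeat(M, axis=0)
# Reflectance r(x, y): a checkerboard pattern with values in (0, 1).
r = np.indices((M, N)).sum(axis=0) % 2 * 0.6 + 0.2
f = i * r                 # f(x, y) = i(x, y) * r(x, y), element-wise
print(f.round(1))
```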
. 87
Representing Digital Images
►The representation of an M×N numerical
array as
$$f(x, y) = \begin{bmatrix}
f(0,0) & f(0,1) & \cdots & f(0, N-1) \\
f(1,0) & f(1,1) & \cdots & f(1, N-1) \\
\vdots & \vdots & & \vdots \\
f(M-1, 0) & f(M-1, 1) & \cdots & f(M-1, N-1)
\end{bmatrix}$$
. 88
Representing Digital Images
►The representation of an M×N numerical
array as
$$A = \begin{bmatrix}
a_{0,0} & a_{0,1} & \cdots & a_{0, N-1} \\
a_{1,0} & a_{1,1} & \cdots & a_{1, N-1} \\
\vdots & \vdots & & \vdots \\
a_{M-1, 0} & a_{M-1, 1} & \cdots & a_{M-1, N-1}
\end{bmatrix}$$
. 89
Representing Digital Images
►The representation of an M×N numerical
array in MATLAB
$$f(x, y) = \begin{bmatrix}
f(1,1) & f(1,2) & \cdots & f(1, N) \\
f(2,1) & f(2,2) & \cdots & f(2, N) \\
\vdots & \vdots & & \vdots \\
f(M,1) & f(M,2) & \cdots & f(M, N)
\end{bmatrix}$$
. 90
Representing Digital Images
► Discrete intensity interval [0, L-1], where L = 2^k
► The number b of bits required to store an M × N
digitized image:
b = M × N × k
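A quick check of the storage formula, using an assumed 1024 × 1024 image with k = 8 bits per pixel:

```python
M, N, k = 1024, 1024, 8        # assumed image size and bit depth
L = 2 ** k                     # number of intensity levels in [0, L-1]
b = M * N * k                  # bits required to store the image
print(L, b, b / 8 / 1024**2)   # 256 levels, 8388608 bits, 1.0 MiB
```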
. 91
Representing Digital Images
. 92
Spatial and Intensity Resolution
►Spatial resolution
— A measure of the smallest visible detail in an image
— stated with line pairs per unit distance, dots (pixels) per
unit distance, dots per inch (dpi)
►Intensity resolution
— The smallest discernible change in intensity level
— stated with 8 bits, 12 bits, 16 bits, etc.
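To see intensity resolution in isolation, the following sketch requantizes an assumed 8-bit image to 2^k gray levels (fewer discernible intensity levels, same spatial resolution):

```python
import numpy as np

def reduce_intensity_resolution(img_u8, k):
    """Requantize an 8-bit image to 2**k gray levels (k = 1..8)."""
    step = 256 // (2 ** k)              # width of each quantization bin
    return (img_u8 // step) * step      # map every pixel to its bin's base level

img = np.arange(256, dtype=np.uint8).reshape(16, 16)   # synthetic gradient image
print(np.unique(reduce_intensity_resolution(img, 2)))  # 4 levels: [0 64 128 192]
```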
. 93
Spatial and Intensity Resolution
. 94
Spatial and Intensity Resolution
. 95
Spatial and Intensity Resolution
. 96
Image Interpolation
► Interpolation — Process of using known data
to estimate unknown values
e.g., zooming, shrinking, rotating, and geometric correction
► Interpolation (sometimes called resampling)
— an imaging method to increase (or decrease) the
number of pixels in a digital image.
Some digital cameras use interpolation to produce a larger image
than the sensor captured or to create digital zoom
http://www.dpreview.com/learn/?/key=interpolation
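A minimal nearest-neighbor resampling sketch (one of the simplest interpolation choices; bilinear and bicubic follow the same idea of estimating each output pixel from known input samples):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Resize by nearest-neighbor interpolation: each output pixel copies the closest input pixel."""
    h, w = img.shape[:2]
    rows = (np.arange(new_h) * h / new_h).astype(int)   # map output rows to input rows
    cols = (np.arange(new_w) * w / new_w).astype(int)   # map output cols to input cols
    return img[rows[:, None], cols]

img = np.arange(16).reshape(4, 4)
print(resize_nearest(img, 8, 8))   # zoomed 2x: every input pixel repeated in a 2x2 block
```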
. 97
Examples: Interpolation
. 98
Examples: Interpolation
. 99
Examples: Interpolation
. 100
Examples: Interpolation
. 101
Basic Relationships Between Pixels
► Neighborhood
► Adjacency
► Connectivity
► Paths
► Regions and boundaries
. 102
Basic Relationships Between Pixels
► Neighbors of a pixel p at coordinates (x,y)
 4-neighbors of p, denoted by N4(p):
(x-1, y), (x+1, y), (x,y-1), and (x, y+1).
 4 diagonal neighbors of p, denoted by ND(p):
(x-1, y-1), (x+1, y+1), (x+1,y-1), and (x-1, y+1).
 8 neighbors of p, denoted N8(p)
N8(p) = N4(p) U ND(p)
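A small helper, assuming (x, y) indexes rows and columns of an image and dropping neighbors that fall outside the image, that lists the N4, ND, and N8 sets defined above:

```python
def neighbors(x, y, rows, cols):
    """Return the 4-, diagonal, and 8-neighborhoods of pixel p = (x, y) in a rows x cols image."""
    inside = lambda a, b: 0 <= a < rows and 0 <= b < cols
    n4 = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    nd = [(x - 1, y - 1), (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1)]
    n4 = [p for p in n4 if inside(*p)]
    nd = [p for p in nd if inside(*p)]
    return n4, nd, n4 + nd          # N4(p), ND(p), N8(p) = N4(p) U ND(p)

print(neighbors(0, 0, 3, 3))        # corner pixel: 2 + 1 = 3 neighbors in total
```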
. 103
. 104
. 105
Basic Relationships Between Pixels
► Adjacency
Let V be the set of intensity values
 4-adjacency: Two pixels p and q with values from V are
4-adjacent if q is in the set N4(p).
 8-adjacency: Two pixels p and q with values from V are
8-adjacent if q is in the set N8(p).
. 106
Basic Relationships Between Pixels
► Adjacency
Let V be the set of intensity values
 m-adjacency: Two pixels p and q with values from V are
m-adjacent if
(i) q is in the set N4(p), or
(ii) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose
values are from V.
. 107
Basic Relationships Between Pixels
► Path
 A (digital) path (or curve) from pixel p with coordinates (x0, y0) to
pixel q with coordinates (xn, yn) is a sequence of distinct pixels with
coordinates
(x0, y0), (x1, y1), …, (xn, yn)
where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
 Here n is the length of the path.
 If (x0, y0) = (xn, yn), the path is a closed path.
 We can define 4-, 8-, and m-paths based on the type of adjacency
used.
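As an illustration of paths, this sketch uses breadth-first search to test whether a 4-path of pixels with values in V connects two pixels; the grid and V match the example on the next slides, written here with 0-based coordinates:

```python
from collections import deque

def has_4_path(grid, V, p, q):
    """Return True if a 4-path of pixels with values in V connects pixel p to pixel q."""
    rows, cols = len(grid), len(grid[0])
    if grid[p[0]][p[1]] not in V or grid[q[0]][q[1]] not in V:
        return False
    seen, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == q:
            return True
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):   # N4 neighbors
            if 0 <= nx < rows and 0 <= ny < cols and (nx, ny) not in seen \
                    and grid[nx][ny] in V:
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return False

grid = [[0, 1, 1],
        [0, 2, 0],
        [0, 0, 1]]
# (0, 2) and (2, 2) are the pixels labeled (1,3) and (3,3) in the slides' 1-based indexing.
print(has_4_path(grid, {1, 2}, (0, 2), (2, 2)))   # False: only 8- and m-paths exist here
```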
. 108
Examples: Adjacency and Path
0 1 1
0 2 0
0 0 1
V = {1, 2}
. 109
Examples: Adjacency and Path
0 1 1
0 2 0
0 0 1
V = {1, 2}
8-adjacent
. 110
Examples: Adjacency and Path
0 1 1
0 2 0
0 0 1
V = {1, 2}
8-adjacent m-adjacent
. 111
Examples: Adjacency and Path
0(1,1)  1(1,2)  1(1,3)
0(2,1)  2(2,2)  0(2,3)
0(3,1)  0(3,2)  1(3,3)
V = {1, 2}
8-adjacent m-adjacent
The 8-path from (1,3) to (3,3):
(i) (1,3), (1,2), (2,2), (3,3)
(ii) (1,3), (2,2), (3,3)
The m-path from (1,3) to (3,3):
(1,3), (1,2), (2,2), (3,3)
. 112
Introduction to Mathematical Operations in
DIP
► Array vs. Matrix Operation
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \qquad
B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$

Array product (element-wise; array product operator .*):
$$A \,.\!*\, B = \begin{bmatrix} a_{11} b_{11} & a_{12} b_{12} \\ a_{21} b_{21} & a_{22} b_{22} \end{bmatrix}$$

Matrix product (matrix product operator *):
$$A * B = \begin{bmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{bmatrix}$$
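In NumPy terms (a sketch; MATLAB's `.*` and `*` operators behave analogously), the distinction looks like this:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A * B)   # array (element-wise) product: [[ 5 12], [21 32]]
print(A @ B)   # matrix product:               [[19 22], [43 50]]
```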
. 113
Arithmetic Operations
► Arithmetic operations between images are array
operations. The four arithmetic operations are
denoted as
s(x,y) = f(x,y) + g(x,y)
d(x,y) = f(x,y) – g(x,y)
p(x,y) = f(x,y) × g(x,y)
v(x,y) = f(x,y) ÷ g(x,y)
. 114
Example: Addition of Noisy Images for Noise Reduction
Noiseless image: f(x,y)
Noise: n(x,y) (at every pair of coordinates (x,y), the noise is uncorrelated
and has zero average value)
Corrupted image: g(x,y)
g(x,y) = f(x,y) + n(x,y)
Reducing the noise by adding a set of noisy images,
{gi(x,y)}
$$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$$
. 115
Example: Addition of Noisy Images for Noise Reduction

$$E\{\bar{g}(x, y)\}
= E\left\{ \frac{1}{K} \sum_{i=1}^{K} g_i(x, y) \right\}
= \frac{1}{K} \sum_{i=1}^{K} E\{ f(x, y) + n_i(x, y) \}
= f(x, y) + \frac{1}{K} \sum_{i=1}^{K} E\{ n_i(x, y) \}
= f(x, y)$$

With $\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$ as above, the variance of the average is

$$\sigma^2_{\bar{g}(x, y)}
= \frac{1}{K^2} \sum_{i=1}^{K} \sigma^2_{n_i(x, y)}
= \frac{1}{K}\, \sigma^2_{n(x, y)}$$
. 116
Example: Addition of Noisy Images for Noise Reduction
► In astronomy, imaging under very low light levels
frequently causes sensor noise to render single
images virtually useless for analysis.
► In astronomical observations, a similar effect is achieved
by using sensors to observe the same scene over long
periods of time. Image averaging is then used to reduce
the noise.
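A minimal simulation of the averaging result above; the constant test image, the Gaussian noise with σ = 20, and K = 50 are assumptions chosen only to make the 1/√K effect visible:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                 # noiseless image f(x, y)
K = 50
noisy = [f + rng.normal(0.0, 20.0, f.shape)  # g_i(x, y) = f(x, y) + n_i(x, y)
         for _ in range(K)]

g_bar = np.mean(noisy, axis=0)               # average of the K noisy images
print(np.std(noisy[0] - f))                  # ~20       (single-image noise sigma)
print(np.std(g_bar - f))                     # ~20/sqrt(K) ~ 2.8
```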
. 117
. 118
An Example of Image Subtraction: Mask Mode
Radiography
Mask h(x,y): an X-ray image of a region of a patient’s body
Live images f(x,y): X-ray images captured at TV rates after injection
of the contrast medium
Enhanced detail g(x,y)
g(x,y) = f(x,y) - h(x,y)
The procedure gives a movie showing how the contrast medium
propagates through the various arteries in the area being
observed.
. 119
. 120
An Example of Image Multiplication
. 121
Set and Logical Operations
. 122
Set and Logical Operations
► Let A be the elements of a gray-scale image
The elements of A are triplets of the form (x, y, z), where
x and y are spatial coordinates and z denotes the
intensity at the point (x, y):
$$A = \{ (x, y, z) \mid z = f(x, y) \}$$
► The complement of A is denoted A^c:
$$A^c = \{ (x, y, K - z) \mid (x, y, z) \in A \}$$
where K = 2^k - 1 and k is the number of intensity bits used to represent z.
. 123
Set and Logical Operations
► The union of two gray-scale images (sets) A and B is
defined as the set
$$A \cup B = \{ \max_z(a, b) \mid a \in A,\ b \in B \}$$
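For 8-bit images these set operations reduce to simple array expressions; a sketch with two invented 2 × 2 images:

```python
import numpy as np

k = 8
K = 2 ** k - 1                          # 255 for 8-bit images
A = np.array([[ 10, 200],
              [120,  30]], dtype=np.uint8)
B = np.array([[ 50,  60],
              [180,  20]], dtype=np.uint8)

complement_A = K - A                    # A^c: replace z by K - z at every (x, y)
union_AB     = np.maximum(A, B)         # A U B: element-wise maximum
print(complement_A)
print(union_AB)
```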
. 124
Set and Logical Operations
. 125
Set and Logical Operations
. 126
Spatial Operations
► Single-pixel operations
Alter the values of an image’s pixels based on the intensity.
e.g., s = T(z)
. 127
Spatial Operations
► Neighborhood operations
The value of the output pixel at (x, y) is
determined by a specified operation
involving the pixels of the input image
whose coordinates lie in a neighborhood
S_xy of (x, y)
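A sketch of a neighborhood operation in which each output pixel is the mean of the 3 × 3 neighborhood S_xy in the input image (border handling by edge replication is an arbitrary choice here):

```python
import numpy as np

def local_mean_3x3(img):
    """Neighborhood operation: output(x, y) = mean of the 3x3 window S_xy around (x, y)."""
    img = img.astype(float)
    out = np.zeros_like(img)
    padded = np.pad(img, 1, mode='edge')       # replicate borders so every pixel has a full window
    rows, cols = img.shape
    for x in range(rows):
        for y in range(cols):
            out[x, y] = padded[x:x + 3, y:y + 3].mean()
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
print(local_mean_3x3(img))
```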
. 128
Spatial Operations
► Neighborhood operations
. 129
Geometric Spatial Transformations
► Geometric transformation (rubber-sheet
transformation)
— A spatial transformation of coordinates
— intensity interpolation that assigns intensity values to the spatially
transformed pixels.
► Affine transform
$$(x, y) = T\{(v, w)\}$$

$$\begin{bmatrix} x & y & 1 \end{bmatrix}
= \begin{bmatrix} v & w & 1 \end{bmatrix} \mathbf{T}
= \begin{bmatrix} v & w & 1 \end{bmatrix}
\begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix}$$
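A sketch of the affine form above applied to a few points in homogeneous coordinates; the 45° rotation matrix is just an assumed example of T:

```python
import numpy as np

theta = np.deg2rad(45)                         # assumed: pure rotation by 45 degrees
T = np.array([[ np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [             0,             0, 1]])   # [t11 t12 0; t21 t22 0; t31 t32 1]

vw1 = np.array([[0, 0, 1],
                [1, 0, 1],
                [0, 1, 1]])                    # input points (v, w) in homogeneous form
xy1 = vw1 @ T                                  # [x y 1] = [v w 1] T
print(xy1[:, :2].round(3))                     # transformed (x, y) coordinates
```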
. 130
. 131
Intensity Assignment
► Forward Mapping
$$(x, y) = T\{(v, w)\}$$
It's possible that two or more pixels can be transformed to the
same location in the output image.
► Inverse Mapping
$$(v, w) = T^{-1}\{(x, y)\}$$
Uses the nearest input pixels to determine the intensity of the
output pixel value.
Inverse mappings are more efficient to implement than forward
mappings.
. 132
Example: Image Rotation and Intensity
Interpolation
. 133
Image Registration
► Input and output images are available but the
transformation function is unknown.
Goal: estimate the transformation function and use it to
register the two images.
► One of the principal approaches for image registration
is to use tie points (also called control points)
 The corresponding points are known precisely in the
input and output (reference) images.
. 134
Image Registration
► A simple model based on bilinear approximation:
$$x = c_1 v + c_2 w + c_3 vw + c_4$$
$$y = c_5 v + c_6 w + c_7 vw + c_8$$

where (v, w) and (x, y) are the coordinates of
tie points in the input and reference images.
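Given four or more tie-point pairs, the eight coefficients can be estimated by least squares; a minimal sketch with made-up tie points (a pure translation, so the recovered coefficients are easy to check):

```python
import numpy as np

# Assumed tie points: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
xy = vw + np.array([10.0, -5.0])               # reference points: input shifted by (+10, -5)

# Design matrix for x = c1*v + c2*w + c3*v*w + c4 (and similarly for y with c5..c8).
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(len(vw))])
c_x, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)
c_y, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)
print(c_x.round(6), c_y.round(6))              # ~[1 0 0 10] and ~[0 1 0 -5]
```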
. 135
Image Registration
. 136
Image Transform
► A particularly important class of 2-D linear transforms,
denoted T(u, v)
$$T(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, r(x, y, u, v)$$

where f(x, y) is the input image,
r(x, y, u, v) is the forward transformation kernel,
u and v are the transform variables,
u = 0, 1, 2, ..., M-1 and v = 0, 1, ..., N-1.
. 137
Image Transform
► Given T(u, v), the original image f(x, y) can be recovered
using the inverse transformation of T(u, v):

$$f(x, y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} T(u, v)\, s(x, y, u, v)$$

where s(x, y, u, v) is the inverse transformation kernel,
x = 0, 1, 2, ..., M-1 and y = 0, 1, ..., N-1.
. 138
Image Transform
. 139
Example: Image Denoising by Using DCT
Transform
. 140
Forward Transform Kernel
$$T(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, r(x, y, u, v)$$

The kernel r(x, y, u, v) is said to be SEPARABLE if
$$r(x, y, u, v) = r_1(x, u)\, r_2(y, v)$$

In addition, the kernel is said to be SYMMETRIC if r_1(x, u) is
functionally equal to r_2(y, v), so that
$$r(x, y, u, v) = r_1(x, u)\, r_1(y, v)$$
. 141
The Kernels for 2-D Fourier Transform
The forward kernel:
$$r(x, y, u, v) = e^{-j 2\pi (ux/M + vy/N)}$$
where $j = \sqrt{-1}$

The inverse kernel:
$$s(x, y, u, v) = \frac{1}{MN}\, e^{\,j 2\pi (ux/M + vy/N)}$$
. 142
2-D Fourier Transform
$$T(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j 2\pi (ux/M + vy/N)}$$

$$f(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} T(u, v)\, e^{\,j 2\pi (ux/M + vy/N)}$$
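The same transform pair is available directly in NumPy; a short sketch confirming that the inverse DFT recovers the image and that T(0, 0) equals the sum of all pixels:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((8, 8))                  # small test image f(x, y)

T = np.fft.fft2(f)                      # T(u, v): 2-D forward DFT (kernel e^{-j2pi(ux/M + vy/N)})
f_rec = np.fft.ifft2(T).real            # inverse DFT (with the 1/MN factor) recovers f(x, y)

print(np.allclose(f, f_rec))            # True
print(abs(T[0, 0]), f.sum())            # DC term equals the sum of all pixels
```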
. 143
Probabilistic Methods
Let z_i, i = 0, 1, 2, ..., L-1, denote the values of all possible intensities
in an M × N digital image. The probability, p(z_k), of intensity level z_k
occurring in a given image is estimated as

$$p(z_k) = \frac{n_k}{MN}$$

where n_k is the number of times that intensity z_k occurs in the image, and

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$
. 144
Probabilistic Methods
The variance of the intensities is given by

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The n-th moment of the intensity variable z is

$$u_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$
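A sketch that estimates p(z_k) from an image histogram and then computes the mean, variance, and an n-th central moment; the random 8-bit test image is only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(128, 128))     # assumed 8-bit test image
L = 256

n_k = np.bincount(img.ravel(), minlength=L)     # how often each intensity z_k occurs
p = n_k / img.size                              # p(z_k) = n_k / (M*N); sums to 1
z = np.arange(L)

m = np.sum(z * p)                               # mean intensity
var = np.sum((z - m) ** 2 * p)                  # variance (2nd central moment)
u3 = np.sum((z - m) ** 3 * p)                   # 3rd central moment

print(m, np.sqrt(var))                          # compare with img.mean(), img.std()
```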
. 145
Example: Comparison of Standard
Deviation Values
σ = 31.6        σ = 14.3        σ = 49.2   (standard deviations of the three example images)
. 146
Homework
http://cramer.cs.nmt.edu/~ip/assignments.html


Editor's Notes

  • #4 The physical layer is the first layer of the Open System Interconnection Model (OSI Model). The physical layer deals with bit-level transmission between different devices and supports electrical or mechanical interfaces connecting to the physical medium for synchronized communication.
  • #8 Discrete in science is the opposite of continuous: something that is separate; distinct; individual
  • #16 Decompression: the process of expanding computer data to its normal size so that it can be read by a computer. Modulation is the process of encoding information in a transmitted signal, while demodulation is the process of extracting information from the transmitted signal. Encoding: convert (information or an instruction) into a particular form.
  • #17 ADC: Analogue-to-Digital Converter. Audio amp: a device used to increase the amplitude (volume) of audio signals.
  • #21 Distortion: a signal that is not in its original form (i.e., deformed, misrepresented).
  • #22 Seismology is the study of earthquakes and seismic waves that move through and around the Earth. Sonar uses sound waves to 'see' in the water.
  • #23 A consumer application is a computer application that consumes (invokes) assets (services, BPEL processes or XML schemas) at run time. Typically, you create consumer applications to specify the consumers that are allowed to consume a particular asset
  • #24 Multiplexing, or muxing, is a way of sending multiple signals or streams of information over a communications link at the same time in the form of a single, complex signal. In the context of Digital Signal Processing (DSP), flexibility refers to the ability of DSP systems and architectures to adapt and efficiently process various types of signals and algorithms.
  • #28 Raster: a rectangular pattern of parallel scanning lines followed by the electron beam on a television screen or computer monitor.
  • #31 In a spatial coordinate system like this, locations in an image are positions on a plane, and they are described in terms of x and y. The value of a pixel is its intensity. The range of intensity values is from 0 (black) to 255 (white). In terms of a grayscale image, the intensity of a pixel corresponds to its brightness: the greater the intensity, the greater the brightness. This also means that increasing intensity can be viewed as brightening the image (while decreasing intensity can be viewed as darkening the image).
  • #35 EDT: Eastern Daylight Time. Eastern Daylight Time (EDT) is 4 hours behind Coordinated Universal Time (UTC). This time zone is a Daylight Saving Time zone and is used in North America and the Caribbean. Lunar: relating to the moon.
  • #36 Probe: an investigation (here, an unmanned exploratory spacecraft). Ranger 7 was the first space probe of the United States to successfully transmit close images of the lunar surface back to Earth.
  • #37 Primitive: elementary, initial. Cognitive: relating to knowledge. Computer Vision: deals with the development of the theoretical and algorithmic basis by which useful information about the 3D world can be automatically extracted and analyzed from a single image or multiple 2D images of the world. Ensemble: a combination; things grouped together.
  • #39 Wiener filter algorithm, returning a deblurred image. Deblurring with an inverse filter.
  • #40 The Hubble Space Telescope helps scientists understand how planets and galaxies form.
  • #43 face tracker can automatically detect faces and keep track of facial points in real-time.
  • #44 Interpretation: explaining the meaning of something. Forensics: relating to courts of law. Acquisition: the act of obtaining. Archiving: preserving a record or document as proof of something. Reconnaissance: surveillance, spying. Fibre: a delicate thread or filament. Photo identification or photo ID is an identity document that includes a photograph of the holder, usually only his or her face. DNA: deoxyribonucleic acid.
  • #45 Cultivation: growing crops. Yields: produce, output. Navigation: determining direction and position. Remote Sensing: the scanning of the earth by satellite or high-flying aircraft in order to obtain information about it. Terrain Rendering: the process of depicting a region of terrain. Altitude: height.
  • #46 Acoustic is the study of sound. Underwater acoustic plays an important role in understanding the undersea environment. Ultrasound imaging (sonography) uses high-frequency sound waves to view inside the body. Because ultrasound images are captured in real-time, they can also show movement of the body's internal organs as well as blood flowing through the blood vessels. ...
  • #47 Astronomy: the study of celestial objects. A light microscope uses visible light and a system of lenses to magnify images of small subjects. MRI stands for magnetic resonance imaging, and it is a kind of scan that can produce detailed pictures of parts of the body, including the brain. Nuclear medicine is a branch of medical imaging that uses small amounts of radioactive material to diagnose and determine the severity of or treat a variety of diseases, including many types of cancers, heart disease, neurological disorders and other abnormalities within the body.
  • #49 Positron emission tomography (PET) is a nuclear imaging technology (also referred to as molecular imaging) that enables visualization of metabolic processes in the body. The Cygnus Loop (also known as the Veil Nebula) is a supernova remnant, the trash of the explosive death of a massive star about ten to twenty thousand years ago. 
  • #51 Corn: maize. Smut: a black mark (here, a fungal disease of corn). This image shows a small portion of a nebula (a nebula is a giant cloud of dust and gas in space) called the Cygnus Loop.
  • #52 A light microscope uses focused light and lenses to magnify a specimen, usually a cell. In this way, a light microscope is much like a telescope, except that instead of the object being very large and very far away, it is very small and very close to the lens.
  • #56 Cereal: grain. Intraocular implant: a lens implanted inside the eye.
  • #59 Spine: the backbone. MRI = magnetic resonance imaging.
  • #60 The thyroid gland is a butterfly-shaped organ located in the base of your neck. It releases hormones that control metabolism—the way your body uses energy. Metabolism is the process by which your body converts what you eat and drink into energy. Lesion: a wound or area of damaged tissue.
  • #64 Digital imaging or digital image acquisition is the creation of photographic images, such as of a physical scene or of the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing, and display of such images. Acquisition: the act of obtaining.
  • #66 Image enhancement is the process of digitally manipulating a stored image using software.
  • #68 Image Restoration is the operation of taking a corrupt/noisy image and estimating the clean, original image.
  • #70 Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image.
  • #72 In computer vision, image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as super-pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.
  • #74 Object recognition is a computer vision technique for identifying objects in images or videos. Object recognition is a key output of deep learning and machine learning algorithms
  • #76 Segmentation is to subdivide an image into constituent regions or objects. ... After an image is segmented into regions, each region is represented and described in a form suitable for further computer processing.
  • #77 Segmentation is to subdivide an image into constituent regions or objects. ... After an image is segmented into regions, each region is represented and described in a form suitable for further computer processing.
  • #78 Image compression is a type of data compression applied to digital images, to reduce their cost ...Compression algorithms require different amounts of processing power to encode and decode.
  • #80 A (digital) color image is a digital image that includes color information for each pixel. For visually acceptable results, it is necessary (and almost sufficient) to provide three samples (color channels) for each pixel, which are interpreted as coordinates in some color space.
  • #84 Obscured
  • #106 Assignment: solve one practical question and be ready to present it in class.