Sahil Biswas
DTU/2K12/ECE-150
Mentor: Mr. Avinash Ratre
CONTENTS
This presentation covers:
• What is a digital image?
• What is digital image processing?
• History of digital image processing
• State-of-the-art examples of digital image processing
• Key stages in digital image processing
• Face detection
WHAT IS A DIGITAL IMAGE?
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.
Pixel values typically represent gray levels, colours, heights, opacities, etc.
Remember that digitization implies a digital image is only an approximation of a real scene.

Common image formats include:
• 1 sample per point (B&W or grayscale)
• 3 samples per point (Red, Green, and Blue)
• 4 samples per point (Red, Green, Blue, and “Alpha”, a.k.a. opacity)

For most of this presentation we will focus on greyscale images.
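As a concrete illustration of these formats, here is a minimal sketch (assuming Pillow and NumPy; “photo.png” is a placeholder filename) that loads an image and inspects the number of samples per point:

# Minimal sketch (Pillow + NumPy assumed; "photo.png" is a placeholder file)
# showing the three pixel formats listed above as array shapes.
import numpy as np
from PIL import Image

img = Image.open("photo.png")

grey = np.asarray(img.convert("L"))     # 1 sample per point: shape (H, W)
rgb  = np.asarray(img.convert("RGB"))   # 3 samples per point: shape (H, W, 3)
rgba = np.asarray(img.convert("RGBA"))  # 4 samples per point: shape (H, W, 4)

print(grey.shape, rgb.shape, rgba.shape)
print(grey[0, 0])                       # one pixel of the greyscale image: a single grey level (0-255)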
WHAT IS DIGITAL IMAGE PROCESSING?
Digital image processing focuses on two major tasks:
• Improvement of pictorial information for human interpretation
• Processing of image data for storage, transmission and representation for autonomous machine perception
There is some argument about where image processing ends and fields such as image analysis and computer vision start.
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes:

Low-level processes: input is an image, output is an image. Examples: noise removal, image sharpening.
Mid-level processes: input is an image, output is attributes. Examples: segmentation, object recognition.
High-level processes: input is attributes, output is understanding. Examples: scene understanding, autonomous navigation.
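To make the distinction concrete, the sketch below (assuming NumPy and SciPy, with synthetic data so it is self-contained) shows a low-level step that maps an image to an image and a mid-level step that maps an image to attributes:

# Minimal sketch of the low- vs mid-level distinction (NumPy + SciPy assumed).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
grey = rng.normal(20, 5, size=(64, 64))    # noisy background
grey[10:20, 10:20] += 100                  # two bright "objects"
grey[40:55, 30:45] += 100

# Low-level process: image in, image out (noise removal).
denoised = ndimage.median_filter(grey, size=3)

# Mid-level process: image in, attributes out (segment and count objects).
labels, num_objects = ndimage.label(denoised > 60)
print("objects found:", num_objects)       # an attribute, not an image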
HISTORY OF DIGITAL IMAGE PROCESSING
Early 1920s: One of the first applications of digital imaging was in the newspaper industry.
• The Bartlane cable picture transmission service
• Images were transferred by submarine cable between London and New York
• Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer

Early digital image
Mid to late 1920s: Improvements to the Bartlane system resulted in higher-quality images.
• New reproduction processes based on photographic techniques
• Increased number of tones in reproduced images

Improved digital image
Early 15-tone digital image
1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing.
• 1964: Computers were used to improve the quality of images of the moon taken by the Ranger 7 probe
• Such techniques were used in other space missions, including the Apollo landings

A picture of the moon taken by the Ranger 7 probe minutes before impact
1970s: Digital image processing begins to be used in medical applications.
• 1979: Sir Godfrey N. Hounsfield and Prof. Allan M. Cormack share the Nobel Prize in Medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans

Typical head-slice CAT image
1980s to today: The use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in all kinds of areas:
• Image enhancement/restoration
• Artistic effects
• Medical visualisation
• Industrial inspection
• Law enforcement
• Human-computer interfaces
EXAMPLES: IMAGE ENHANCEMENT
One of the most common uses of DIP techniques is to improve image quality, remove noise, etc.
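A minimal sketch of the noise-removal use case; the slides do not prescribe a method, so a median filter applied to synthetic salt-and-pepper noise is assumed here (NumPy and SciPy):

# Sketch of the "remove noise" use case: synthetic salt-and-pepper noise
# cleaned up with a median filter (the choice of filter is an assumption).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))     # simple test image
noisy = clean.copy()
spots = rng.random(clean.shape) < 0.05                   # corrupt 5% of pixels
noisy[spots] = rng.choice([0.0, 255.0], size=int(spots.sum()))

restored = ndimage.median_filter(noisy, size=3)          # noise removal
print("mean abs error before:", np.abs(noisy - clean).mean())
print("mean abs error after: ", np.abs(restored - clean).mean())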
EXAMPLES: THE HUBBLE TELESCOPE
Launched in 1990, the Hubble telescope can take images of very distant objects.
However, a flaw in its mirror made many of Hubble’s images useless; image processing techniques were used to fix this.
EXAMPLES: ARTISTIC EFFECTS
Artistic effects are used to make images more visually appealing, to add special effects and to make composite images.
EXAMPLES: MEDICINE
Take a slice from an MRI scan of a canine heart and find the boundaries between types of tissue:
• Start with an image whose gray levels represent tissue density
• Use a suitable filter to highlight the edges (see the sketch below)

Original MRI image of a dog heart
Edge-detection image
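The sketch below shows one possible choice of edge-highlighting filter, a Sobel gradient magnitude (an assumption; the slide does not name the filter). Here mri_slice stands for any 2-D grayscale array:

# Edge highlighting via Sobel gradient magnitude (one "suitable filter";
# the choice of Sobel is an assumption, the slide does not name a filter).
import numpy as np
from scipy import ndimage

def edge_map(mri_slice):
    """Return a gradient-magnitude image that highlights tissue boundaries."""
    img = np.asarray(mri_slice, dtype=np.float64)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient
    gy = ndimage.sobel(img, axis=0)   # vertical gradient
    return np.hypot(gx, gy)           # combined edge strength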
EXAMPLES: GIS
Geographic Information Systems
• Digital image processing techniques are used extensively to manipulate satellite imagery
• Terrain classification
• Meteorology
EXAMPLES: GIS (CONT…)
Night-Time Lights of the World data set
• A global inventory of human settlement
• It is not hard to imagine the kind of analysis that might be done using this data
EXAMPLES: INDUSTRIAL INSPECTION
Human operators are expensive, slow and unreliable, so we make machines do the job instead.
Industrial vision systems are used in all kinds of industries.
Can we trust them?
EXAMPLES: PCB INSPECTION
Printed Circuit Board (PCB) inspection
• Machine inspection is used to determine that all components are present and that all solder joints are acceptable
• Both conventional imaging and X-ray imaging are used
EXAMPLES: LAW ENFORCEMENT
Image processing techniques are used extensively in law enforcement:
• Number-plate recognition for speed cameras and automated toll systems
• Fingerprint recognition
• Enhancement of CCTV images
EXAMPLES: HCI
The aim is to make human-computer interfaces more natural:
• Face recognition
• Gesture recognition
Does anyone remember the user interface from “Minority Report”?
These tasks can be extremely difficult.
KEY STAGES IN DIGITAL IMAGE PROCESSING
Starting from the problem domain, the key stages are:
• Image acquisition
• Image enhancement
• Image restoration
• Colour image processing
• Image compression
• Morphological processing
• Segmentation
• Representation & description
• Object recognition
KEY STAGES IN DIGITAL IMAGE PROCESSING: IMAGE ACQUISITION
(The key-stages flowchart is repeated on this and each of the following slides, with the named stage as the focus.)
KEY STAGES IN DIGITAL IMAGE PROCESSING: IMAGE ENHANCEMENT
KEY STAGES IN DIGITAL IMAGE PROCESSING: IMAGE RESTORATION
KEY STAGES IN DIGITAL IMAGE PROCESSING: MORPHOLOGICAL PROCESSING
KEY STAGES IN DIGITAL IMAGE PROCESSING: SEGMENTATION
KEY STAGES IN DIGITAL IMAGE PROCESSING: OBJECT RECOGNITION
KEY STAGES IN DIGITAL IMAGE PROCESSING: REPRESENTATION & DESCRIPTION
KEY STAGES IN DIGITAL IMAGE PROCESSING: IMAGE COMPRESSION
KEY STAGES IN DIGITAL IMAGE PROCESSING: COLOUR IMAGE PROCESSING
AUTOMATIC FACE RECOGNITION USING COLOR-BASED SEGMENTATION

Given a digital image, detect the presence of faces in the image and output their locations.
BASIC SYSTEM SUMMARY

• Initial Design
  • A reduced Eigenface-based coordinate system defines a “face space”, with each possible face a point in that space.
  • Using training images, find the coordinates of faces/non-faces and train a neural-net classifier.
  • This was abandoned due to problems with the neural network: lack of transparency and poor generalization.
  • It was replaced with our secondary design strategy:

• Final System
Input Image → Color-space Based Segmentation → Morphological Image Processing → Matched Filtering → Peak/Face Detector → Face Estimates
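The following is a minimal skeleton, not the original implementation, of how the stages above might be composed (NumPy and SciPy assumed). The color-space segmentation is a stub filled in by the rules on the next slides, the morphological step is reduced to a simple binary opening, and the 0.8 peak threshold is purely an illustrative assumption:

# Skeleton of the final system's flow. Only the ordering of stages comes
# from the slide; the details here are placeholder choices.
import numpy as np
from scipy import ndimage

def detect_faces(rgb_image, face_template):
    mask = color_space_segmentation(rgb_image)            # color-space based segmentation
    mask = ndimage.binary_opening(mask)                    # morphological image processing (cleanup)
    response = ndimage.correlate(mask.astype(float),       # matched filtering with a face-shaped template
                                 face_template.astype(float))
    peaks = response > 0.8 * response.max()                # peak/face detector (0.8 is an assumed threshold)
    return np.argwhere(peaks)                              # face location estimates as (row, col)

def color_space_segmentation(rgb_image):
    # Combines the HSV, YCbCr and RGB rules quoted on the next slides;
    # see the sketches that follow those slides.
    raise NotImplementedError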
H VS. S VS. V (FACE VS. NON-FACE)

For faces, the hue value typically occupies the range H < 19 or H > 240.
We use this fact to remove some of the non-face pixels in the image.
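A minimal sketch of this hue test, assuming scikit-image for the RGB-to-HSV conversion and assuming the quoted thresholds are on a 0-255 hue scale:

# Hue-based pruning of non-face pixels (scikit-image assumed; the 0-255
# hue scale for the thresholds H < 19 or H > 240 is an assumption).
import numpy as np
from skimage import color

def hue_face_mask(rgb):
    """Boolean mask of pixels whose hue lies in the quoted face-like range."""
    hsv = color.rgb2hsv(rgb)        # H, S, V channels, each in [0, 1]
    h = hsv[..., 0] * 255.0         # rescale hue to the slide's 0-255 range
    return (h < 19) | (h > 240)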
Y VS. CR VS. CB

In the same manner, we found empirically that in the YCbCr space the face pixels occupy the ranges
102 < Cb < 128
125 < Cr < 160
Any other pixels were assumed to be non-face and removed.
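Similarly, a sketch of the chrominance test (scikit-image assumed; its rgb2ycbcr uses MATLAB-style scaling, which appears to match the quoted ranges):

# Chrominance-based pruning (scikit-image assumed).
import numpy as np
from skimage import color

def ycbcr_face_mask(rgb):
    """Keep pixels with 102 < Cb < 128 and 125 < Cr < 160."""
    ycbcr = color.rgb2ycbcr(rgb)    # channels ordered Y, Cb, Cr
    cb = ycbcr[..., 1]
    cr = ycbcr[..., 2]
    return (cb > 102) & (cb < 128) & (cr > 125) & (cr < 160)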
R VS. G VS. B

Finally, we found some useful trends in the RGB space as well. The following rules were used to further isolate face candidates:
0.836·G − 14 < B < 0.836·G + 44
0.89·G − 67 < B < 0.89·G + 42
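And a sketch of the RGB rules, together with one possible way (an assumption) of combining all three color-space tests into a single candidate mask, reusing the mask functions sketched above:

# RGB rules from this slide, and a combined face-candidate mask built from
# all three color-space tests (the AND-combination is an assumption).
import numpy as np

def rgb_face_mask(rgb):
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    rule1 = (b > 0.836 * g - 14) & (b < 0.836 * g + 44)
    rule2 = (b > 0.89 * g - 67) & (b < 0.89 * g + 42)
    return rule1 & rule2

def combined_face_mask(rgb):
    # Reuses hue_face_mask and ycbcr_face_mask from the earlier sketches.
    return hue_face_mask(rgb) & ycbcr_face_mask(rgb) & rgb_face_mask(rgb)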
REMOVAL OF LOWER REGION – ATTEMPT TO AVOID POSSIBLE FALSE DETECTIONS

Just as we used information regarding face color, orientation, and scale from the training images, we also allowed ourselves to assume that faces were unlikely to appear in the lower portion of the visual field. We removed that region to help reduce the possibility of false detections.
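A sketch of this step; the slides do not give the cut-off, so keeping the top 75% of rows is purely an illustrative assumption:

# Remove the lower portion of the candidate mask (the 75% cut-off is an
# assumption; the slides do not specify where the region boundary lies).
import numpy as np

def remove_lower_region(mask, keep_fraction=0.75):
    out = np.array(mask, dtype=bool, copy=True)
    cutoff = int(out.shape[0] * keep_fraction)
    out[cutoff:, :] = False            # discard candidates near the bottom of the frame
    return out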
CONCLUSIONS
• In most cases, effective use of color-space/face-color relationships and morphological processing allowed effective pre-processing.
• For the images trained on, the system was able to detect faces with reasonable accuracy, miss rate and false-alarm rate.
• Adaptive adjustment of template scale, angle, and threshold allowed most faces to be detected.
REFERENCES

• R. Gonzalez and R. Woods, “Digital Image Processing”, 2nd Edition, Prentice Hall, 2002.
• C. Garcia et al., “Face Detection in Color Images Using Wavelet Packet Analysis”.
• D. Vernon, “Machine Vision: Automated Visual Inspection and Robot Vision”, Prentice Hall, 1991. Available online at: homepages.inf.ed.ac.uk/rbf/BOOKS/VERNON/

Editor's Notes

• Slide “What is a digital image?”: The real world is continuous; an image is simply a digital approximation of this.
• Slide on the image processing continuum: Give the analogy of a character recognition system. Low level: cleaning up the image of some text. Mid level: segmenting the text from the background and recognising individual characters. High level: understanding what the text says.