SENSORS FOR ROBOTICS
Introduction to Vision System
Lecture 8
Dr. Hema C.R.
Road Map
• Vision Optics
• Image Sensors
• Frame Grabbers
• Lighting
• Image Formation and Geometry
• Sampling and Quantization
• Image Acquisition
Vision Optics
• Vision Systems
• Stand-alone
• PC-based
• Smart Cameras
• Self-contained [no PC required]
• CCD image sensors
• CMOS image sensors
• Vision Sensors
• Integrated devices
• No programming required
• Fall between smart cameras and full vision systems
• Digital Cameras
• CCD image sensors
• CMOS image sensors
• Flash memory
• Memory Stick
• SmartMedia cards
• Removable media [microdrives, CD, DVD]
[Figure: example products - a neural network-based ZiCAM from JAI Pulnix, a Compact Vision System from National Instruments, and a Cognex In-Sight vision sensor]
Imaging Sensors
• Image sensors convert light into electric charge and process it into electronic signals
• Image Sensors
• Charge Coupled Device [CCD]
• All pixels are devoted to light capture
• Output is uniform
• High image quality
• Traditionally used in professional and industrial cameras
• Complementary Metal Oxide Semiconductor [CMOS]
• Pixels devoted to light capture are limited
• Output is less uniform
• Image quality traditionally lower than CCD
• Widely used in cell phone cameras
Frame Grabbers
• A frame grabber is a device that acquires [grabs] analog video signals and converts them into digital images. Modern frame grabbers offer many additional features, such as more storage and multiple camera links.
Frame Grabbers
• A typical frame grabber consists of
• a circuit to recover the horizontal and vertical synchronization pulses from the input signal
• an analog-to-digital converter
• a color decoder circuit, a function that can also be implemented in software
• some memory for storing the acquired image (the frame buffer)
• a bus interface through which the main processor can control the acquisition and access the data.
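As a loose software analogue of the grab-and-buffer pipeline above (not part of the lecture; a real frame grabber digitizes an analog signal in hardware, while OpenCV's VideoCapture already returns digital frames), a minimal Python sketch:

import cv2  # OpenCV, assumed available for this illustration

cap = cv2.VideoCapture(0)          # open the first attached camera (the "grab" source)
frame_buffer = []                  # plays the role of the frame buffer
if cap.isOpened():
    for _ in range(10):            # acquire ten frames
        ok, frame = cap.read()     # each frame arrives as a digital pixel array
        if ok:
            frame_buffer.append(frame)
cap.release()

The host program takes the place of the bus interface here: it triggers each acquisition and reads the data back from the buffer.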
Lighting
• Correct lighting is the single most important design parameter in
a vision system
• Selection of a light source for a vision application is governed
by three factors:
• The type of features that must be captured by the vision system
• Whether the part will be moving or stationary when the image is captured.
• The degree of visibility of the environment in which the image is
captured.
Lighting Techniques
• The three lighting techniques used in vision applications are:
• Front lighting
• Back lighting
• Structured lighting
Front Lighting Sources
• Spot lighting to check chip orientation in embossed tape
• Ring-shaped lighting to detect loose caps
• Tube lighting to detect stains on sheets
• Area lighting to detect hole positions in lead frames
Image Formation
Image formation in the eye and the camera
• Understanding the function of the human eye provides insight into robot vision solutions
• Biological vision is the process of using light reflected from the surrounding world as a way of modifying behavior
Image Formation in the eye
• Light enters through the cornea
• It passes through the aqueous humor, the lens and the vitreous humor
• It finally forms an image on the retina
• The lens adjusts to focus the image directly on the retina
• The retina is a complex tiling of photoreceptors known as rods and cones
• When stimulated by light, they produce electrical signals that are transmitted to the brain by the optic nerve
Refer: http://homepages.inf.ed.ac.uk/rbf/CVonline/
Image Formation: Pinhole Camera
• The camera is analogous to the eye
• A pinhole camera has a small hole through which light enters before forming an inverted image
• Pinhole cameras are commonly modeled by placing the image plane between the focal point of the camera and the object, so that the image is not inverted
Image Geometry
• Image formation has two aspects
• The geometry of image formation
• The physics of light [the brightness of a point]
• Image geometry determines where a world point is projected on the image plane
Image Geometry
• An object point is represented by its 3D coordinates x, y and z
• The image plane is parallel to the world x and y axes at a distance f [the focal length]
[Figure: perspective projection geometry - the object point (x, y, z) projects to the image point (x′, y′) on an image plane at distance f from the focal point]
• r = √(x² + y²) is the distance of the object point from the optical [z] axis
• r′ = √(x′² + y′²) is the corresponding distance of the image point
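From the similar triangles in the figure above, the standard pinhole-projection relations follow (a brief reconstruction of the textbook result; the slide itself only defines r and r′):

\[ \frac{r'}{f} = \frac{r}{z} \quad\Longrightarrow\quad x' = \frac{f\,x}{z}, \qquad y' = \frac{f\,y}{z} \]

so the image coordinates of a world point scale with the focal length and shrink with its distance z from the camera.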
Image Sampling
• Continuous images are sampled to convert them to digital
form
• Each image sample is called a pixel [picture element]
• Sampling is the process of representing a continuous signal
by a set of samples taken at discrete intervals of time
[sampling interval]
[Figure: a continuous signal and the corresponding sampled signal]
• Sampling frequency: fs = 1/T, where T is the sampling interval
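A minimal sampling sketch in NumPy (the 5 Hz sine wave, the interval T and all variable names are illustrative assumptions, not from the lecture):

import numpy as np

T = 0.01                               # sampling interval in seconds (assumed)
fs = 1.0 / T                           # sampling frequency, fs = 1/T as on the slide
t = np.arange(0.0, 1.0, T)             # discrete sampling instants
samples = np.sin(2 * np.pi * 5.0 * t)  # samples of a continuous 5 Hz sine wave

For images the same idea applies in space rather than time: each spatial sample becomes one pixel.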
Image Quantization
• Quantization is the process of converting analog pixel intensities into discrete-valued integer numbers
• Quantization assigns a single value to each sample in such a way that the image reconstructed from the quantized values is a good approximation of the original
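A minimal quantization sketch in NumPy, mapping continuous intensities in [0, 1) to 256 integer levels (the array names and the 8-bit choice are assumptions for illustration):

import numpy as np

analog = np.random.rand(4, 4)                           # continuous pixel intensities in [0, 1)
levels = 256                                            # 8-bit quantization
quantized = np.floor(analog * levels).astype(np.uint8)  # integer codes 0..255
reconstructed = (quantized + 0.5) / levels              # mid-level values used to rebuild the image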
Image Acquisition
• Image acquisition is the first stage of a vision system
• The acquired image depends on
• The nature of the sensing device
• Vidicon, CCD, infrared, grayscale, color
• The properties of the device
• Sensitivity, resolution, lenses, stability, focus
• The lighting of the scene
• Shadows, excessive reflection, poor contrast
• The environment
• Dust, fog, humidity
• The reflective properties of the objects
• Texture, color, specularity
Image Acquisition
• Two Dimensional Images
• Three Dimensional Images
Image Acquisition
• Acquisition [capture] of 2D Images
• Monochrome or Color
• Analog Cameras
• Digital CCD Cameras
• Digital CMOS Cameras
• Video Cameras [Analog and Digital]
Image Acquisition
• Methods of acquisition for 3D
• Laser Ranging Systems
• Structured Lighting Methods
• Moiré Fringe Methods
• Shape from Shading Methods
• Passive Stereoscopic Methods
• Active Stereoscopic Methods
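As a pointer to how the stereoscopic methods listed above recover depth (a standard result, not derived in these slides): for a rectified camera pair with baseline B, focal length f and disparity d = x_L − x_R between the two views of the same point,

\[ Z = \frac{f\,B}{d} \]

so larger disparities correspond to nearer points.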
Image Capturing
• A basic image capture system contains a lens and a detector. Film detects far more visual information than is possible with a digital system.
• With digital imaging, the detector is a solid-state image sensor called a charge-coupled device, or CCD for short
• On an area-array CCD, a matrix of hundreds of thousands of microscopic photocells creates pixels by sensing the light intensity of small portions of the image
Image Capture
• To capture images in color, red, green and blue filters are
placed over the photocells.
• Film scanners often use three linear array image sensors
covered with red, green and blue filters.
• Each linear image sensor, containing thousands of photocells, is moved across the film to capture the image one line at a time.
Image Definitions
• Pixel – A sample of the image intensity quantized to an
integer value
• Image – A two dimensional array of pixels
• Pixel
• Row and column indices [i, j] are integer values
• Pixels have intensity values
• 0 to 255 for grayscale images
• RGB vector values for color images
Pixel Array
[Figure: an image as a pixel array, with row index i increasing downward (↓ i) and column index j increasing to the right (→ j); the highlighted pixel is at [4, 4]]
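A short sketch of the [i, j] row/column convention using NumPy arrays (the shapes and values are assumptions for illustration):

import numpy as np

gray = np.zeros((8, 8), dtype=np.uint8)      # grayscale image, intensities 0..255
gray[4, 4] = 255                             # pixel at row i = 4, column j = 4

color = np.zeros((8, 8, 3), dtype=np.uint8)  # color image, one RGB vector per pixel
color[4, 4] = [255, 0, 0]                    # pure red at pixel [4, 4]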
Pixel
• The quality of a scanned image is determined by pixel size, or spatial resolution, and by pixel depth, or brightness resolution
• This relates to the two basic steps in the digital capture process:
• In step one, sampling determines pixel size and brightness value.
• In step two, quantization determines pixel depth.
Image File Formats
• Images are stored in a computer in one of the following
formats, depending on the application of the images stored.
• Tagged Image File Format [.tif]
• Portable Network Graphics [.png]
• Joint Photographic Experts Group [.jpeg, .jpg]
• Bitmap [.bmp]
• Graphics Interchange Format [.gif]
• Raster Image [.ras]
• PostScript [.ps]
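A minimal sketch of writing one image array out in several of these formats with Pillow (the library choice and filenames are assumptions; the lecture does not prescribe a tool):

import numpy as np
from PIL import Image

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)  # synthetic grayscale image
img = Image.fromarray(frame)
img.save("frame.png")   # Portable Network Graphics
img.save("frame.bmp")   # Bitmap
img.save("frame.tif")   # Tagged Image File Format
img.save("frame.jpg")   # JPEG [lossy]

The format is chosen from the file extension; lossless formats such as PNG or TIFF preserve the pixel values exactly, while JPEG trades accuracy for file size.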