Image Formation
Objectives
•Fundamental imaging notions
•Physical basis for image formation
- Light
- Color
- Perception
•Synthetic camera model
•Other models
Image Formation
•In computer graphics, we form images
which are generally two dimensional using
a process analogous to how images are
formed by physical imaging systems
- Cameras
- Microscopes
- Telescopes
- Human visual system
Elements of Image Formation
•Objects
•Viewer
•Light source(s)
•Attributes that govern how light interacts
with the materials in the scene
•Note the independence of the objects, the
viewer, and the light source(s)
[Figure: elements of image formation. A light (energy) source illuminates a surface; a pinhole or lens projects it onto the imaging plane. Pipeline: World → Optics → Sensor → Signal. Example sensors: B&W film (silver density), color film (silver density in three color layers), TV camera (electrical signal).]
Light
•Light is the part of the electromagnetic
spectrum that causes a reaction in our
visual systems
•Generally these are wavelengths in the
range of about 350-750 nm (nanometers)
•Long wavelengths appear as reds and
short wavelengths as blues
Ray Tracing and Geometric Optics
One way to form an image is to follow rays of light from a point source, finding which rays enter the lens of the camera. However, each ray of light may have multiple interactions with objects before being absorbed or going to infinity.
Luminance and Color Images
•Luminance Image
- Monochromatic
- Values are gray levels
- Analogous to working with black-and-white film or television
•Color Image
- Has perceptual attributes of hue, saturation, and lightness.
Three-Color Theory
•Human visual system has two types of
sensors
- Rods: monochromatic, night vision
- Cones
• Color sensitive
• Three types of cones
• Only three values (the tristimulus values) are sent to the brain
•Need only match these three values
- Need only three primary colors
Additive and Subtractive Color
•Additive color
- Form a color by adding amounts of three primaries
• CRTs, projection systems, positive film
- Primaries are Red (R), Green (G), Blue (B)
•Subtractive color
- Form a color by filtering white light with Cyan (C), Magenta (M), and Yellow (Y) filters
• Light-material interactions
• Printing
• Negative film
A Simple model of image formation
• The scene is illuminated by a single source.
• The scene reflects radiation towards the camera.
• The camera senses it via solid state cells (CCD cameras)
Image formation (cont’d)
Simple model: f(x, y) = i(x, y) · r(x, y)
i: illumination, r: reflectance
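A minimal numeric sketch of this model; the arrays and function name are illustrative, not from the slides:

```python
# Illumination-reflectance model: f(x, y) = i(x, y) * r(x, y).
def form_image(i, r):
    """Pointwise product of illumination i and reflectance r."""
    return [[i[y][x] * r[y][x] for x in range(len(i[0]))] for y in range(len(i))]

illum = [[100.0, 100.0], [50.0, 50.0]]   # incident illumination (arbitrary units)
refl = [[0.2, 0.8], [0.2, 0.8]]          # surface reflectance in [0, 1]
f = form_image(illum, refl)              # the observed image
```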
Let’s design a camera
•Put a piece of film in front of an object. Do we get a reasonable image?
- Blurring: we need to be more selective!
Let’s design a camera (cont’d)
“Pinhole” camera model
•The simplest device to form an image of a 3D
scene on a 2D surface.
•Rays of light pass through a "pinhole" and
form an inverted image of the object on the
image plane.
[Figure: a scene point (X, Y, Z) projects through the center of projection onto the image point (x, y); f is the focal length.]
x = fX/Z
y = fY/Z
Pinhole Camera
Use trigonometry to find the projection of a point at (x, y, z):
x_p = -x/(z/d)   y_p = -y/(z/d)   z_p = -d
These are the equations of simple perspective.
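A small sketch of these perspective equations, with the projection plane at z = -d (function and variable names are illustrative):

```python
# Simple perspective: a point (x, y, z) seen through a pinhole at the origin
# projects onto the plane z = -d at x_p = -x/(z/d), y_p = -y/(z/d), z_p = -d.
def project(x, y, z, d):
    assert z != 0, "point must not lie in the plane of the pinhole"
    return (-x / (z / d), -y / (z / d), -d)

xp, yp, zp = project(2.0, 4.0, -8.0, 1.0)   # point 8 units in front of the pinhole
```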
Synthetic Camera Model
[Figure: the center of projection, the image plane, and a projector through a point p to the projection of p.]
What is the effect of aperture size?
Example: varying aperture size
Example: varying aperture size (cont’d)
SOLUTION: refraction
Refraction
•Bending of a wave when it enters a medium where its speed is different.
Lens
refraction
Lens (cont’d)
• Lens improves image
quality, leading to sharper
images.
Properties of “thin” lens (i.e., ideal lens)
• Light rays passing through the center are not deviated.
• Light rays passing through a point far away from the center are
deviated more.
focal point
f
Properties of “thin” lens (i.e., ideal lens)
focal point
f
Properties of “thin” lens
focal point
f
Thin lens
“circle of confusion”
Thin lens equation (cont’d)
• When objects move far away from the camera, the focal plane approaches the image plane.
focal point
f
1/u + 1/v = 1/f
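A quick sketch of using the thin-lens equation to solve for the image distance v, given object distance u and focal length f; the values below are illustrative:

```python
# Thin-lens equation 1/u + 1/v = 1/f, solved for the image distance v.
# u, v, and f must be in the same units (meters here).
def image_distance(u, f):
    assert u > f, "object must be outside the focal length for a real image"
    return 1.0 / (1.0 / f - 1.0 / u)

v = image_distance(u=0.6, f=0.05)   # object 60 cm away, 50 mm lens
```

As the slide notes, letting u grow very large drives v toward f: distant objects focus at the focal plane.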
Depth of Field (DOF)
http://www.cambridgeincolour.com/tutorials/depth-of-field.htm
The range of depths over which the world is approximately sharp (i.e., in focus).
The depth of field (DOF) is the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image captured with a camera.
How can we control depth of field?
•The size of blur circle is proportional to
aperture size.
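A hedged sketch of that proportionality, using similar triangles for a sensor placed at distance v_focus and a point whose sharp image forms at distance v; the model and names are simplifying assumptions, not notation from the slides:

```python
# Blur-circle diameter for an out-of-focus point, by similar triangles:
# a cone of light of aperture diameter A converging at v is cut by the
# sensor at v_focus, leaving a circle of diameter A * |v - v_focus| / v.
def blur_circle(A, v, v_focus):
    return A * abs(v - v_focus) / v

b_small = blur_circle(A=2.0, v=52.0, v_focus=50.0)   # small aperture
b_large = blur_circle(A=8.0, v=52.0, v_focus=50.0)   # 4x larger aperture
```

Quadrupling the aperture quadruples the blur circle, which is why a small aperture gives a large depth of field.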
How can we control depth of field? (cont’d)
Varying aperture size
Large aperture = small DOF Small aperture = large DOF
Another Example
Large aperture = small DOF
Field of View (Zoom)
• The cone of viewing directions of the camera.
• Inversely proportional to focal length.
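This relationship can be sketched as fov = 2·atan(sensor/(2f)); the sensor width and focal lengths below are illustrative:

```python
import math

# Angular field of view from sensor size and focal length:
# fov = 2 * atan(sensor / (2 * f)). Longer focal length -> narrower cone.
def field_of_view_deg(sensor_mm, focal_mm):
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * focal_mm)))

wide = field_of_view_deg(36.0, 24.0)    # wide-angle lens on a 36 mm sensor
tele = field_of_view_deg(36.0, 200.0)   # telephoto lens: much narrower cone
```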
Perspective Distortions
In photography and cinematography, perspective distortion is a warping or transformation of an object and its surrounding area that differs significantly from what the object would look like with a normal focal length, due to the relative scale of nearby and distant features.
Reduce Perspective Distortions by Varying Distance / Focal Length
Less perspective distortion!
Same effect for faces
[Figure: the same face shot with wide-angle, standard, and telephoto lenses.]
Practical significance: we can approximate perspective projection using a simpler model when using a telephoto lens to view a distant object that has a relatively small range of depth.
Affine Camera
Approximating an “affine” camera
Center of projection is at infinity!
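A sketch of the weak-perspective idea behind this approximation: for a distant object with a small depth range, divide by one average depth Z0 instead of each point's own Z (function names and values are illustrative):

```python
# Full perspective divides each point by its own depth Z; the weak-perspective
# ("affine") approximation divides by a single average depth Z0 instead.
def perspective(X, Y, Z, f):
    return (f * X / Z, f * Y / Z)

def weak_perspective(X, Y, Z0, f):
    return (f * X / Z0, f * Y / Z0)

# For a point near the average depth of a distant object, the two almost agree.
p_true = perspective(1.0, 1.0, 100.5, f=50.0)
p_affine = weak_perspective(1.0, 1.0, 100.0, f=50.0)
```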
Real lenses
• All but the simplest cameras contain lenses, which are actually composed of several “lens elements.”
• Each element aims to direct the path of light rays such that they recreate the image as accurately as possible on the digital sensor.
Lens Flaws: Chromatic Aberration
Chromatic Aberration - Example
Fringing, or chromatic aberration, is a color distortion that creates an unwanted colored outline around the edges of objects in a photograph.
Lens Flaws: Radial Distortion
• Straight lines become distorted as we move further away from the center of the image.
• Deviations are most noticeable for rays that pass through the edge of the lens.
[Figure: no distortion vs. pincushion vs. barrel distortion.]
Radial distortion is a type of optical distortion that occurs when light rays bend more at the edges of a lens than they do at the center. This causes straight lines in an image to appear curved.
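A hedged sketch of a simple one-parameter radial distortion model, x_d = x(1 + k·r²); both the model and the constant are illustrative assumptions, not from the slides:

```python
# One-parameter radial distortion: a point at radius r from the image center
# is moved to radius r * (1 + k * r^2). With k > 0 points are pushed outward
# (pincushion-like); with k < 0 they are pulled inward (barrel-like).
# Either way, the displacement grows with distance from the center.
def distort(x, y, k):
    r2 = x * x + y * y
    return (x * (1 + k * r2), y * (1 + k * r2))

near_center = distort(0.1, 0.0, k=0.2)   # barely moved
near_edge = distort(1.0, 0.0, k=0.2)     # displaced much more
```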
Digital cameras
http://electronics.howstuffworks.com/digital-camera.htm
Digital cameras (cont’d)
CCD (Charge-Coupled Device) Cameras
• CCDs move photogenerated charge from pixel to pixel and convert it to voltage at an output node.
• An analog-to-digital converter (ADC) then turns each pixel's value into a digital value.
http://www.dalsa.com/shared/content/pdfs/CCD_vs_CMOS_Litwiller_2005.pdf
https://www.microscopyu.com/digital-imaging/introduction-to-charge-coupled-devices-ccds
CMOS (Complementary Metal Oxide Semiconductor) Cameras
• CMOS sensors convert charge to voltage inside each element.
• Several transistors at each pixel amplify and move the charge using more traditional wires.
• The CMOS signal is digital, so it needs no ADC.
http://www.dalsa.com/shared/content/pdfs/CCD_vs_CMOS_Litwiller_2005.pdf
https://www.techtarget.com/whatis/definition/CMOS-sensor
Image digitization
Image digitization is the process of converting a physical image into a digital file, which allows the image to be stored and edited on a computer.
•Scanning: A scanner captures an image and converts it into a digital file.
•Recording: An analog-to-digital converter captures an image or sound and converts it into a digital format.
•Sampling: The amplitude of an analog waveform is measured and represented as numerical values.
•Quantization: The measured value (e.g., a voltage) at each sampled point is represented by an integer.
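The sampling and quantization steps can be sketched on a hypothetical 1-D signal; 8 bits gives 256 levels (the signal itself is an illustrative assumption):

```python
import math

# A continuous "analog" signal with values in [0, 1].
def signal(t):
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

def sample_and_quantize(n_samples, bits):
    levels = 2 ** bits - 1
    samples = [signal(i / n_samples) for i in range(n_samples)]   # sampling
    return [round(s * levels) for s in samples]                   # quantization

digital = sample_and_quantize(n_samples=8, bits=8)   # 8 integer samples, 0..255
```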
Image digitization (cont’d)
•This is an example of an
original continuous
function or shape. The
shaded area represents
intensity levels or values
that vary smoothly.
•The horizontal line
marked A-B indicates the
cross-section being
analyzed
•Quantization converts the
sampled continuous values
into discrete levels.
•The vertical axis shows
quantization levels (e.g.,
grayscale from black to
white).
•This step shows the final result
of sampling and quantization: a
digital representation suitable for
storage or processing in a
computer.
•The fully digitized version of the
cross-section is displayed. Each
sampled point is now represented
by discrete quantized values.
What is an image?
[Figure: a grayscale image with 8 bits/pixel; intensity values range from 0 (black) to 255 (white).]
What is an image?
•We can think of a (grayscale) image as a function f, a 2D signal:
- f(x, y) gives the intensity at position (x, y)
A digital image is a discrete (sampled, quantized) version of this function.
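A tiny sketch of this view of an image as a function; the 3x3 array is illustrative:

```python
# A grayscale digital image as a sampled, quantized function f(x, y):
# an array of 8-bit values (0 = black, 255 = white).
image = [
    [0,   128, 255],
    [64,  128, 192],
    [0,   0,   255],
]

def f(x, y):
    """Intensity at column x, row y."""
    return image[y][x]
```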
Image Sampling - Example
original image sampled by a factor of 2
sampled by a factor of 6 sampled by a factor of 8
Images have
been resized
for easier
comparison
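Subsampling by a factor k can be sketched as keeping every k-th pixel in each direction; the 4x4 array is illustrative:

```python
# Subsample an image by a factor k: keep every k-th row and every k-th column.
def subsample(img, k):
    return [row[::k] for row in img[::k]]

img = [[r * 4 + c for c in range(4)] for r in range(4)]   # 4x4 test pattern
small = subsample(img, 2)                                 # 2x2 result
```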
Color Images
• Color images are composed of three color channels (red, green, and blue) which combine to create most of the colors we can see.
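A minimal sketch of this three-channel representation; the 1x2 image is illustrative:

```python
# A color image as three channel functions r, g, b stacked per pixel.
r = [[255, 0]]
g = [[0, 255]]
b = [[0, 0]]

def f(x, y):
    """RGB triple at column x, row y."""
    return (r[y][x], g[y][x], b[y][x])
```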
Color images
f(x, y) = [r(x, y), g(x, y), b(x, y)]^T
Color sensing in camera: Prism
•Requires three chips and precise alignment.
CCD(B)
CCD(G)
CCD(R)
Color sensing in camera: Color filter array
Why more green?
• In traditional systems, color filters are applied to a single layer of photodetectors in a tiled mosaic pattern.
Human Luminance Sensitivity Function
Bayer grid
Color sensing in camera: Color filter array
red green blue output
demosaicing
(interpolation)
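A toy sketch of the demosaicing step: estimate a missing channel value by averaging neighbors. This is a one-dimensional stand-in for bilinear interpolation, not a production algorithm, and the mosaic values are illustrative:

```python
# Each Bayer-mosaic pixel records only one channel; missing values are
# interpolated from neighbors. Here: estimate green at a "red" site from
# its two horizontal green neighbors.
def interpolate_green(row, x):
    return (row[x - 1] + row[x + 1]) / 2.0

bayer_row = [10, 200, 14, 220, 18]   # G R G R G pattern (illustrative values)
green_at_red = interpolate_green(bayer_row, 1)
```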
Alternative Color Spaces
[Figure: a processing strategy: transform the Red/Green/Blue channels with T, process in the transformed space, and map back with T⁻¹. Example: skin color shown in RGB vs. normalized rg coordinates.]
Image file formats
Many image formats adhere to the simple model shown below (line by line, no breaks
between lines).
The header contains at least the width and height of the image.
Most headers begin with a signature or “magic number”
(i.e., a short sequence of bytes for identifying the file format)
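A sketch of this header-then-pixels layout using the plain PGM ("P2") format, whose magic number is the string "P2" followed by the width, height, and maximum gray value:

```python
# Parse a plain (ASCII) PGM file: magic number "P2", then width, height,
# max gray value, then width*height pixel values. Comment lines are omitted
# here to keep the sketch minimal.
def parse_pgm(text):
    tokens = text.split()
    assert tokens[0] == "P2", "not a plain PGM file"
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    pixels = [int(t) for t in tokens[4:4 + width * height]]
    return width, height, maxval, pixels

pgm = "P2\n2 2\n255\n0 64\n128 255\n"
w, h, maxval, pix = parse_pgm(pgm)
```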
Common image file formats
• GIF (Graphics Interchange Format)
• PNG (Portable Network Graphics)
• JPEG (Joint Photographic Experts Group)
• TIFF (Tagged Image File Format)
• PGM (Portable Gray Map)
• FITS (Flexible Image Transport System)
IMAGE FILE FORMAT
VECTOR IMAGES
•8-bit image: An 8-bit image has 2^8 = 256 possible values for each pixel, so each pixel can have one of 256 colors.
•16-bit image: A 16-bit image has 65,536 possible colors for each pixel.
•24-bit image: A 24-bit image has 16,777,216 possible colors for each pixel.
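These counts are simply 2 raised to the bit depth:

```python
# Number of distinct values a pixel can take at a given bit depth.
def color_count(bits):
    return 2 ** bits

counts = [color_count(b) for b in (8, 16, 24)]
```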
Unprocessed data from a digital camera's image sensor.
Lempel–Ziv–Welch (LZW) compression is a lossless
algorithm that reduces the size of files without losing any
information.
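A hedged sketch of the LZW encoding step over bytes: start from single-byte codes and grow a dictionary of longer sequences as they recur. This is the minimal textbook variant, not the exact bitstream format used by GIF or TIFF:

```python
# Minimal LZW encoder: the dictionary starts with all 256 single-byte codes,
# and each time a new sequence w+c is seen, it is assigned the next free code.
def lzw_encode(data):
    table = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # keep extending the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = len(table)      # add the new sequence to the table
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"ABABABA")   # repeated patterns compress to fewer codes
```

Because every step is reversible from the same growing dictionary, the decoder can reconstruct the input exactly, which is what makes the scheme lossless.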
Editor's Notes

  • #14: It gets inverted.
  • #29: Depth of field is the range of distance within the subject that is acceptably sharp.