FUNDAMENTALS OF
DIGITAL IMAGE PROCESSING
Introduction to Digital Image Processing
• An image may be represented as a 2-dimensional function, f(x,y),
  where x and y are spatial coordinates.
• The amplitude of f at any pair of coordinates (x,y) is called the
  intensity or gray level of the image at that point.
• When x, y, and the amplitude of f are all finite, discrete
  quantities, we call it a Digital Image.

  [Fig: Representation of a Digital Image — a grid of samples f(x,y),
  with x running horizontally from the origin and y vertically]

• The field of Digital Image Processing (DIP) refers to processing
  digital images by means of a digital computer.
• A digital image is composed of a finite number of elements, each of
  which has a particular location and value.
• These elements are referred to as picture elements, image elements,
  pels, or pixels.
• Pixel is the term most widely used to denote the elements of a
  digital image.
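The idea above can be sketched in code: a digital image is just a finite 2-D grid of discrete intensity values, sampled by the function f(x,y). The values below are illustrative, not taken from the slides.

```python
# A tiny "digital image": a finite grid of discrete intensity values.
# Values are illustrative, not from the slides.
image = [
    [ 12,  40,  80],
    [ 90, 255,  17],
    [  0, 128,  64],
]

def f(x, y):
    """Sample the image function f(x, y): column x, row y."""
    return image[y][x]

print(f(1, 0))  # intensity (gray level) at x=1, y=0 -> 40
print(len(image), "rows x", len(image[0]), "columns")
```

Because x, y, and the stored values are all finite and discrete, this grid satisfies the definition of a digital image.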
Types of Image Processing
1. Low-Level Processing - involves primitive operations such as noise
   reduction, contrast enhancement, and image sharpening.
   Both its inputs and outputs are images.
2. Mid-Level Processing - involves tasks such as segmentation and
   classification (recognition) of individual objects.
   Its inputs generally are images, but its outputs are attributes
   extracted from those images.
   Example: edges, contours, identity of an object.
(contd...)
3. High-Level Processing - involves “making sense” of an ensemble of
   recognized objects, performing the cognitive functions normally
   associated with vision.
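As a minimal sketch of one low-level operation named above, the snippet below performs linear contrast stretching: it maps an image's intensity range onto the full [0, 255] scale. The input values are illustrative; input and output are both images, as the definition of low-level processing requires.

```python
# Low-level processing example: linear contrast stretching.
# Maps the image's intensity range [old_min, old_max] to [0, 255].

def contrast_stretch(image, new_min=0, new_max=255):
    """Linearly rescale intensities to [new_min, new_max]."""
    flat = [v for row in image for v in row]
    old_min, old_max = min(flat), max(flat)
    scale = (new_max - new_min) / (old_max - old_min)
    return [[round(new_min + (v - old_min) * scale) for v in row]
            for row in image]

dim = [[50, 60], [70, 100]]       # low-contrast input image
print(contrast_stretch(dim))      # -> [[0, 51], [102, 255]]
```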
Fundamental steps in DIP

  [Fig: Fundamental steps in digital image processing. Starting from
  the problem domain, the processing chain is:
    1. Image acquisition
    2. Image enhancement
    3. Image restoration
    4. Color image processing
    5. Wavelets and multiresolution processing
    6. Compression
    7. Morphological processing
    8. Segmentation
    9. Representation and description
   10. Object recognition
  All steps communicate with a central Knowledge Base. The outputs of
  the first group of processes generally are images; the outputs of
  the later processes (segmentation onward) generally are image
  attributes.]
Components of an Image Processing System

  [Fig: Components of a general-purpose image processing system.
  Sensing the problem domain are the image sensors, supported by
  specialized image processing hardware. These feed a computer running
  image processing software, connected to mass storage, an image
  display, hardcopy devices, and a network.]
Image Formation in the Eye

  [Fig: An object of height d1 at distance d from the optical centre c
  of the lens forms an image of height h on the retina]

By similar triangles,
                 d1 / d = h / λ
where, c  = optical centre of the lens
       d  = distance between the object and the optical centre
       d1 = height of the object
       h  = height of the image formed in the eye
       λ  = focal length; about 17 mm when we are focusing on an
            object more than 3 m away
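The relation above can be worked through numerically: solving d1/d = h/λ for h gives the retinal image height. The 17 mm focal length is from the slide; the object height and viewing distance below are illustrative.

```python
# Worked example of the similar-triangles relation d1/d = h/lambda.
# focal_len_m = 0.017 (17 mm) per the slide; other values illustrative.

def retinal_image_height(object_height_m, distance_m, focal_len_m=0.017):
    """Solve d1/d = h/lambda for h (all lengths in metres)."""
    return object_height_m / distance_m * focal_len_m

# A 1.7 m tall person viewed from 10 m away:
h = retinal_image_height(1.7, 10.0)
print(f"{h * 1000:.2f} mm")  # -> 2.89 mm
```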
Mathematical representation of a
         Digital Image
The function f(x,y) may be characterized by 2 components:
1. the amount of source illumination incident on the scene being
   viewed, i(x,y), i.e., the illumination component;
2. the amount of illumination reflected by the objects in the scene,
   r(x,y), i.e., the reflectance component.
Therefore,
                   f(x,y) = i(x,y) * r(x,y)
where, 0 < i(x,y) < ∞
       0 (total absorption) < r(x,y) < 1 (total reflectance)
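The illumination-reflectance model above can be sketched directly: f is the product of i and r, with the stated bounds enforced. The sample values are illustrative.

```python
# Sketch of the illumination-reflectance model:
# f(x,y) = i(x,y) * r(x,y), with 0 < i < inf and 0 < r < 1.

def image_value(i, r):
    """Form f = i * r, enforcing the bounds from the model."""
    if not i > 0:
        raise ValueError("illumination i must be positive")
    if not 0 < r < 1:
        raise ValueError("reflectance r must lie strictly in (0, 1)")
    return i * r

# e.g. bright illumination falling on a fairly dark surface:
print(image_value(i=1000.0, r=0.1))  # -> 100.0
```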
• Consider the intensity of a monochrome image at any coordinate
  (x0, y0); the gray level (l) of the image at that point is
                 l = f(x0, y0)
  ‘l’ lies in the range Lmin ≤ l ≤ Lmax,
  where, Lmin = imin * rmin
         Lmax = imax * rmax
• The interval [Lmin, Lmax] is called the gray scale.
• Common practice is to shift this interval numerically to the
  interval [0, L-1],
  where, l = 0 is considered black
         l = L-1 is considered white
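The numerical shift described above can be sketched as a linear mapping from [Lmin, Lmax] onto [0, L-1]. The choice L = 256 (an 8-bit gray scale) and the sample bounds are assumptions for illustration, not from the slides.

```python
# Shift gray levels from [Lmin, Lmax] to [0, L-1].
# L = 256 (8-bit) is assumed for illustration.

def to_gray_scale(l, l_min, l_max, L=256):
    """Map l in [l_min, l_max] linearly onto integers in [0, L-1]."""
    return round((l - l_min) / (l_max - l_min) * (L - 1))

l_min, l_max = 0.2, 80.0            # illustrative Lmin, Lmax
print(to_gray_scale(l_min, l_min, l_max))  # -> 0   (black)
print(to_gray_scale(l_max, l_min, l_max))  # -> 255 (white)
```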
