Camera, Visual, Imaging Technology: A Walk-through
  • The human eye is quite similar to a photographic camera. The cornea and the eye lens are the optical elements responsible for forming an image at the back of the eye. The iris is like the diaphragm of the camera, where the opening (the aperture) controls the amount of light entering the eye. The retina, located at the back of the eye, is like the film: it detects the photons that enter the eye and turns them into electrical impulses that exit to the brain through the optic nerve. Now let us look at each part of the eye in detail.
  • So far in the course we have analysed various imaging systems as a chain, where the imaging chain consists of the different steps in the whole system. The human visual system can also be considered an imaging chain: there are optical elements for image formation, anatomy and physiology responsible for exposure control, detectors that capture photons and turn them into electrical impulses, and processing. This section covers the first three stages: image formation, exposure control, and detection. Later chapters cover the processing and perception that the brain is responsible for.
  • The image is formed at the back of the eye by the cornea and the eye lens. The image formed is real and upside down. As we will see, the cornea is responsible for most of the refraction of the light, while the eye lens provides the fine tuning used to focus between far and near objects.
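The cornea-plus-lens combination can be modelled to first order with the thin-lens equation. The specific diopter figures below are commonly quoted approximations, not values from this presentation:

```latex
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}
\qquad P = \frac{1}{f}\ \text{(diopters, with } f \text{ in metres)}
```

Of the eye's total refractive power of roughly 60 D, the cornea contributes the majority (about two-thirds); accommodation by the crystalline lens supplies the remaining variable power that refocuses between distant objects (large \(d_o\)) and near ones, with the image distance \(d_i\) fixed by the eyeball's depth.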

Camera, Visual, Imaging Technology: A Walk-through (Presentation Transcript)

  • Camera / Visual / Imaging Technology: A Walk-through - Human Visual System - Camera Technology and Features - Future of Camera Systems and Technology. Sherin Sasidharan: in.linkedin.com/in/sherinsasidharan. About me: Multimedia System Software Engineer, with specialisation in and passion for camera/imaging! :) Contact: sherin.s@gmail.com
  • Camera / Imaging / Visual • Primarily for the human eye (visible spectrum) • Machines (visible + invisible spectrum) ART SCIENCE TECHNOLOGY
  • Agenda (1/2)
    » Image formation: features; the Human Visual System (HVS model)
    » Image capture: analog and digital (conversion & storage)
    » Artifacts / issues / adjustments with digital capture: comparison with the human eye (the human need for photography); capture not meant for the human eye
    » Basic items in digital image capture (just the capture aspect, Part I)
      • Camera front end:
        - Image sensors: CMOS/CCD (2D conventional) - dynamic range, format, types, etc.; 3D sensors; D/A artifacts introduced; resolution - benefits and disadvantages
        - Lens: need for a lens; artifacts introduced
      • Specification of the captured image: exposure, focus, white balance (colour aspect)
      • Image pipeline: raw to YUV or JPEG
      • Typical digital imaging pipeline (interfaces, algorithms): raw, CFA, lens,
  • Agenda (2/2)
    » Camera - intelligent / advanced processing aspect (Part II)
    » Fundamental intelligence (must have): intelligent 3A (auto-exposure, auto-focus, auto-white-balance) - camera HW is not the human eye
    » Advanced image processing: computer vision
      • Note on computer vision - for humans and for machines
      • Video / image stabilization
      • Red-eye reduction, effects
      • Panorama / 360-view stitching
      • High dynamic range imaging / automatic local brightness and contrast control
      • Multi-focus capture (Pelican)
      • 2D-to-3D conversion
      • Multi-view capture (3D)
      • Face / eye / smile detection
      • Object / shape / scene detection and recognition
      • Scene and object comparison
      • Face recognition
      • Gesture recognition
      • Machine learning leading to machine / computer vision
    » Computer vision, OpenCV and the future of camera technology
  • Rods and cones: about 120 million receptors in each eye. - Cones: red, green and blue cones; colour / day vision. - Rods: low light; night vision.
  • Rod sensitivity: peak at 498 nm. Cone sensitivity: red or "L" cones peak at 564 nm; green or "M" cones peak at 533 nm; blue or "S" cones peak at 437 nm.
  • Colour spectrum
  • Colour: Hue, Saturation and Brightness Hue Saturation Brightness
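The hue/saturation/brightness decomposition on this slide can be demonstrated with Python's standard-library `colorsys` module (HSV, with all channels normalised to 0..1) - a minimal illustration, not part of the original deck:

```python
import colorsys

# Pure red: hue 0, fully saturated, full value (brightness).
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0)

# Pure green: same saturation/value, hue shifted by a third (120 degrees).
print(colorsys.rgb_to_hsv(0.0, 1.0, 0.0))  # hue = 1/3
```

Desaturating a colour moves it toward grey while its hue stays put, which is exactly the separation the slide's Hue / Saturation / Brightness axes describe.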
  • Image formation: the curved surfaces of the eye focus the image onto the back surface of the eye; the rest is up to the brain to make sense of the information received.
  • Image formation - HVS: image formation model; brightness adaptation; brightness discrimination; angle of view.
  • HVS - Sensitivity and dynamic range: variable range for different scenes; the brain helps in creating the final impression; much larger than a digital camera's. Resolution detail and colour: the human eye is capable of resolving up to about 53 Mpix, but the eye's scan of a scene is not one shot - it keeps scanning different regions, and the brain forms the image of the total picture.
  • Camera pipeline - sensor module: Bayer filter (after Bryce Bayer), optical filter. Issues and need for improvement: image noise (photon, thermal, electrical, silicon defects); image distortion (lens property); image sharpness (focus aspect); image brightness / lightness (exposure aspect); image colour mismatch (white balance and colour correction aspect).
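To make the Bayer mosaic concrete, here is a toy sketch that collapses each 2x2 RGGB quad of raw samples into one RGB pixel, averaging the two green samples. Real ISPs demosaic at full resolution with bilinear or edge-aware interpolation; this half-resolution "binning" version is only an illustration, and the function name and data are mine:

```python
def demosaic_rggb(bayer):
    """bayer: 2D list (H x W, both even) of raw samples in RGGB layout.
    Returns an (H/2 x W/2) list of (R, G, B) tuples."""
    h, w = len(bayer), len(bayer[0])
    rgb = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = bayer[y][x]                              # top-left: red
            g = (bayer[y][x + 1] + bayer[y + 1][x]) / 2.0  # two greens
            b = bayer[y + 1][x + 1]                      # bottom-right: blue
            row.append((r, g, b))
        rgb.append(row)
    return rgb

mosaic = [
    [10, 20, 10, 22],   # R G R G
    [30, 40, 32, 44],   # G B G B
]
print(demosaic_rggb(mosaic))  # [[(10, 25.0, 40), (10, 27.0, 44)]]
```

Note the 2:1 green-to-red/blue sampling ratio in the mosaic itself - the hardware echo of the eye's higher sensitivity to green.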
  • Camera pipeline [block diagram: sensor → algorithms → resize → display / JPEG]
  • Bayer to RGB: CFA interpolation (Bayer demosaicing). The sensor is more sensitive to green, and green dominates the content detail. Luma and chroma: the luma component is the more important one and the one the eye is most sensitive to; chroma is not as important as luma. Thus YUV422 and YUV420 can convey nearly the same perceived information as YUV444 with less data. RGB → YUV.
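The RGB → YUV step the slide mentions is, for full-range BT.601 Y'CbCr, a fixed matrix; the coefficients below are the standard ones, and the small helper is just an illustrative sketch:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> Y'CbCr, all channels in 0..255."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# White carries all its information in luma; chroma sits at the midpoint.
print(rgb_to_ycbcr(255, 255, 255))  # approximately (255.0, 128.0, 128.0)
```

Because Cb/Cr carry so little perceptually critical detail, 4:2:2 and 4:2:0 subsampling can halve or quarter the chroma samples with little visible loss - the point the slide makes about YUV422/YUV420 versus YUV444.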
  • NOISE - image noise from the sensor: amplifier noise; salt-and-pepper noise (ADC, pixel silicon defects); shot noise (quantum fluctuations); quantization noise. Effect of sensor size and manufacturing: cheaper vs. costlier, pixel size, pixel-to-pixel gap, etc. How much light the sensor is able to collect: FSI vs. BSI sensors. Filters of different capabilities are needed to remove these.
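Salt-and-pepper impulses are the classic case where a median filter beats averaging: the outlier is discarded rather than smeared into its neighbours. A minimal 1D sketch using only the standard library (signal values and window size are mine):

```python
from statistics import median

def median_filter_1d(signal, k=3):
    """Sliding-window median of width k; edge samples are left unchanged."""
    half = k // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = median(signal[i - half:i + half + 1])
    return out

# A gentle ramp corrupted by one "salt" (255) and one "pepper" (0) impulse:
noisy = [10, 11, 255, 13, 14, 0, 16]
print(median_filter_1d(noisy))  # [10, 11, 13, 14, 13, 14, 16]
```

Both impulses vanish, while a 3-tap mean filter would have left values like 92 and 10 behind. Image pipelines apply the same idea over 2D neighbourhoods.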
  • Exposure / focus / white balance: the camera needs to adjust these parameters to simulate the human eye and brain. Exposure control goes to the sensor after the evaluation is made by software: exposure time / shutter speed; analog gain / ISO speed; aperture size (mobile-phone cameras don't have a variable aperture). Focus control goes to the lens after the evaluation is made by software: the lens position is adjusted to achieve best focus. White balance: compensates for different lighting conditions.
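The "software evaluates, then drives the sensor" loop for exposure can be sketched as a toy feedback controller. The 18%-grey target and the multiplicative update rule are illustrative assumptions, not the deck's algorithm:

```python
def auto_exposure(scene_luma, exposure=1.0, target=0.18, iters=8):
    """Crude AE loop: scale the exposure so the mean captured luma
    (clipped at sensor full-scale 1.0) approaches an 18%-grey target."""
    for _ in range(iters):
        captured = [min(v * exposure, 1.0) for v in scene_luma]
        mean = sum(captured) / len(captured)
        exposure *= target / mean   # push mean luma toward the target
    return exposure

scene = [0.02, 0.05, 0.10, 0.30]    # a fairly dark scene (linear luma)
exp = auto_exposure(scene)
captured = [min(v * exp, 1.0) for v in scene]
mean = sum(captured) / len(captured)
print(round(mean, 3))  # close to 0.18
```

A real 3A block splits the resulting `exposure` factor across shutter time, analog gain (ISO) and, where available, aperture, under flicker and motion-blur constraints.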
  • Image artifact – from CCD Sensor
  • Image artifact - from CMOS sensor: rolling shutter - skew. - http://dvxuser.com/jason/CMOS-CCD/ - http://web.tiscali.it/rudiversal/images/Rolling%20Shutter%20Effekt%20HC1.JPG
  • Spatial image aliasing/moire noise
  • Lens Distortion
  • Lens Shading
  • Chromatic Aberration
  • Lens sharpness: in the end it's the lens - multi-element lenses.
  • JPEG compression artifacts: quantization effect (quality factor). Video compression also has similar artifacts.
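Quantization is the lossy step behind these artifacts: each DCT coefficient is divided by a quantization step, rounded, and multiplied back on decode, so larger steps (a lower quality factor) mean larger reconstruction error. A minimal round-trip sketch with made-up coefficient values:

```python
def quantize_roundtrip(coeff, step):
    """Encode/decode one DCT coefficient with quantization step `step`."""
    return round(coeff / step) * step

coeffs = [312.4, -57.1, 14.8, 3.2, 0.9]   # illustrative DCT coefficients
for step in (4, 16, 64):
    recon = [quantize_roundtrip(c, step) for c in coeffs]
    err = max(abs(c - r) for c, r in zip(coeffs, recon))
    print(step, recon, round(err, 2))
```

At the largest step the small high-frequency coefficients collapse to zero entirely, which is what produces JPEG's blocking and ringing around edges.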
  • Next-level advanced enhancements / algorithms: High Dynamic Range Imaging (HDR).
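One simple flavour of HDR is exposure fusion: blend bracketed shots per pixel, weighting each sample by its "well-exposedness" (distance from black or clipped white). The sketch below handles a single pixel with a weight function of my own choosing; real pipelines also align the frames and work in linear radiance:

```python
def fuse(samples):
    """samples: the same pixel's normalized (0..1) value in each exposure.
    Mid-tone samples get high weight; clipped ones get almost none."""
    weights = [min(v, 1.0 - v) + 1e-6 for v in samples]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, samples)) / total

# Short exposure keeps highlight detail (0.7); long exposure clips (1.0):
print(round(fuse([0.7, 1.0]), 3))  # stays near the well-exposed 0.7
```

The clipped sample contributes essentially nothing, so the fused result keeps the detail that the single long exposure had blown out.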
  • Video / image stabilization / anti-shake (still and video): optical - prevention (pre-capture); gyro - prevention (pre-capture); digital - correction (post-capture): video correction is easy, image correction is complex. Morpho Movie Solid demo: http://www.youtube.com/watch?v=IvKZsFl-fg0&feature=player_embedded
  • FUTURE: machine vision / computer vision - intelligent processing and understanding of the captured image. Using intelligent algorithms to detect, analyse and recognise the contents of the image frame. It is a subjective classification with an accuracy measure. Accuracy can be improved by having the machine/computer learn from multiple scenarios of the same case: this is machine learning. What used to exist only in PC and desktop implementations, with researchers, is now coming to hand-held devices.
  • Face Detection & Recognition
  • Object/ scene / gesture detection/ recognition
  • Innovative image-capture use cases: Scalado Rewind: http://www.scalado.com/display/en/Rewind; Scalado Remove: http://www.scalado.com/display/en/Remove; Lytro camera (multiple-focus capture): https://www.lytro.com/camera; Photosphere (Google 360° panorama): http://maps.google.com/help/maps/streetview/contribute/#all
  • Robotic vision and 3D cameras / advanced vision: 3D cameras - two-camera based and one-camera based; depth-sensing cameras. 123D Catch (2D-to-3D scan): https://www.youtube.com/watch?v=sGNesS8vo4M. Future: growth of augmented-reality applications on handheld devices. AR (Qualcomm SDK apps): https://www.youtube.com/watch?v=_ic7YwTVqu8&feature=endscreen&NR=1
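For the two-camera depth-sensing case, depth follows directly from triangulation on a rectified stereo pair: Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. The rig parameters below are hypothetical:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified two-camera rig: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 6 cm baseline, 12 px disparity.
print(stereo_depth(800, 0.06, 12))  # 4.0 metres
```

The inverse relationship (depth ∝ 1/disparity) is why stereo depth resolution degrades rapidly with distance, and why wider baselines are used for longer ranges.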
  • Aperture and depth of field (DOF) (out of order in this deck :))
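The aperture/DOF link on this slide can be quantified with the hyperfocal distance H = f²/(N·c) + f, where N is the f-number and c the circle of confusion; the 0.03 mm value of c and the 50 mm lens are conventional full-frame assumptions, not figures from the deck:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f, everything in mm.
    Focusing at H puts everything from H/2 to infinity in acceptable focus."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A 50 mm lens: stopping down from f/2 to f/16 pulls the hyperfocal
# distance in dramatically, extending the depth of field.
for n in (2.0, 16.0):
    print(f"f/{n:g}:", round(hyperfocal_mm(50.0, n) / 1000.0, 2), "m")
```

This is also why the fixed, small apertures of phone cameras (noted earlier in the deck) give them such deep depth of field compared with large-aperture interchangeable lenses.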
  • Thank You!