2. What’s Computational Photography?
• Wiki Version
Computational photography refers to digital image capture and processing
techniques that use digital computation instead of optical processes.
• Stanford professor and computational photography pioneer Marc Levoy (he is
also in charge of Google Pixel's camera now):
Computational imaging techniques that enhance or extend the capabilities of
digital photography, in which the output is an ordinary photograph, but one
that could not have been taken by a traditional camera.
• So simply (my definition): Making ordinary photography – extraordinary.
4. Why Computational Photography?
• DSLRs are almost gone; we don't find many in the market, because the
smartphone does everything a traditional camera does – in fact, more.
5. Computational Photography Reach
The recent photograph of a black hole could not have been taken without
computational photography methods. To take such a picture with a standard
telescope, we would have to make the telescope the size of the Earth.
Instead, by combining the data of eight radio telescopes at different
locations on Earth, we obtained the picture of the black hole.
21. Motion Blur
• Blurring == Convolution: the blurred photo is the sharp photo convolved
with the shutter's point spread function (PSF).
• Traditional camera: the shutter is OPEN for the whole exposure, so the PSF
is a box filter.
• The Fourier transform of a box filter is a sinc function, which has zeros:
the spatial frequencies at those zeros are lost in the blurred photo.
[Figure: sharp photo vs. blurred photo and their Fourier transforms]
22. Motion Blur – Flutter Shutter
• Blurring == Convolution, as before.
• Flutter shutter: the shutter is OPENED and CLOSED in a coded pattern during
the exposure – possible with an electronic shutter.
• The coded pattern's Fourier transform has no zeros, so it preserves high
spatial frequencies and the blur can be inverted.
[Figure: sharp photo vs. blurred photo and their Fourier transforms]
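The point above can be checked numerically. The sketch below (my own
illustration, not from the slides; the 52-tap length, the random code, and the
FFT size are assumptions – the actual coded-exposure literature uses a
carefully chosen binary code, but any broadband code behaves similarly)
compares the magnitude spectrum of a box-filter shutter with that of a
pseudo-random on/off flutter code:

```python
import numpy as np

def shutter_spectrum(code, n_fft=256):
    """Normalized magnitude of the shutter pattern's frequency response."""
    spec = np.abs(np.fft.rfft(code, n=n_fft))
    return spec / spec.max()

# Traditional shutter: open for the whole exposure -> box filter.
box = np.ones(52)

# Flutter shutter: a pseudo-random binary on/off code of the same length
# (assumed here; the published 52-chop code differs, but any broadband
# binary code shows the same effect).
rng = np.random.default_rng(0)
flutter = rng.integers(0, 2, size=52).astype(float)

box_mtf = shutter_spectrum(box)
flutter_mtf = shutter_spectrum(flutter)

# The box filter's sinc-shaped spectrum has nulls: those spatial
# frequencies are destroyed and no deconvolution can recover them.
# The flutter code's spectrum stays away from zero, so deblurring
# becomes a well-posed inversion.
print("box     min |FFT|:", box_mtf.min())
print("flutter min |FFT|:", flutter_mtf.min())
```

The box spectrum contains exact zeros, while the coded shutter's minimum stays
strictly positive – which is exactly why the flutter shutter preserves high
spatial frequencies.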
28. Post-Processing
• Take multiple pictures and computationally combine them.
• Examples: HDR Imaging, Panoramic Stitching, Image-Based Lighting
[Debevec et al. ’00], Light Field Capture [Wilburn et al. ’04], Digital
Holography [Greenbaum et al. ’12]
• Others: Multiview Stereo, Depth from Focus/Defocus, Tomography, Structured
Light, Deconvolution Microscopy, etc.
29. HDR Imaging
• HDR+ is a computational photography technique that improves image quality
by capturing a burst of frames, aligning the frames in software, and
merging them together.
• The main purpose of HDR+ is to improve dynamic range, i.e., the ability to
photograph scenes that exhibit a wide range of brightness (like sunsets or
backlit portraits).
• Google named this algorithm HDR+, and it is what they use for Night Sight
in Pixel phones. Thanks to Marc Levoy again.
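A minimal sketch of the burst-merge idea, assuming the frames are already
aligned and using a plain average as the merge step (the real HDR+ pipeline
uses robust tile-based alignment and a more sophisticated merge; the scene,
noise level, and burst size below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scene radiance in [0, 1] (a simple horizontal gradient).
scene = np.tile(np.linspace(0.05, 0.95, 64), (64, 1))

def capture(scene, noise_sigma=0.05):
    """One short-exposure frame: the scene plus additive sensor noise."""
    return np.clip(scene + rng.normal(0.0, noise_sigma, scene.shape), 0, 1)

# A burst of 8 registered frames.
burst = [capture(scene) for _ in range(8)]

single = burst[0]
merged = np.mean(burst, axis=0)  # averaging N frames cuts noise ~ sqrt(N)

err_single = np.sqrt(np.mean((single - scene) ** 2))
err_merged = np.sqrt(np.mean((merged - scene) ** 2))
print(f"RMSE single frame: {err_single:.4f}")
print(f"RMSE merged burst: {err_merged:.4f}")
```

The merged result has markedly lower noise than any single frame, which is
what lets the camera use short (non-clipping) exposures and still recover the
shadows – the essence of the dynamic-range improvement.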
41. Light Transport
The aim of computational light transport is to understand how light interacts
with the different objects present in the scene.
49. Acquisition of Light Transport Matrix
• Brute Force Method
Pradeep Sen and Soheil Darabi. Compressive Dual Photography. Computer Graphics Forum, volume 28, pages 609–618. Wiley Online Library, 2009.
50. Brute-Force Method
• Switch on one projector pixel at a time. Record the resulting camera images
and concatenate them as the columns of the transport matrix.
• Very time-consuming, and it proves inefficient when there are bright
objects in the scene.
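The brute-force acquisition can be sketched in a few lines (my toy simulation,
not from the slides: the transport matrix, resolutions, and noiseless capture
model `c = T p` are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

n_proj, n_cam = 16, 32  # toy flattened projector / camera resolutions
# A sparse, random "ground-truth" transport matrix for the simulation.
T_true = rng.random((n_cam, n_proj)) * (rng.random((n_cam, n_proj)) < 0.2)

def capture(p):
    """Simulated camera image (flattened) for projector pattern p: c = T p."""
    return T_true @ p

# Brute force: turn on one projector pixel at a time; each captured
# image is one column of the transport matrix. This needs n_proj
# separate captures, which is what makes the method so slow.
T = np.column_stack([capture(np.eye(n_proj)[:, j]) for j in range(n_proj)])

print("recovered exactly:", np.allclose(T, T_true))
```

With megapixel projectors this means millions of captures, which is why the
multiplexed and compressive methods on the following slides exist.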
51. Optimal Multiplexing
The idea behind optimal multiplexing:
Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur. "Multiplexing for Optimal
Lighting." IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 29, no. 8, August 2007.
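A sketch of the multiplexing idea, under assumed conditions (additive Gaussian
noise, 63 light sources, and an S-matrix built from a Sylvester Hadamard
matrix – the cited paper analyzes this setting in full generality): instead of
lighting one source per shot, light roughly half of the sources per pattern
and demultiplex afterwards, which averages down the per-measurement noise.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(3)
m = 63                 # number of light sources (Hadamard order 64)
x = rng.random(m)      # unknown per-source contributions
sigma = 0.1            # additive measurement noise (assumed Gaussian)

# S-matrix: drop the first row/column of the Hadamard matrix and map
# -1 -> 1 (source ON), +1 -> 0 (source OFF); each pattern lights
# about half of the sources.
W = (hadamard(m + 1)[1:, 1:] < 0).astype(float)

def measure(weights):
    """Simulated camera readings for a stack of illumination patterns."""
    return weights @ x + rng.normal(0.0, sigma, weights.shape[0])

# Baseline: scan the sources one at a time (identity patterns).
x_single = measure(np.eye(m))

# Multiplexed: capture with the S-matrix patterns, then demultiplex
# by solving the linear system W x = y.
x_mux = np.linalg.solve(W, measure(W))

err_single = np.sqrt(np.mean((x_single - x) ** 2))
err_mux = np.sqrt(np.mean((x_mux - x) ** 2))
print(f"RMSE one-at-a-time: {err_single:.4f}")
print(f"RMSE multiplexed:   {err_mux:.4f}")
```

Both schemes use the same number of captures (63), but the multiplexed
estimate is substantially less noisy – roughly a sqrt(m)/2 gain in this
regime, which is the "optimal lighting" advantage the paper formalizes.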