Depth Edge Camera
Ramesh Raskar, Karhan Tan, Rogerio Feris,
Jingyi Yu, Matthew Turk
Depth Discontinuities
Internal and external
Shape boundaries, occluding contours, silhouettes
Depth Edges: Our Method vs. Canny
Participatory Urban Sensing
Deborah Estrin's talk yesterday
Static/semi-dynamic/dynamic data
A. C...
Crowdsourcing
http://www.wired.com/wired/archive/14.06/crowds.html
Object Recognition
Fakes
Template matching...
Community Photo Collections: U of Washington/Microsoft Photosynth
GigaPixel Images
Microsoft HDView
http://www.xrez.com/owens_giga.html
http://www.gigapxl.org/
Optics
• It is all about rays, not pixels
• Study using lightfields
Assignment 2
• Andrew Adams' Virtual Optical Bench
Light Field Inside a Camera
Lenslet-based Light Field camera
[Adelson and Wang 1992; Ng et al. 2005]
Light Field Inside a Camera
Stanford Plenoptic Camera [Ng et al. 2005]
4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels
Digital Refocusing
[Ng et al. 2005]
Can we achieve this with a Mask?
Mask based Light Field Camera
Mask
Sensor
[Veeraraghavan, Raskar, Agrawal, Tumblin, Mohan, Siggraph 2007 ]
How to Capture a 4D Light Field with a 2D Sensor?
What should be the pattern of the mask?
Radio Frequency Heterodyning
Baseband Audio Signal
Receiver: Demodulation
High Freq Carrier 100...
Optical Heterodyning
Photographic Signal (Light Field)
Incident Carrier
Modulated Signal
Reference Carrier...
Captured 2D Photo: Encoding due to Mask
[Figure: cosine mask tile of period 1/f0; the mask's modulation function replicates the light-field spectrum at (fx0, fθ0) in the (fx, fθ) plane, so the 2D sensor slice captures the entire light field. 2D FFT shown.]
[Figure: traditional camera photo vs. heterodyne camera photo, each with the magnitude of its 2D FFT.]
Computing the 4D Light Field
2D Sensor Photo (1800×1800) → 2D FFT → 2D Fourier Transform (1800×1800)
Rearrange 2D tiles into 4D plan...
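A rough numpy sketch of that decode step (tile count and image size are illustrative, not the exact parameters of the SIGGRAPH 2007 camera): take the 2D FFT, cut out the spectral replicas the cosine mask created, stack them into a 4D spectrum, and inverse-transform.

```python
import numpy as np

def lightfield_from_heterodyne_photo(photo, n_tiles=9):
    # 2D FFT of the captured photo; the mask has replicated the light-field
    # spectrum into an n_tiles x n_tiles grid of tiles.
    F = np.fft.fftshift(np.fft.fft2(photo))
    h, w = F.shape
    th, tw = h // n_tiles, w // n_tiles
    spectrum_4d = np.empty((n_tiles, n_tiles, th, tw), dtype=complex)
    for i in range(n_tiles):
        for j in range(n_tiles):
            spectrum_4d[i, j] = F[i*th:(i+1)*th, j*tw:(j+1)*tw]
    # The rearranged tiles are the 4D light-field spectrum; an inverse 4D FFT
    # returns the light field L(theta_x, theta_y, x, y).
    return np.fft.ifftn(np.fft.ifftshift(spectrum_4d)).real

views = lightfield_from_heterodyne_photo(np.random.rand(1800, 1800))
print(views.shape)   # (9, 9, 200, 200): 9x9 angular samples of 200x200 views
```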
Agile Spectrum Imaging
With Ankit Mohan, Jack Tumblin [Eurographics 2008]
Lens Glare Reduction
[Raskar, Agrawal, Wilson, Veeraraghavan SIGGRAPH 2008]
Glare/Flare due to camera lenses reduces contr...
Glare Reduction/Enhancement using 4D Ray Sampling
[Figure panels: Captured, Glare Reduced, G...]
Glare = low-frequency noise in 2D
• But it is high-frequency noise in 4D
• Remove via simple outlier rejection
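As a sketch of that rejection step, assuming the camera has already produced a 4D array of ray samples per output pixel: a per-pixel median over the angular samples is the simplest robust statistic (the paper's exact outlier rule may differ).

```python
import numpy as np

def remove_glare(lightfield):
    # lightfield: 4D array (u, v, x, y) of angular ray samples per pixel.
    # Glare contaminates only a few (u, v) samples at each (x, y), i.e. it is
    # high-frequency in 4D, so a robust per-pixel statistic rejects it.
    return np.median(lightfield, axis=(0, 1))
```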
Long-range
synthetic aperture photography
Levoy et al., SIGG2005
Synthetic aperture videography
Focus Adjustment: Sum of Bundles
Synthetic aperture photography
Smaller aperture → less blur, smaller circle of confusion
Synthetic aperture photography
Merge MANY cameras to act as ONE BIG LENS
Small items are so blurry
they seem to disappear.
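A minimal shift-and-add sketch of the synthetic aperture idea; the per-camera `offsets` and the `focal_shift` parameter are hypothetical, and integer np.roll stands in for proper subpixel warping.

```python
import numpy as np

def synthetic_aperture(images, offsets, focal_shift):
    # Merge many camera views into ONE BIG LENS: shift each view according to
    # its baseline and the chosen focal plane, then average.
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, offsets):
        sx = int(round(dx * focal_shift))
        sy = int(round(dy * focal_shift))
        acc += np.roll(img, (sy, sx), axis=(0, 1))
    return acc / len(images)
```

Objects on the chosen focal plane align and reinforce; everything else, including occluders smaller than the synthetic aperture, averages away, which is why the small items above seem to disappear.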
Light field photography using a
handheld plenoptic camera
Ren Ng, Marc Levoy, Mathieu Brédif,
Gene Duval, Mark Horowitz an...
Prototype camera
4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels
Contax medium format camera Kodak 16-megapixel sen...
Example of digital refocusing
Extending the depth of field
conventional photograph, main lens at f/22
conventional photograph, main lens at f/4
ligh...
Ramesh Raskar, CompPhoto Class Northeastern, Fall 2005
Imaging in Sciences:
Computed Tomography
© 2004 Marc Levoy
Borehole tomography
• receivers measure end-to-end travel time
• reconstruct to find velocities in inter...
© 2004 Marc Levoy
Deconvolution microscopy
• competitive with confocal imaging, and much faster
• assumes emission or atte...
Ramesh Raskar, CompPhoto Class Northeastern, Fall 2005
Coded-Aperture Imaging
• Lens-free imaging!
Mask in a Camera
Mask
Aperture
Canon EF 100 mm 1:2.8 Lens,
Canon SLR Rebel XT camera
Digital Refocusing
Captured Blurred Image
Digital Refocusing
Refocused Image on Person
Digital Refocusing
Larval Trematode Worm
[Diagram: where should the mask go? Mask at the aperture vs. mask near the sensor]
4D Light Field from 2D Photo:
Heterodyne Light Field Camera
...
Coding and Modulation in Camera Using Masks
[Diagram: mask placement options relative to the sensor]
Slides by Todor Georgiev
Conventional Lens: Limited Depth of Field
Smaller Aperture
Open Aperture
Slides b...
Wavefront Coding using Cubic Phase Plate
"Wavefront Coding: jointly optimized opti...
Depth Invariant Blur
Conventional System vs. Wavefront Coded System
Slides by Shree Nayar
Typical PSF changes slowly; designed PSF changes fast
Decoding depth via defocus blur
• Design PSF that changes quickly thr...
Rotational PSF
R. Piestun, Y. Schechner, J. Shamir, “Propagation-Invariant Wave Fields with Finite Energy,” ...
Can we deal with the particle-wave duality of light
with modern lightfield theory?
Young's Double Slit Expt
first null (OP...
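For context (the slide text is truncated): the first null appears where the optical path difference between the two slits is half a wavelength, a standard result.

```latex
\mathrm{OPD} = d \sin\theta = \frac{\lambda}{2}
\qquad\Longrightarrow\qquad
\theta_{\text{first null}} \approx \frac{\lambda}{2d} \quad (\text{small angles})
```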
Light Fields
• Radiance per ray
• Ray parameterization:
  – Position: x
  – Direction: θ
[Diagram: reference plane; position and direction axes]
Light Fields for Wave Optics Effects
Wigner Distribution Function
Light Field
LF < WDF...
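The Wigner Distribution Function being compared here has, in its standard 1D form (definition added for context; the notation is not from the slides):

```latex
W(x, u) = \int E\!\left(x + \tfrac{x'}{2}\right) E^{*}\!\left(x - \tfrac{x'}{2}\right) e^{-i 2\pi u x'} \, dx'
```

It is ray-like, indexed by position x and spatial frequency (direction) u, but it can go negative, which lets it encode diffraction and interference that a nonnegative radiance-based light field cannot; hence LF < WDF.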
Limitations of Traditional Lightfields
Wigner Distribution Function
Traditional Light Field
Example: New Representations Augmented Lightfields
Wigner Distribution Function
Traditional Light Field
(ii) Augmented Light Field with LF Transformer
WDF
Light Field
Light Field
Augmented LF
Interaction...
Virtual light projector with real-valued (possibly negative) radiance along a ray
real projector
real projector
first n...
(ii) ALF with LF Transformer
“Origami Lens”: Thin Folded Optics (2007)
“Ultrathin Cameras Using Annular Folded Optics, “
E. J. Tremblay, R. A. Stack, R...
Gradient Index (GRIN) Optics
Conventional Convex Lens vs. Gradient Index 'Lens'
Continuous change of the refractive index
withi...
Photonic Crystals
• ‘Routers’ for photons instead of electrons
• Photonic Crystal
– Nanostructure material with ordered ar...
Schlieren Photography
• Images small index-of-refraction gradients in a gas
• Invisible to the human eye (subtle mirage effe...
http://www.mne.psu.edu/psgdl/FSSPhotoalbum/index1.htm
Varying Polarization
Yoav Y. Schechner, Nir Karpel 2005
Best polariz...
Varying Polarization
• Schechner, Narasimhan, Nayar
• Instant dehazing...
Photon-x:
Polarization Bayer Mosaic for
Surface normals
Novel Sensors
• Gradient sensing
• HDR Camera, Log sensing
• Line-scan...
MIT Media Lab
• Camera =
– 0D sensors
• Motion detector
• Bar code scanner
• Time-of-flight range detector (Darpa Grand Ch...
Single Pixel Camera
Compressed Imaging
Y = ∫ X : the aggregate brightness Y is the integral of the mask-weighted scene X
"A New Compressive Imaging Camera Architecture"
D. Takhar et al., Pr...
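A toy numpy version of that measurement model (sizes and 0/1 mirror patterns are illustrative; practical recovery uses sparse reconstruction such as l1 minimization, for which the least-squares line below is only a placeholder):

```python
import numpy as np

n = 64 * 64                        # scene pixels (flattened x)
m = 1600                           # measurements, ~40% of n
x = np.random.rand(n)              # unknown scene
Phi = np.random.choice([0.0, 1.0], size=(m, n))   # one DMD pattern per row
y = Phi @ x                        # photodiode readings: aggregate brightness

# Placeholder recovery; compressive sensing would instead solve
# min ||Psi x||_1  subject to  Phi x = y  for a sparsifying basis Psi.
x_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```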
Single Pixel Camera
Image on the
DMD
Example
Original Compressed Imaging
4096 Pixels
1600 Measurements
(40%)
65536 Pixels
6600 Measurements
(10%)
Example
Original Compressed Imaging
4096 Pixels
800 Measurements
(20%)
4096 Pixels
1600 Measurements
(40%)
Line Scan Camera: PhotoFinish 2000 Hz
© 2004 Marc Levoy
The CityBlock Project
Precursor to Google Streetview Maps
Figure 2 results
Input Image
Problem: Motion Deblurring
Image Deblurred by solving a linear system. No post-processing
Blurred Taxi
Application: Aerial Imaging
Time = 0, Time = T
Long Exposure:
The moving camera creates smear
Time
Shutter Open
Shutter Clos...
Application: Electronic Toll Booths
Goal: Automatic number plate recognition from a sharp image
Monitoring Camera for d...
Fluttered Shutter Camera
Raskar, Agrawal, Tumblin, SIGGRAPH 2006
Ferroelectric shutter in front of the lens is turned opaque...
[Comparison panels: Short Exposure, Traditional, MURA, Coded]
Coded Exposure Photography:
Assisting Motion Deblurring using Fluttered Shutter
Rask...
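A 1D numpy sketch of why the fluttered shutter helps (the flutter sequence below is made up, not the paper's actual code): a binary code with a flat spectrum keeps the blur matrix invertible, so deblurring is just a linear solve, matching the "no post-processing" claim above.

```python
import numpy as np

def deblur_coded_exposure(blurred, code):
    # 1D sketch along the motion direction: the fluttered shutter's broadband
    # open/close code keeps this Toeplitz smear matrix well conditioned, so
    # the sharp profile falls out of a plain linear solve.
    m = blurred.size
    n = m - code.size + 1                 # length of the sharp signal
    A = np.zeros((m, n))
    for j, c in enumerate(code):
        for i in range(n):
            A[i + j, i] = c
    sharp, *_ = np.linalg.lstsq(A, blurred, rcond=None)
    return sharp

code = np.array([1, 0, 1, 1, 1, 0, 0, 1, 0, 1], dtype=float)  # made-up flutter
sharp = np.random.rand(50)
blurred = np.convolve(sharp, code)        # simulated coded motion smear
print(np.allclose(deblur_coded_exposure(blurred, code), sharp))  # True
```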
Compound Lens of Dragonfly
TOMBO: Thin Camera (2001)
“Thin observation module by bound optics (TOMBO),”
J. Tanida, T. Kumagai, K. Yamada, S. Miyatake...
TOMBO: Thin Camera
ZCam (3DV Systems),
Shuttered Light Pulse
Resolution:
1 cm for 2-7 meters
Graphics can be inserted behind and between characters
Cameras for HCI
• Frustrated total internal reflection
Han, J. Y. 2005....
Converting LCD Screen = large Camera for 3D
Interactive HCI and Video Conferencing
Matthew Hirsch, Henry Holtzman
Doug Lan...
Beyond Multi-touch: Mobile
Laptops
Mobile
Light Sensing Pixels in LCD
Display with embedded optical sensors
Sharp Microelectronics Optical Multi-touch Prototype
Design Overview
Display with embedded optical sensors
LCD, displaying mask
Optical sensor array
~2.5 cm, ~50 cm
Beyond Multi-touch: Hover Interaction
• Seamless transition of
multitouch to gesture
• Thin package, LCD
Design Vision
Object Collocated Capture
and Display
Bare Sensor
Spatial Light Modulator
Touch + Hover using Depth Sensing LCD Sensor
Overview: Sensing Depth from
Array of Virtual Cameras in
LCD
• 36-bit code at 0.3 mm resolution
• 100 fps camera at 8800 nm
http...
• Smart Barcode size: 3 mm × 3 mm
• Ordinary Camera: Distance 3 meters
Computational Probes:
Long Dista...
MIT Media Lab Camera Culture
Bokode
MIT media lab camera culture
Barcodes
markers that assist machines in
understanding the real world
MIT media lab camera culture
Bokode:
ankit mohan, grace woo, shinsaku hiura,
quinn smithwick, ramesh raskar
camera culture...
MIT Media Lab Camera Culture
Defocus
blur of
Bokode
MIT Media Lab Camera Culture
Image greatly magnified.
Simplified Ray Diagram
MIT Media Lab Camera Culture
Our Prototypes
MIT media lab camera culture
street-view tagging
Mitsubishi Electric Research Laboratories Special Effects in the Real World Raskar 2006
Vicon
Motion Capture
High-speed
IR...
Mitsubishi Electric Research Laboratories Special Effects in the Real World Raskar 2006
R Raskar, H Nii, B de Decker, Y Ha...
Mitsubishi Electric Research Laboratories Special Effects in the Real World Raskar 2006
Imperceptible Tags under clothing,...
Camera-based HCI
• Many projects here
– Robotics, Speechome, Spinner, Sixth Sense...
Computational Imaging in the Sciences
Driving Factors:
• new instruments lead to new discoveries
(e.g., Leeuwenhoek + ...
Computational Imaging in the Sciences
Medical Imaging:
• transmission tomography (CT)
• reflection tomography (ultraso...
What is Tomography?
Definition:
• imaging by sectioning (from Greek tomos: "a section" or "cutting")
• creates a cross...
Reconstruction: Filtered Backprojection
Fourier Projection-Slice Theorem:
• $\mathcal{F}^{-1}\{G_\theta(\omega)\} = P_\theta(t)$
• add slices...
Medical Applications of Tomography
bone reconstruction, segmented vessels
Slides by Doug Lanman
Biology: Confocal Microscopy
[Diagram: light source and photocell, each behind a pinhole]
Slides by Doug Lanman
Confocal Microscopy Examples
Slides by Doug Lanman
Forerunners ..
[Diagram: mask and sensor arrangements]
Fernald, Science [Sept 2006]
Shadow Refractive Reflective
Tools
for
Visual
Computing
Project Assignments
• Relighting
• Dual Photography
• Virtual Optical Bench...
Synthetic Lighting
Paul Haeberli, Jan 1992
Image-Based Actual Re-lighting
Film the background in Milan,
Me...
Dual photography from diffuse reflections:
Homework Assignment 2
Direct Global
Shower Curtain: Diffuser
• Andrew Adams' Virtual Optical Bench
Beyond Visible Spectrum
Cedip, RedShift
Goals
• Change the rules of the game
– Emerging optics, illumination, novel sensors
First Assignment: Synthetic Lighting
Paul Haeberli, Jan 1992
• Format
– 4 (3) Assignments
• Hands on with optics,
illumination, sensors, masks
Assignments:
You are encouraged to program in Matlab for image analysis...
2 Sept 18th Modern Optics and Lenses, Ray-matrix operations
3 Sept 25th Virtual Optical Bench, Lightfield Photography, Fou...
What is the emphasis?
• Learn fundamental techniques in imaging...
First Homework Assignment
• Take multiple photos by changing lighting...
Goal and Experience
[Chart: Low Level → Mid Level → High Level vs. Raw → Hyper-realism; Angle/spectrum aware; Non-visual Data, GPS; Metadata; P...]
Capture
• Overcome Limitations of Cameras
• Capture Richer Data
Multispectral
• New Classes of Visual Signals
Lightfields,...
Blind Camera
Sascha Pohflepp,
U of the Arts, Berlin, 2006
END
Raskar Computational Camera Fall 2009 Lecture 01

Keywords: Signal processing, Applied optics, Computer graphics and vision, Electronics, Art, and Online photo collections

A computational camera attempts to digitally capture the essence of visual information by exploiting the synergistic combination of task-specific optics, illumination, sensors and processing. We will discuss and play with thermal cameras, multi-spectral cameras, high-speed, and 3D range-sensing cameras and camera arrays. We will learn about opportunities in scientific and medical imaging, mobile-phone based photography, cameras for HCI and sensors mimicking animal eyes.

We will learn about the complete camera pipeline. In several hands-on projects we will build physical imaging prototypes and understand how each stage of the imaging process can be manipulated.

We will learn about modern methods for capturing and sharing visual information. If novel cameras can be designed to sample light in radically new ways, then rich and useful forms of visual information may be recorded -- beyond those present in traditional photographs. Furthermore, if computational processes can be made aware of these novel imaging models, then the scene can be analyzed in higher dimensions and novel aesthetic renderings of the visual information can be synthesized.

In this course we will study this emerging multi-disciplinary field -- one which is at the intersection of signal processing, applied optics, computer graphics and vision, electronics, art, and online sharing through social networks. We will examine whether such innovative camera-like sensors can overcome the tough problems in scene understanding and generate insightful awareness. In addition, we will develop new algorithms to exploit unusual optics, programmable wavelength control, and femto-second accurate photon counting to decompose the sensed values into perceptually critical elements.

Notes
  • http://scalarmotion.wordpress.com/2009/03/15/propeller-image-aliasing/
  • See http://www1.cs.columbia.edu/CAVE/projects/separation/occluders_gallery.php
  • Since we are adapting LCD technology we can fit a BiDi screen into laptops and mobile devices.
  • http://www.youtube.com/watch?v=2CWpZKuy-NE
  • About the Camera Culture group (http://cameraculture.info)
  • My own background
  • CPUs and computers don't mimic the human brain, and robots don't mimic human activities. Should the hardware for visual computing, which is cameras and capture devices, mimic the human eye? Even if we decide to use a successful biological vision system as a basis, we have a range of choices: from single-chambered to compound eyes, from shadow-based to refractive to reflective optics. So the goal of my group at the Media Lab is to explore new designs and develop software algorithms that exploit these designs.
  • 4 blocks: light, optics, sensors, processing (display: light-sensitive display)
  • 4 blocks: light, optics, sensors, processing (display: light-sensitive display)
  • Panasonic's Nano Care beauty appliance. Model numbers EH-SA42 and EH-SA41
    http://www.slipperybrick.com/2008/12/webcam-equipped-headphones-put-eyes-over-your-ears/
  • Image sources:
    http://meshlab.sourceforge.net/images/screenshots/SnapMeshLab.align1.png
  • Image sources:
    http://community.middlebury.edu/~schar/papers/structlight/p1.html
  • Image sources:
    http://blog.makezine.com/archive/2006/10/how_to_build_your_own_3d.html
    http://www.make-digital.com/make/vol14/?pg=195
    http://www.shapeways.com/blog/uploads/david-starter-kit.jpg
    http://www.shapeways.com/blog/archives/248-DAVID-3D-Scanner-Starter-Kit-Review.html#extended
    http://www.david-laserscanner.com/
    http://www.youtube.com/watch?v=XSrW-wAWZe4
    http://www.chromecow.com/MadScience/3DScanner/3DScan_02.htm
    Liquid scanner, various laser scanners
  • Check Steve Seitz and U of Washington Phototourism Page
  • Inference and perception are important. The intent and goal of the photo are important.
    The same way the camera put photorealistic art out of business, maybe this new artform will put the traditional camera out of business: we won't really care about a photo as merely a recording of light, but as a form that captures a meaningful subset of the visual experience. Multiperspective photos. Photosynth is an example.
  • For all preliminary info watch videos and see slides at
    http://raskar.info/photo/
    2008
    http://web.media.mit.edu/~raskar/photo/2008/VideoPresentation/
    2007
    http://portal.acm.org/citation.cfm?doid=1281500.1281502
    Given the multi-disciplinary nature of the course, the class will be open and supportive of students with different backgrounds. Hence, we are going to try a two-track approach for homeworks: one software-intensive and the other with software-hardware (electronics/optics) emphasis.
  • Slides on ‘How to Come up With Ideas’ http://stellar.mit.edu/S/course/MAS/fa09/MAS.531/index.html
  • cameraculture.media.mit.edu/femtotransientimaging
  • Why doesn't food look as yummy as it tastes in most photos?
  • Taken to the extreme, objects off the focal plane become so blurry that they effectively disappear, at least if they are smaller than the aperture. Leonardo noticed 500 years ago that a needle placed in front of his eye, because it was smaller than his pupil, did not occlude his vision.
  • Shielded by screening pigment. The visual organ provides no spatial information, but by comparing the signals from two organs or by moving the body, the worm can navigate towards brighter or darker places. It can also keep a certain body orientation. Despite the lack of spatial vision, this is an evolutionary forerunner to real eyes.
  • Conventional lenses have a limited depth of field. One can increase the depth of field and reduce the blur by stopping down the aperture. However, this leads to noisy images.
  • A solution proposed by the authors and now commercialized by CDM Optics uses a cubic phase plate. The effect of the cubic phase plate is equivalent to summing images with the lens focused at different planes. Note that the cubic phase plate can be made up of glass of varying thickness OR glass of varying refractive index. The example here shows a total phase difference of just 8 periods.
  • Unlike traditional systems, where you see a conical profile of the lightfield for a point in focus, for CDM Optics the profile is more like a twisted cylinder of straws. This makes the point spread function somewhat depth independent.
  • Put two real projectors and one virtual projector at the middle, along the line connecting the virtual light source; there is always destructive interference. Does that make sense, and is it right?
    http://raskar.scripts.mit.edu/~raskar/lightfields/
  • In wave optics, the WDF exhibits a similar property; compare the two.
  • The motivation: to augment the light field and model diffraction in the light field formulation.
  • Put two real projectors and one virtual projector at the middle, along the line connecting the virtual light source; there is always destructive interference. Does that make sense, and is it right?
  • New techniques are trying to decrease this distance using a folded optics approach. The origami lens uses multiple total internal reflections to propagate the bundle of rays.
  • Consider a conventional lens: An incoming light ray is first refracted when it enters the shaped lens surface because of the abrupt change of the refractive index from air to the homogeneous material. It passes through the lens material in a direct way until it emerges through the exit surface of the lens, where it is refracted again because of the abrupt index change from the lens material to air (see Fig. 1, right). A well-defined surface shape of the lens causes the rays to be focused on a spot and to create the image. The high precision required for the fabrication of the surfaces of conventional lenses complicates the miniaturization of the lenses and raises the costs of production.
    GRIN lenses represent an interesting alternative since the lens performance depends on a continuous change of the refractive index within the lens material. Instead of complicated shaped surfaces, plane optical surfaces are used. The light rays are continuously bent within the lens until finally they are focused on a spot. Miniaturized lenses are fabricated down to 0.2 mm in thickness or diameter. The simple geometry allows a very cost-effective production and simplifies the assembly. Varying the lens length implies an enormous flexibility at hand to fit the lens parameters such as the focal length and working distance. For example, appropriately choosing the lens length causes the image plane to lie directly on the surface plane of the lens so that sources such as optical fibers can be glued directly onto the lens surface.
  • Current explosion in information technology has been derived from our ability to control the flow of electrons in a semiconductor in the most intricate ways. Photonic crystals promise to give us similar control over photons - with even greater flexibility because we have far more control over the properties of photonic crystals than we do over the electronic properties of semiconductors.
  • Changes in the index of refraction of air are made visible by Schlieren Optics. This special optics technique is extremely sensitive to deviations of any kind that cause the light to travel a different path.
    Clearest results are obtained from flows which are largely two-dimensional and not volumetric.
    In schlieren photography, the collimated light is focused with a lens, and a knife-edge is placed at the focal point, positioned to block about half the light. In flow of uniform density this will simply make the photograph half as bright. However, in flow with density variations the distorted beam focuses imperfectly, and parts which have focused in an area covered by the knife-edge are blocked. The result is a set of lighter and darker patches corresponding to positive and negative fluid density gradients in the direction normal to the knife-edge.
  • Full-Scale Schlieren Image Reveals The Heat Coming off of a Space Heater, Lamp and Person
  • Precursor to Google Streetview Maps
  • The liquid lenses that we develop are based on the electrowetting phenomenon described below : a water drop is deposited on a substrate made of metal, covered by a thin insulating layer. The voltage applied to the substrate modifies the contact angle of the liquid drop. The liquid lens uses two isodensity liquids, one is an insulator while the other is a conductor. The variation of voltage leads to a change of curvature of the liquid-liquid interface, which in turn leads to a change of the focal length of the lens.
  • Since we are adapting LCD technology we can fit a BiDi screen into laptops and mobile devices.
  • Recall that one of our inspirations was this new class of optical multi-touch device. At the top you can see a prototype that Sharp Microelectronics has published. These devices are basically arrays of naked phototransistors. Like a document scanner, they are able to capture a sharp image of objects in contact with the surface of the screen.
    But as objects move away from the screen, without any focusing optics, the images captured by this device are blurred.
  • Our observation is that by moving the sensor plane a small distance from the LCD in an optical multitouch device, we enable mask-based light-field capture.
    We use the LCD screen to display the desired masks, multiplexing between images displayed for the user and masks displayed to create a virtual camera array. I'll explain more about the virtual camera array in a moment, but suffice it to say that once we have measurements from the array we can extract depth.
  • This device would of course support multi-touch on-screen interaction, but because it can measure the distance to objects in the scene a user’s hands can be tracked in a volume in front of the screen, without gloves or other fiducials.
  • Thus the ideal BiDi screen consists of a normal LCD panel separated by a small distance from a bare sensor array. This format creates a single device that spatially collocates a display and capture surface.
  • So here is a preview of our quantitative results. I'll explain this in more detail later on, but you can see we're able to accurately distinguish the depth of a set of resolution targets. We show above a portion of the views from our virtual cameras, a synthetically refocused image, and the depth map derived from it.
  • http://raskar.info/prakash
  • http://cobweb.ecn.purdue.edu/~malcolm/pct/CTI_Ch03.pdf
  • http://www.youtube.com/watch?v=2CWpZKuy-NE
  • In a confocal laser scanning microscope, a laser beam passes through a light source aperture and then is focused by an objective lens into a small (ideally diffraction limited) focal volume within a fluorescent specimen. A mixture of emitted fluorescent light as well as reflected laser light from the illuminated spot is then recollected by the objective lens. A beam splitter separates the light mixture by allowing only the laser light to pass through and reflecting the fluorescent light into the detection apparatus. After passing a pinhole, the fluorescent light is detected by a photodetection device (a photomultiplier tube (PMT) or avalanche photodiode), transforming the light signal into an electrical one that is recorded by a computer.
  • While the single-photosensor worm has evolved into worms with multiple photosensors, they have been evolutionary forerunners to the human eye.
    So what about the cameras here .. Maybe we are thinking about these initial types in a limited way and they are forerunners to something very exciting.
  • CPUs and computers don't mimic the human brain, and robots don't mimic human activities. Should the hardware for visual computing, which is cameras and capture devices, mimic the human eye? Even if we decide to use a successful biological vision system as a basis, we have a range of choices: from single-chambered to compound eyes, from shadow-based to refractive to reflective optics. So the goal of my group at the Media Lab is to explore new designs and develop software algorithms that exploit these designs.
  • Maybe all the consumer photographer wants is a black box with a big red button. No optics, sensors or flash.
    If I am standing in the middle of Times Square and I need to take a photo, do I really need a fancy camera?
  • The camera can trawl Flickr and retrieve a photo that was taken at roughly the same position, at the same time of day. Maybe all the consumer wants is a blind camera.
  • Transcript of "Raskar Computational Camera Fall 2009 Lecture 01"

    1. MIT Media Lab Camera Culture Ramesh Raskar http://CameraCulture.info/ MAS 131/531 Computational Camera & Photography
    2. MIT Media Lab http://scalarmotion.wordpress.com/2009/03/15/propeller-image-aliasing/
    3. 3. Direct Global Shower Curtain: Diffuser
    4. 4. Direct Global Shower Curtain: Diffuser
    5. A Teaser: Dual Photography Scene Photocell Projector
    6. A Teaser: Dual Photography Scene Photocell Projector
    7. A Teaser: Dual Photography Scene Photocell Projector
    8. A Teaser: Dual Photography Scene Photocell Projector Camera
    9. camera The 4D transport matrix: Contribution of each projector pixel to each camera pixel scene projector
    10. camera The 4D transport matrix: Contribution of each projector pixel to each camera pixel scene projector Sen et al, Siggraph 2005
    11. camera The 4D transport matrix: Which projector pixels contribute to each camera pixel? scene projector Sen et al, Siggraph 2005
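The transport matrix invites a tiny simulation. Below is a minimal numpy sketch of the idea behind dual photography, with made-up sizes and a random stand-in for the measured T (Sen et al. measure it with coded patterns): the camera image is T times the projector pattern, and Helmholtz reciprocity says the transpose renders the scene from the projector's point of view.

```python
import numpy as np

# Hypothetical sizes; a real transport matrix is measured, not random.
P = 32 * 32                      # projector pixels
C = 48 * 64                      # camera pixels
T = np.random.rand(C, P)         # stand-in for the measured 4D transport matrix

# Forward rendering: lighting one projector pixel gives camera image c = T @ p.
p = np.zeros(P)
p[100] = 1.0
c = T @ p

# Dual photograph: reciprocity lets T.T act as the "camera" at the projector,
# lit from the camera side -- the scene as the projector would see it.
p_cam = np.ones(C)               # uniform illumination from the camera side
c_dual = T.T @ p_cam
```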
    12. Dual photography from diffuse reflections: Homework Assignment 2 the camera's view Sen et al, Siggraph 2005
    13. Digital cameras are boring: Film-like Photography • Roughly the same features and controls as film cameras – zoom and focus – aperture and exposure – shutter release and advance – one shutter press = one snapshot
    14. Improving FILM-LIKE Camera Performance What would make it 'perfect'? • Dynamic Range • Vary Focus Point-by-Point • Field of view vs. Resolution • Exposure time and Frame rate
    15. 15. MIT Media Lab • What type of ‘Cameras’ will we study? • Not just film-mimicking 2D sensors – 0D sensors • Motion detector • Bar code scanner • Time-of-flight range detector – 1D sensors • Line scan camera (photofinish) • Flatbed scanner • Fax machine – 2D sensors – 2-1/2D sensors – ‘3D’ sensors – 4D and 6D tomography machines and displays
    16. 16. Can you look around a corner ?
    17. 17. Convert LCD into a big flat camera? Beyond Multi-touch
    18. Medical Applications of Tomography bone reconstruction segmented vessels Slides by Doug Lanman
    19. MIT Media Lab Camera Culture Ramesh Raskar
    20. Mitsubishi Electric Research Laboratories Raskar 2006 Spatial Augmented Reality Curved Planar Non-planar Single Projector Multiple Projectors Projector j User : T ? Pocket-Proj Objects Computational Illumination [project timeline 1997-2003] Computational Photography My Background
    21. MIT Media Lab Questions • What will a camera look like in 10, 20 years? • How will the next billion cameras change the social culture? • How can we augment the camera to support the best 'image search'? • What are the opportunities in pervasive recording? – e.g. GoogleEarth Live • How will ultra-high-speed/resolution imaging change us? • How should we change cameras for movie-making, news reporting?
    22. 22. MIT Media Lab Approach • Not just USE but CHANGE camera – Optics, illumination, sensor, movement – Exploit wavelength, speed, depth, polarization etc – Probes, actuators, Network • We have exhausted bits in pixels – Scene understanding is challenging – Build feature-revealing cameras – Process photons
    23. Plan • What is Computational Camera? • Introductions • Class format • Fast Forward Preview – Sample topics • First warmup assignment
    24. 24. Fernald, Science [Sept 2006] Shadow Refractive Reflective Tools for Visual Computing
    25. Traditional 'film-like' Photography Lens Detector Pixels Image Slide by Shree Nayar
    26. Computational Camera: Optics, Sensors and Computations Generalized Sensor Generalized Optics Computations Picture 4D Ray Bender Up to 4D Ray Sampler Ray Reconstruction Raskar and Tumblin
    27. 27. Novel Cameras Generalized Sensor Generalized Optics Processing
    28. 28. Programmable Lighting Novel Cameras Scene Generalized Sensor Generalized Optics Processing Generalized Optics Light Sources Modulators
    29. Cameras Everywhere
    30. Where are the 'cameras'?
    31. Where are the 'cameras'?
    32. 32. DIY Green Screen Effects $160 Panasonic Beauty Appliance
    33. 33. Fujifilm FinePix 3D (stereo) GigaPan Epic 100
    34. Simply getting depth is challenging! Godin et al. An Assessment of Laser Range Measurement on Marble Surfaces. Intl. M. Levoy. Why is 3D scanning hard? 3DPVT, 2002 • Must be simultaneously illuminated and imaged (occlusion problems) • Non-Lambertian BRDFs (transparency, reflections, subsurface scattering) • Acquisition time (dynamic scenes), large (or small) features, etc. Lanman and Taubin'09
    35. 35. Contact Non-Contact Active Passive Transmissive Reflective Shape-from-X (stereo/multi-view, silhouettes, focus/defocus, motion, texture, etc.) Active Variants of Passive Methods (stereo/focus/defocus using projected patterns) Time-of-Flight Triangulation (laser striping and structured lighting) Computed Tomography (CT) Transmissive Ultrasound Direct Measurements (rulers, calipers, pantographs, coordinate measuring machines (CMM), AFM) Non-optical Methods (reflective ultrasound, radar, sonar, MRI) Taxonomy of 3D Scanning: Lanman and Taubin’09
    36. DARPA Grand Challenge
    37. 37. Do-It-Yourself (DIY) 3D Scanners Lanman and Taubin’09
    38. 38. What is ‘interesting’ here? Social voting in the real world = ‘popular’
    39. 39. Pantheon
    40. 40. How do we move through a space?
    41. Computational Photography [Raskar and Tumblin] 1. Epsilon Photography – Low-level vision: Pixels – Multi-photos by perturbing camera parameters – HDR, panorama, … – 'Ultimate camera' 2. Coded Photography – Mid-Level Cues: • Regions, Edges, Motion, Direct/global – Single/few snapshot • Reversible encoding of data – Additional sensors/optics/illum – 'Scene analysis' 3. Essence Photography – High-level understanding • Not mimic human eye • Beyond single view/illum – 'New artform' captures a machine-readable representation of our world to hyper-realistically synthesize the essence of our visual experience.
    42. Goal and Experience Low Level Mid Level High Level Hyper realism Raw Angle, spectrum aware Non-visual Data, GPS Metadata Priors Comprehensive 8D reflectance field Digital Epsilon Coded Essence CP aims to make progress on both axes Camera Array HDR, FoV Focal stack Decomposition problems Depth Spectrum LightFields Human Stereo Vision Transient Imaging Virtual Object Insertion Relighting Augmented Human Experience Material editing from single photo Scene completion from photos Motion Magnification Phototourism
    43. 43. • Ramesh Raskar and Jack Tumblin • Book Publishers: A K Peters • ComputationalPhotography.org
    44. Goals • Change the rules of the game – Emerging optics, illumination, novel sensors – Exploit priors and online collections • Applications – Better scene understanding/analysis – Capture visual essence – Superior Metadata tagging for effective sharing – Fuse non-visual data • Sensors for disabled, new art forms, crowdsourcing, bridging cultures
    45. Vein Viewer (Luminetx) Locate subcutaneous veins
    46. Vein Viewer (Luminetx) Near-IR camera locates subcutaneous veins and projects their location onto the surface of the skin. Coaxial IR camera + Projector
    47. Beyond Visible Spectrum Cedip, RedShift
    48. • Format – 4 (3) Assignments • Hands on with optics, illumination, sensors, masks • Rolling schedule for overlap • We have cameras, lenses, electronics, projectors etc • Vote on best project – Mid-term exam • Test concepts – 1 Final project • Should be novel and cool • Conference-quality paper • Award for best project – Take 1 class notes – Lectures (and guest talks) – In-class + online discussion • If you are a listener – Participate in online discussion, dig new recent work – Present one short 15-minute idea or new work • Credit • Assignments: 40% • Project: 30% • Mid-term: 20% • Class participation: 10% • Pre-reqs • Helpful: Linear algebra, image processing, think in 3D • We will try to keep math to essentials, but complex concepts
    49. What is the emphasis? • Learn fundamental techniques in imaging – In class and in homeworks – Signal processing, Applied optics, Computer graphics and vision, Electronics, Art, and Online photo collections – This is not a discussion class • Three application areas – Photography • Think in higher dimensions: 3D, 4D, 6D, 8D, thermal IR, range cam, lightfields, applied optics – Active Computer Vision (real-time) • HCI, Robotics, Tracking/Segmentation etc – Scientific Imaging • Compressive sensing, wavefront coding, tomography, deconvolution, PSF – But the 3 areas are merging and use similar principles
    50. Pre-reqs • Two tracks: – Supporting students with varying backgrounds – A. software-intensive (Photoshop/HDRshop maybe ok) • But you will actually take longer to do assignments – B. software-hardware (electronics/optics) emphasis • Helpful: – Watch all videos on http://raskar.info/photo/ – Linear algebra, image processing, think in 3D – Signal processing, Applied optics, Computer graphics and vision, Electronics, Art, and Online photo collections • We will try to keep math to essentials, but introduce complex concepts at a rapid pace • Assignments versus Class material – Class material will be presented with varying degrees of complexity – Each assignment has sub-elements with increasing sophistication – You can pick your level
    51. Assignments: You are encouraged to program in Matlab for image analysis. You may need to use C++/OpenGL/Visual programming for some hardware assignments. Each student is expected to prepare notes for one lecture. These notes should be prepared and emailed to the instructor no later than the following Monday night (midnight EST). Revisions and corrections will be exchanged by email, and after changes the notes will be posted to the website before class the following week. 5 points. Course mailing list: Please make sure that your email id is on the course mailing list. Send email to raskar (at) media.mit.edu. Please fill in the email/credit/dept sheet. Office hours: Email is the best way to get in touch. Ramesh: raskar (at) media.mit.edu. Ankit: ankit (at) media.mit.edu. Other mentors: Prof Mukaigawa, Dr Ashok Veeraraghavan. After class: Muddy Charles Pub (Walker Memorial next to the tennis courts, only with valid id)
    52. 52. 2 Sept 18th Modern Optics and Lenses, Ray-matrix operations 3 Sept 25th Virtual Optical Bench, Lightfield Photography, Fourier Optics, Wavefront Coding 4 Oct 2nd Digital Illumination, Hadamard Coded and Multispectral Illumination 5 Oct 9th Emerging Sensors: High speed imaging, 3D range sensors, Femto-second concepts, Front/back illumination, Diffraction issues 6 Oct 16th Beyond Visible Spectrum: Multispectral imaging and Thermal sensors, Fluorescent imaging, 'Audio camera' 7 Oct 23rd Image Reconstruction Techniques, Deconvolution, Motion and Defocus Deblurring, Tomography, Heterodyned Photography, Compressive Sensing 8 Oct 30th Cameras for Human Computer Interaction (HCI): 0-D and 1-D sensors, Spatio-temporal coding, Frustrated TIR, Camera-display fusion 9 Nov 6th Useful techniques in Scientific and Medical Imaging: CT-scans, Strobing, Endoscopes, Astronomy and Long range imaging 10 Nov 13th Mid-term Exam, Mobile Photography, Video Blogging, Life logs and Online Photo collections 11 Nov 20th Optics and Sensing in Animal Eyes. What can we learn from successful biological vision systems? 12 Nov 27th Thanksgiving Holiday (No Class) 13 Dec 4th Final Projects
    53. Topics not covered • Only a bit of topics below • Art and Aesthetics • 4.343 Photography and Related Media • Software Image Manipulation – Traditional computer vision – Camera fundamentals, Image processing, Learning • 6.815/6.865 Digital and Computational Photography • Optics • 2.71/2.710 Optics • Photoshop – Tricks, tools • Camera Operation – Whatever is in the instruction manual
    54. Courses related to CompCamera • Spring 2010: – Camera Culture Seminar [Raskar, Media Lab] • Graduate seminar • Guest lectures + in-class discussion • Homework question each week • Final survey paper (or project) • CompCamera class: hands-on projects, technical details – Digital and Computational Photography [Durand, CSAIL] • Emphasis on software methods, Graphics and image processing • CompCamera class: hardware projects, devices, beyond visible spectrum/next-gen cameras – Optics [George Barbastathis, MechE] • Fourier optics, coherent imaging • CompCamera class: Photography, time-domain, sensors, illumination – Computational Imaging (Horn, Spring 2006) • Coding, Nuclear/Astronomical imaging, emphasis on theory
    55. Questions ..
    56. • Brief Introductions • Are you a photographer? • Do you use a camera for vision/image processing? Real-time processing? • Do you have a background in optics/sensors? • Name, Dept, Year, Why you are here
    57. 57. 2nd International Conference on Computational Photography Papers due November 2, 2009 http://cameraculture.media.mit.edu/iccp10
    58. Writing a Conference Quality Paper • How to come up with new ideas – See slideshow on Stellar site • Developing your idea – Deciding if it is worth pursuing – http://en.wikipedia.org/wiki/George_H._Heilmeier#Heilmeier.27s_Catechism – What are you trying to do? How is it done today, and what are the limits of current practice? What's new in your approach and why do you think it will be successful? Who cares? If you're successful, what difference will it make? What are the risks and the payoffs? How much will it cost? How long will it take? • Last year's outcome – 3 Siggraph/ICCV submissions, SRC award, 2 major research themes
    59. 59. How to quickly get started writing a paperHow to quickly get started writing a paper • AbstractAbstract • 1. Introduction1. Introduction • MotivationMotivation • Contributions** (For the first time, we have shown that xyz)Contributions** (For the first time, we have shown that xyz) • Related WorkRelated Work • Limitations and BenefitsLimitations and Benefits • 2. Method2. Method • (For every section as well as paragraph, first sentence should be the 'conlusion'(For every section as well as paragraph, first sentence should be the 'conlusion' of what that section or paragraph is going to show)of what that section or paragraph is going to show) • 3. More Second Order details (Section title will change)3. More Second Order details (Section title will change) • 4. Implementation4. Implementation • 5. Results5. Results • Performance EvaluationPerformance Evaluation • DemonstrationDemonstration • 6. Discussion and Issues6. Discussion and Issues • Future DirectionsFuture Directions • 7. Conclusion7. Conclusion
    60. 60. Casio EX F1Casio EX F1 • What can it do?What can it do? – Mostly high speed imagingMostly high speed imaging – 1200 fps1200 fps – Burst modeBurst mode • Déjà vuDéjà vu (Media Lab 1998) and(Media Lab 1998) and Moment CameraMoment Camera (Michael(Michael Cohen 2005)Cohen 2005) • HDRHDR • MovieMovie
    61. 61. Cameras and PhotographyCameras and Photography Art, Magic, MiracleArt, Magic, Miracle
    62. 62. TopicsTopics • Smart LightingSmart Lighting – Light stages, Domes, Light waving, Towards 8DLight stages, Domes, Light waving, Towards 8D • Computational Imaging outside PhotographyComputational Imaging outside Photography – Tomography, Coded Aperture ImagingTomography, Coded Aperture Imaging • Smart OpticsSmart Optics – Handheld Light field camera, ProgrammableHandheld Light field camera, Programmable imaging/apertureimaging/aperture • Smart SensorsSmart Sensors – HDR Cameras, Gradient Sensing, Line-scan Cameras,HDR Cameras, Gradient Sensing, Line-scan Cameras, DemodulatorsDemodulators • SpeculationsSpeculations
63. Debevec et al. 2002: 'Light Stage 3'
64. Image-Based Actual Re-lighting: Film the background in Milan, measure the incoming light, light the actress in Los Angeles, matte the background. Matched LA and Milan lighting. Debevec et al., SIGG2001
65. Can you look around a corner ?
66. Can you look around a corner ? Kirmani, Hutchinson, Davis, Raskar 2009. Accepted for ICCV 2009, Oct 2009 in Kyoto. Impulse Response of a Scene. cameraculture.media.mit.edu/femtotransientimaging
67. Femtosecond laser as light source, picosecond detector array as camera
68. Rollout Photographs © Justin Kerr. Slide idea: Steve Seitz. http://research.famsi.org/kerrmaya.html Are BOTH a 'photograph'?
69. Part 2: Fast Forward Preview
70. Synthetic Lighting. Paul Haeberli, Jan 1992
71. Homework • Take multiple photos by changing the lighting • Mix and match color channels to relight • Due Sept 19th
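A minimal sketch of the idea behind this homework, assuming one photo per light source (the file names and weights below are hypothetical, and any image I/O library can stand in for imageio): light transport is linear, so a photo under any combination of lights is a weighted, per-channel sum of the photos taken under each light alone.

```python
# Haeberli-style synthetic relighting: weighted sum of single-light photos.
import numpy as np
import imageio.v2 as imageio  # assumed available; any image I/O works

# One photo per light source, fixed viewpoint, only the lighting changes.
basis = [imageio.imread(f"light_{i}.png").astype(np.float32) / 255.0
         for i in range(3)]

# Virtual lights: an RGB weight per real light (negative weights are legal
# too, and give the "negative light" effects Haeberli demonstrates).
weights = [np.array([1.0, 0.2, 0.2]),   # light 0: strong, reddish
           np.array([0.0, 0.8, 0.0]),   # light 1: dim green
           np.array([0.3, 0.3, 1.5])]   # light 2: boosted blue

relit = sum(w * img for w, img in zip(weights, basis))
imageio.imwrite("relit.png", (np.clip(relit, 0.0, 1.0) * 255).astype(np.uint8))
```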
72. Debevec et al. 2002: 'Light Stage 3'
73. Image-Based Actual Re-lighting: Film the background in Milan, measure the incoming light, light the actress in Los Angeles, matte the background. Matched LA and Milan lighting. Debevec et al., SIGG2001
74. Depth Edge Camera
75. Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging. Ramesh Raskar, Karhan Tan, Rogerio Feris, Jingyi Yu, Matthew Turk. Mitsubishi Electric Research Labs (MERL), Cambridge, MA; U of California at Santa Barbara; U of North Carolina at Chapel Hill
76. Depth Discontinuities: internal and external shape boundaries, occluding contour, silhouettes
77. Depth Edges
78. Canny vs. Our Method
79. Participatory Urban Sensing. Deborah Estrin talk yesterday. Static/semi-dynamic/dynamic data. A. City maintenance - sidewalks. B. Pollution - sensor network (Erin Brockovich). C. Diet, offenders - graffiti, bicycle on sidewalk. Future: citizen surveillance, health monitoring. http://research.cens.ucla.edu/areas/2007/Urban_Sensing/
80. Crowdsourcing. http://www.wired.com/wired/archive/14.06/crowds.html Object recognition, fakes, template matching. Amazon Mechanical Turk: Steve Fossett search. ReCAPTCHA = OCR
81. Community Photo Collections. U of Washington/Microsoft: Photosynth
82. GigaPixel Images. Microsoft HDView. http://www.xrez.com/owens_giga.html http://www.gigapxl.org/
83. Optics • It is all about rays, not pixels • Study using light fields
84. Assignment 2 • Andrew Adams' Virtual Optical Bench
85. Light Field Inside a Camera
86. Light Field Inside a Camera: lenslet-based light field camera [Adelson and Wang 1992, Ng et al. 2005]
87. Stanford Plenoptic Camera [Ng et al. 2005]: 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens. Contax medium format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array, 125 μm square-sided microlenses
88. Digital Refocusing [Ng et al. 2005]. Can we achieve this with a Mask alone?
89. Mask-based Light Field Camera [Veeraraghavan, Raskar, Agrawal, Tumblin, Mohan, Siggraph 2007]
90. How to capture a 4D light field with a 2D sensor? What should be the pattern of the mask?
91. Radio Frequency Heterodyning: an incoming signal on a 99 MHz high-frequency carrier is mixed with a 100 MHz reference carrier; the receiver's demodulation recovers the baseband audio signal
92. Optical Heterodyning: the photographic signal (light field) is the incident modulated signal; along the path object, main lens, mask, sensor, the mask acts as the reference carrier, and software demodulation recovers the light field (by analogy with the radio receiver's 99 MHz signal and 100 MHz reference)
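A toy numpy sketch of the heterodyning idea, with frequencies scaled down from the slide's 99/100 MHz example so it runs quickly (all numbers here are illustrative): mixing the incoming carrier against a local reference shifts the signal to the low difference frequency, where a simple low-pass filter isolates it.

```python
import numpy as np

fs = 100_000                                         # 100 kHz sample rate, 1 second
t = np.arange(fs) / fs
baseband = np.sin(2 * np.pi * 5 * t)                 # slow "audio" signal
incoming = baseband * np.cos(2 * np.pi * 9_900 * t)  # on a 9.9 kHz carrier
reference = np.cos(2 * np.pi * 10_000 * t)           # 10 kHz local reference

mixed = incoming * reference                 # sum (19.9 kHz) + difference (100 Hz) terms
lowpass = np.convolve(mixed, np.ones(250) / 250, mode="same")
# lowpass ~ 0.5 * baseband * cos(2*pi*100*t): the signal now rides on the
# easy-to-handle 100 Hz difference frequency, just as the slide's receiver
# moves a 99 MHz broadcast down to 1 MHz with a 100 MHz reference.
```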
93. Captured 2D photo: encoding due to mask
94. Cosine mask used; 1/f0 mask tile
95. Modulated light field: in the (fx, fθ) plane, the modulation function creates copies at multiples of (fx0, fθ0), so the sensor slice captures the entire light field
96. 2D FFT: traditional camera photo vs. heterodyne camera photo, with the magnitude of each 2D FFT
97. Computing the 4D light field: 2D sensor photo (1800×1800) → 2D Fourier transform (1800×1800, with 9×9 = 81 spectral copies) → rearrange 2D tiles into 4D planes (200×200×9×9) → 4D IFFT → 4D light field (200×200×9×9)
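A sketch of that decoding pipeline in numpy, using the sizes from the slide (the fftshift bookkeeping of a real implementation is glossed over, so treat this as structure rather than production code):

```python
import numpy as np

def decode_light_field(photo, copies=9):
    """photo: 2D sensor image whose spectrum holds copies x copies replicas."""
    n = photo.shape[0] // copies                     # 1800 // 9 = 200
    spectrum = np.fft.fftshift(np.fft.fft2(photo))   # center the 81 spectral copies
    # Cut the 2D spectrum into tiles and stack them into a 4D spectrum.
    lf_spec = np.zeros((n, n, copies, copies), dtype=complex)
    for i in range(copies):
        for j in range(copies):
            lf_spec[:, :, i, j] = spectrum[i * n:(i + 1) * n, j * n:(j + 1) * n]
    # 4D inverse FFT yields the (x, y, theta_x, theta_y) light field.
    return np.real(np.fft.ifftn(lf_spec))

light_field = decode_light_field(np.random.rand(1800, 1800))  # placeholder photo
```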
98. Agile Spectrum Imaging. With Ankit Mohan, Jack Tumblin [Eurographics 2008]
99. Lens Glare Reduction [Raskar, Agrawal, Wilson, Veeraraghavan SIGGRAPH 2008]. Glare/flare due to camera lenses reduces contrast
100. Glare Reduction/Enhancement using 4D Ray Sampling: captured, glare-reduced, and glare-enhanced results
101. Glare = low-frequency noise in 2D • But it is high-frequency noise in 4D • Remove via simple outlier rejection
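A rough sketch of that outlier-rejection step, assuming a light field `lf` already decoded with shape (x, y, u, v): at each spatial sample, glare contaminates only a few angular samples, so a robust average over the angular axes suppresses it. (The keep-the-dimmest-rays heuristic below is a simple stand-in for the paper's rejection rule.)

```python
import numpy as np

def reduce_glare(lf, keep=0.6):
    """Average only the dimmest `keep` fraction of angular samples per pixel."""
    x, y, u, v = lf.shape
    rays = lf.reshape(x, y, u * v)
    rays_sorted = np.sort(rays, axis=2)        # order the angular samples
    k = max(1, int(u * v * keep))
    return rays_sorted[:, :, :k].mean(axis=2)  # bright glare outliers are dropped
```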
102. Long-range synthetic aperture photography. Levoy et al., SIGG2005
103. Synthetic aperture videography
104. Focus Adjustment: Sum of Bundles
105. Synthetic aperture photography: smaller aperture → less blur, smaller circle of confusion
106. Synthetic aperture photography: merge MANY cameras to act as ONE BIG LENS. Small items are so blurry they seem to disappear.
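A shift-and-add sketch of the idea (camera geometry is idealized to per-camera pixel offsets, and `np.roll` stands in for proper warping and cropping): translate each view by its parallax at the chosen depth, then average, so objects at that depth align and stay sharp while everything else blurs across the huge synthetic aperture.

```python
import numpy as np

def synthetic_aperture(images, baselines, depth):
    """images: list of HxW views; baselines: per-camera (dx, dy) offsets in
    pixel units at unit depth; depth: focal depth (parallax scales as 1/depth)."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, baselines):
        sx, sy = int(round(dx / depth)), int(round(dy / depth))
        acc += np.roll(img, (sy, sx), axis=(0, 1))  # align this view to the focal plane
    return acc / len(images)                        # misaligned depths average away
```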
107. Light field photography using a handheld plenoptic camera. Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan
108. Prototype camera: 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens. Contax medium format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array, 125 μm square-sided microlenses
109. Example of digital refocusing
110. Extending the depth of field: conventional photograph with main lens at f/22; conventional photograph with main lens at f/4; light field with main lens at f/4, after the all-focus algorithm [Agarwala 2004]
111. Imaging in the Sciences: Computed Tomography. Ramesh Raskar, CompPhoto Class Northeastern, Fall 2005. http://info.med.yale.edu/intmed/cardio/imaging/techniques/ct_imaging/
112. Borehole tomography © 2004 Marc Levoy • receivers measure end-to-end travel time • reconstruct to find velocities in intervening cells • must use a limited-angle reconstruction method (like ART) (from Reynolds)
113. Deconvolution microscopy © 2004 Marc Levoy • competitive with confocal imaging, and much faster • assumes emission or attenuation, but not scattering • therefore cannot be applied to opaque objects • begins with less information than a light field (3D vs. 4D) • ordinary microscope image, deconvolved from a focus stack
114. Coded-Aperture Imaging. Ramesh Raskar, CompPhoto Class Northeastern, Fall 2005 • Lens-free imaging! • Pinhole-camera sharpness, without massive light loss • No ray bending (OK for X-ray, gamma ray, etc.) • Two elements – Code mask: binary (opaque/transparent) – Sensor grid • Mask autocorrelation is a delta function (impulse) • Similar to MotionSensor
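A small numpy/scipy sketch of why the delta-like autocorrelation matters (scipy is assumed available; the random binary mask below is a stand-in for a real MURA code, whose autocorrelation is exactly an impulse): the sensor records the scene convolved with the mask pattern, and correlating with the mask undoes the blur.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
mask = rng.integers(0, 2, (64, 64)).astype(float)   # random binary code (toy MURA)
scene = np.zeros((64, 64))
scene[20, 30], scene[40, 10] = 1.0, 0.5             # two point sources

recorded = fftconvolve(scene, mask, mode="same")    # shadowgram on the sensor
decoded = fftconvolve(recorded, mask[::-1, ::-1], mode="same")  # correlate with mask
# `decoded` peaks at the two point sources (plus a flat bias), because the
# mask's autocorrelation is approximately a delta function.
```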
115. Mask in a Camera: mask in the aperture of a Canon EF 100 mm 1:2.8 lens, Canon SLR Rebel XT camera
116. Digital Refocusing: captured blurred image
117. Digital Refocusing: refocused image on person
118. Digital Refocusing
119. Larval Trematode Worm
120. Where should the mask go? 4D light field from a 2D photo: Heterodyne Light Field Camera (mask near the sensor). Full-resolution digital refocusing: Coded Aperture Camera (mask in the aperture)
121. Coding and Modulation in a Camera Using Masks: coded aperture for full-resolution digital refocusing (mask in the aperture); heterodyne light field camera (mask near the sensor)
122. Slides by Todor Georgiev
123. Slides by Todor Georgiev
124. Slides by Todor Georgiev
125. Conventional Lens: Limited Depth of Field. Smaller aperture vs. open aperture. Slides by Shree Nayar
126. Wavefront Coding using a Cubic Phase Plate. "Wavefront Coding: jointly optimized optical and digital imaging systems", E. Dowski, R. H. Cormack and S. D. Sarama, Aerosense Conference, April 25, 2000. Slides by Shree Nayar
127. Depth-Invariant Blur: conventional system vs. wavefront-coded system. Slides by Shree Nayar
128. Decoding depth via defocus blur: a typical PSF changes slowly through focus, a designed PSF changes fast • Design a PSF that changes quickly through focus so that defocus can be easily estimated • Implementation using a diffractive phase mask (Siggraph 2008, Levin et al. used an amplitude mask). R. Piestun, Y. Schechner, J. Shamir, "Propagation-Invariant Wave Fields with Finite Energy," JOSA A 17, 294-303 (2000); R. Piestun, J. Shamir, "Generalized propagation invariant wave-fields," JOSA A 15, 3039 (1998)
129. Rotational PSF. R. Piestun, Y. Schechner, J. Shamir, "Propagation-Invariant Wave Fields with Finite Energy," JOSA A 17, 294-303 (2000); R. Piestun, J. Shamir, "Generalized propagation invariant wave-fields," JOSA A 15, 3039 (1998)
130. Can we deal with the particle-wave duality of light with modern light field theory? Young's double-slit experiment: first null (OPD = λ/2). Diffraction and interference modeled using a ray representation
131. Light Fields • Radiance per ray • Ray parameterization: position x on a reference plane, direction θ • Goal: representing propagation, interaction and image formation of light using purely position and angle parameters
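A minimal sketch of this parameterization and the two basic operations on it (these are the standard paraxial ray-transfer relations): free-space propagation shears position by angle, and a thin lens shears angle by position.

```python
import numpy as np

def propagate(rays, d):
    """rays: array of (x, theta) rows; returns the rays a distance d away."""
    out = rays.copy()
    out[:, 0] += d * rays[:, 1]     # x' = x + d*theta; angle is unchanged
    return out

def thin_lens(rays, f):
    """Thin lens of focal length f: theta' = theta - x/f."""
    out = rays.copy()
    out[:, 1] -= rays[:, 0] / f
    return out

# Sanity check: a fan of rays from a point source placed one focal length
# in front of the lens comes out collimated (all angles ~ 0).
fan = np.column_stack([np.zeros(5), np.linspace(-0.1, 0.1, 5)])
collimated = thin_lens(propagate(fan, 0.05), 0.05)
```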
132. Light Fields for Wave Optics Effects. Light field vs. Wigner Distribution Function: LF < WDF (lacks phase properties, ignores diffraction and phase masks, radiance is positive). Augmented Light Field: ALF ~ WDF (supports coherent/incoherent light, radiance can be positive/negative, virtual light sources)
133. Limitations of Traditional Light Fields. Traditional light field: ray-optics based, simple and powerful, but limited in diffraction and interference. Wigner Distribution Function: wave-optics based, rigorous but cumbersome (holograms, beam shaping, rotational PSF)
134. Example: New Representations, Augmented Light Fields. The traditional light field (ray-optics based, simple and powerful, limited in diffraction and interference) is augmented to handle interference and diffraction, interaction with optical elements, and non-paraxial propagation, approaching the WDF (wave-optics based, rigorous but cumbersome). http://raskar.scripts.mit.edu/~raskar/lightfields/
135. Augmenting Light Field to Model Wave Optics Effects [Oh, Barbastathis, Raskar]: (ii) augmented light field with LF transformer. The light field propagates between elements; interaction at a (diffractive) optical element is modeled by a light field transformer, which can introduce negative radiance
136. Augmenting Light Field to Model Wave Optics Effects [Oh, Barbastathis, Raskar]: a virtual light projector with real-valued (possibly negative) radiance along a ray reproduces interference between two real projectors, e.g. the first null (OPD = λ/2)
137. (ii) ALF with LF Transformer
138. "Origami Lens": Thin Folded Optics (2007). "Ultrathin Cameras Using Annular Folded Optics," E. J. Tremblay, R. A. Stack, R. L. Morrison, J. E. Ford, Applied Optics, 2007 (OSA). Slides by Shree Nayar
139. Gradient Index (GRIN) Optics. A conventional convex lens has a constant refractive index but a carefully designed geometric shape; a gradient-index 'lens' has a continuous change of refractive index within the optical material (the change in RI is very small, 0.1 or 0.2)
140. Photonic Crystals • 'Routers' for photons instead of electrons • Photonic crystal – Nanostructured material with an ordered array of holes – A lattice of high-RI material embedded within a lower RI – High index contrast – 2D or 3D periodic structure • Photonic band gap – Highly periodic structures that block certain wavelengths – (creates a 'gap' or notch in wavelength) • Applications – 'Semiconductors for light': mimics the silicon band gap for electrons – Highly selective/rejecting narrow-wavelength filters (Bayer mosaic?) – Light-efficient LEDs – Optical fibers with extreme bandwidth (wavelength multiplexing) – Hype: future terahertz CPUs via optical communication on chip
141. Schlieren Photography • Images small index-of-refraction gradients in a gas • Invisible to the human eye (subtle mirage effect). Collimated light and a camera; a knife edge blocks half the light unless the distorted beam focuses imperfectly
142. http://www.mne.psu.edu/psgdl/FSSPhotoalbum/index1.htm
143. Varying Polarization. Yoav Y. Schechner, Nir Karpel 2005. [Left] The raw images taken through a polarizer at the best and worst polarization states. [Right] White-balanced results: the recovered image is much clearer, especially at distant objects, than the raw image
144. Varying Polarization • Schechner, Narasimhan, Nayar • Instant dehazing of images using polarization
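A minimal sketch in the spirit of that dehazing approach, assuming the two polarizer exposures are registered and that the airlight degree of polarization `p` and its value at the horizon `A_inf` have been calibrated (e.g., from a sky patch): airlight is partially polarized, so the difference of the two exposures estimates it, and dividing by the resulting transmission undoes the haze.

```python
import numpy as np

def dehaze(I_best, I_worst, p, A_inf, eps=1e-6):
    """I_best/I_worst: images at the best/worst polarizer orientations."""
    I_total = I_best + I_worst              # total (unpolarized-equivalent) image
    airlight = (I_worst - I_best) / p       # polarized part, scaled to full airlight
    t = 1.0 - airlight / A_inf              # per-pixel transmission estimate
    return (I_total - airlight) / np.clip(t, eps, 1.0)  # haze-free radiance
```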
145. Photon-X: Polarization Bayer Mosaic for surface normals
146. Novel Sensors • Gradient sensing • HDR camera, log sensing • Line-scan camera • Demodulating • Motion capture • 3D
147. MIT Media Lab • Camera = – 0D sensors • Motion detector • Bar code scanner • Time-of-flight range detector (DARPA Grand Challenge) – 1D sensors • Line scan camera (photofinish) • Flatbed scanner • Fax machine – 2D sensors – 2-1/2D sensors – '3D' sensors
148. Single Pixel Camera
149. Compressed Imaging: a single photocell measures the aggregate brightness Y of the scene X under a sequence of patterns. Sparsity of the image: X = Ψθ, where θ are sparse basis coefficients. Measurements: Y = ΦX, where Φ is the measurement basis. "A New Compressive Imaging Camera Architecture," D. Takhar et al., Proc. SPIE Symp. on Electronic Imaging, Vol. 6065, 2006
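A toy end-to-end sketch of this measurement model with a simple ISTA (iterative soft-thresholding) recovery; the sizes, the identity sparse basis, and the Gaussian Φ standing in for DMD patterns are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 100, 5                       # signal length, measurements, sparsity
Psi = np.eye(n)                             # sparse basis (identity in this toy)
theta = np.zeros(n)
theta[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement patterns
y = Phi @ Psi @ theta                       # m single-photocell readings

A = Phi @ Psi
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
est, lam = np.zeros(n), 0.01
for _ in range(500):                        # ISTA: gradient step + soft threshold
    z = est - A.T @ (A @ est - y) / L
    est = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
# `est` recovers the k-sparse theta from only m < n measurements.
```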
150. Single Pixel Camera: image on the DMD
151. Example: original vs. compressed imaging. 4096 pixels from 1600 measurements (40%); 65536 pixels from 6600 measurements (10%)
152. Example: original vs. compressed imaging. 4096 pixels from 800 measurements (20%) and from 1600 measurements (40%)
153. Line Scan Camera: Photofinish, 2000 Hz
154. The CityBlock Project © 2004 Marc Levoy. Precursor to Google Street View maps
155. Problem: Motion Deblurring. Input image and results
156. Blurred taxi: image deblurred by solving a linear system, with no post-processing
157. Application: Aerial Imaging. Goal: capture a sharp image with sufficient brightness using a camera on a fast-moving aircraft. Long exposure: the moving camera creates smear. Short exposure: avoids blur, but the image is dark. Sharpness versus image pixel brightness. Solution: Flutter Shutter
158. Application: Electronic Toll Booths. Goal: automatic number plate recognition from a sharp image taken by a monitoring camera for detecting license plates. The ideal exposure duration depends on car speed, which is difficult to determine a priori; a longer exposure blurs the license plate, making character recognition difficult. Solution: a sufficiently long exposure with a fluttered shutter
159. Fluttered Shutter Camera. Raskar, Agrawal, Tumblin, Siggraph 2006. A ferroelectric shutter in front of the lens is turned opaque or transparent in a rapid binary sequence
160. Coded Exposure Photography: Assisting Motion Deblurring using Fluttered Shutter. Raskar, Agrawal, Tumblin (Siggraph 2006). Captured photos and deblurred results: short exposure (image is dark and noisy), traditional shutter (result has banding artifacts and some spatial frequencies are lost), MURA and coded exposure (decoded image is as good as an image of a static scene)
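A 1D sketch of why the flutter code helps (the 16-chip code below is illustrative, not the paper's actual sequence): motion blur convolves the scene with the shutter's on/off pattern, and a broadband code keeps that linear system invertible, so simple least squares recovers the signal.

```python
import numpy as np

code = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1], float)
signal = np.zeros(128)
signal[40:60] = 1.0                          # a bright bar moving past the sensor

n, k = len(signal), len(code)
A = np.zeros((n + k - 1, n))                 # blur matrix: convolution with the code
for i in range(n):
    A[i:i + k, i] = code

blurred = A @ signal + np.random.default_rng(2).normal(0, 1e-3, n + k - 1)
recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)  # the "linear system" solve
# With an all-ones (traditional) code, A loses spatial frequencies and the
# same solve amplifies noise into banding; the fluttered code avoids this.
```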
161. Compound Lens of a Dragonfly
162. TOMBO: Thin Camera (2001). "Thin observation module by bound optics (TOMBO)," J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, Applied Optics, 2001
163. TOMBO: Thin Camera
164. ZCam (3DV Systems), shuttered light pulse. Resolution: 1 cm for 2-7 meters
165. Graphics can be inserted behind and between characters
166. Cameras for HCI • Frustrated total internal reflection. Han, J. Y. 2005. Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology
167. BiDi Screen: converting an LCD screen into a large camera for 3D interactive HCI and video conferencing. Matthew Hirsch, Henry Holtzman, Doug Lanman, Ramesh Raskar. Siggraph Asia 2009. Class project in CompCam 2008, SRC winner
168. Beyond Multi-touch: Mobile, Laptops
169. Light-Sensing Pixels in LCD: display with embedded optical sensors. Sharp Microelectronics optical multi-touch prototype
170. Design Overview: display with embedded optical sensors. LCD displaying a mask; optical sensor array; ~2.5 cm; ~50 cm
171. Beyond Multi-touch: Hover Interaction • Seamless transition from multitouch to gesture • Thin package, LCD
172. Design Vision: collocated capture and display (bare sensor, spatial light modulator, object)
173. Touch + Hover using Depth-Sensing LCD Sensor
174. Overview: Sensing Depth from an Array of Virtual Cameras in the LCD
175. • 36-bit code at 0.3 mm resolution • 100 fps camera at 8800 nm. http://www.acreo.se/upload/Publications/Proceedings/OE00/00-KAURANEN.pdf
176. Computational Probes: Long-Distance Barcodes. Mohan, Woo, Smithwick, Hiura, Raskar. Accepted as a Siggraph 2009 paper • Smart barcode size: 3 mm × 3 mm • Ordinary camera: distance 3 meters
177. MIT Media Lab Camera Culture: Bokode
178. Barcodes: markers that assist machines in understanding the real world
179. Bokode: imperceptible visual tags for camera-based interaction from a distance. Ankit Mohan, Grace Woo, Shinsaku Hiura, Quinn Smithwick, Ramesh Raskar. Camera Culture group, MIT Media Lab
180. Defocus blur of a Bokode
181. Simplified ray diagram; image greatly magnified
182. Our Prototypes
183. Street-view tagging
184. Special Effects in the Real World (Raskar 2006, MERL): Vicon motion capture, high-speed IR camera. Medical rehabilitation, athlete analysis, performance capture, biomechanical analysis
185. Prakash: Lighting-Aware Motion Capture Using Photosensing Markers and Multiplexed Illuminators. R Raskar, H Nii, B de Decker, Y Hashimoto, J Summet, D Moore, Y Zhao, J Westhues, P Dietz, M Inami, S Nayar, J Barnwell, M Noland, P Bekaert, V Branzoi, E Bruns. Siggraph 2007
186. Imperceptible tags under clothing, tracked under ambient light. Hidden marker tags outdoors; unique id. http://raskar.info/prakash
187. Camera-based HCI • Many projects here – Robotics, Speechome, Spinner, Sixth Sense • Sony EyeToy • Wii • Xbox/Natal • Microsoft Surface – Shahram Izadi (Microsoft Surface/SecondLight) – Talk at Media Lab, Tuesday Sept 22nd, 3pm
188. Computational Imaging in the Sciences. Driving factors: new instruments lead to new discoveries (e.g., Leeuwenhoek + microscopy → microbiology). Q: What was the most important instrument of the last century? A: the digital computer. What is computational imaging? According to B.K. Horn: "…imaging methods in which computation is inherent in image formation." Digital processing has led to a revolution in medical and scientific data collection (e.g., CT, MRI, PET, remote sensing). Slides by Doug Lanman
189. Computational Imaging in the Sciences. Medical imaging: transmission tomography (CT), reflection tomography (ultrasound). Geophysics: borehole tomography, seismic reflection surveying. Applied physics: diffuse optical tomography, diffraction tomography, scattering and inverse scattering. Biology: confocal microscopy, deconvolution microscopy. Astronomy: coded-aperture imaging, interferometric imaging. Remote sensing: multi-perspective panoramas, synthetic aperture radar. Optics: wavefront coding, light field photography, holography. Slides by Doug Lanman
190. What is Tomography? Definition: imaging by sectioning (from Greek tomos, "a section" or "cutting"); creates a cross-sectional image of an object from transmission or reflection data collected by illuminating it from many directions. Parallel-beam tomography vs. fan-beam tomography. Slides by Doug Lanman
191. Reconstruction: Filtered Backprojection. Fourier projection-slice theorem: F⁻¹{G_θ(ω)} = P_θ(t), i.e., the 1D Fourier transform of a projection P_θ(t) is a slice G_θ(ω) of the 2D Fourier transform of g(x,y). Reconstruction: add the slices G_θ(ω) into the (fx, fy) plane at all angles θ and inverse transform to yield g(x,y); equivalently, add 2D backprojections P_θ(t,s) into (x,y) at all angles θ. Slides by Doug Lanman
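A compact numpy sketch of filtered backprojection as described above (nearest-neighbor backprojection and an unwindowed ramp filter, so a demonstration rather than a production reconstructor):

```python
import numpy as np

def filtered_backprojection(sinogram, angles):
    """sinogram[a, t] = projection P_theta(t) at angle angles[a] (radians)."""
    n_angles, n_t = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_t))                 # |omega| ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    c = n_t // 2
    xs, ys = np.meshgrid(np.arange(n_t) - c, np.arange(n_t) - c)
    recon = np.zeros((n_t, n_t))
    for a, theta in enumerate(angles):                 # smear each projection back
        t = xs * np.cos(theta) + ys * np.sin(theta)    # detector coordinate per pixel
        idx = np.clip(np.round(t).astype(int) + c, 0, n_t - 1)
        recon += filtered[a][idx]
    return recon * np.pi / n_angles

# A point at the center projects to t = 0 at every angle:
sino = np.zeros((180, 65)); sino[:, 32] = 1.0
img = filtered_backprojection(sino, np.linspace(0, np.pi, 180, endpoint=False))
```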
192. Medical Applications of Tomography: bone reconstruction, segmented vessels. Slides by Doug Lanman
193. Biology: Confocal Microscopy. A pinhole in front of the light source and a pinhole in front of the photocell reject out-of-focus light. Slides by Doug Lanman
194. Confocal Microscopy Examples. Slides by Doug Lanman
195. Forerunners .. mask-and-sensor designs
196. Fernald, Science [Sept 2006]. Shadow, refractive, reflective: tools for visual computing
197. Project Assignments • Relighting • Dual Photography • Virtual Optical Bench • Light field capture – Mask or LCD with programmable aperture • One of – High-speed imaging – Thermal imaging – 3D range sensing • Final Project
198. Synthetic Lighting. Paul Haeberli, Jan 1992
199. Image-Based Actual Re-lighting: Film the background in Milan, measure the incoming light, light the actress in Los Angeles, matte the background. Matched LA and Milan lighting. Debevec et al., SIGG2001
200. Dual photography from diffuse reflections: Homework Assignment 2. The camera's view. Sen et al., Siggraph 2005
201. Direct vs. Global. Shower curtain: diffuser
202. • Andrew Adams' Virtual Optical Bench
203. Beyond Visible Spectrum: Cedip, RedShift
204. Goals • Change the rules of the game – Emerging optics, illumination, novel sensors – Exploit priors and online collections • Applications – Better scene understanding/analysis – Capture visual essence – Superior metadata tagging for effective sharing – Fuse non-visual data • Sensors for the disabled, new art forms, crowdsourcing, bridging cultures
205. First Assignment: Synthetic Lighting. Paul Haeberli, Jan 1992
206. • Format – 4 (3) Assignments • Hands-on with optics, illumination, sensors, masks • Rolling schedule for overlap • We have cameras, lenses, electronics, projectors etc. • Vote on best project – Mid-term exam • Tests concepts – 1 Final project • Should be novel and cool • Conference-quality paper • Award for best project – Take 1 class notes – Lectures (and guest talks) – In-class + online discussion • If you are a listener – Participate in online discussion, dig up recent work – Present one short 15-minute idea or new work • Credit • Assignments: 40% • Project: 30% • Mid-term: 20% • Class participation: 10% • Pre-reqs • Helpful: linear algebra, image processing, thinking in 3D • We will try to keep math to essentials, but expect complex concepts
207. Assignments: You are encouraged to program in Matlab for image analysis. You may need to use C++/OpenGL/visual programming for some hardware assignments. Each student is expected to prepare notes for one lecture. These notes should be prepared and emailed to the instructor no later than the following Monday night (midnight EST); revisions and corrections will be exchanged by email, and after changes the notes will be posted to the website before class the following week. 5 points. Course mailing list: Please make sure that your email id is on the course mailing list; send email to raskar (at) media.mit.edu. Please fill in the email/credit/dept sheet. Office hours: Email is the best way to get in touch. Ramesh: raskar (at) media.mit.edu. Ankit: ankit (at) media.mit.edu. After class: Muddy Charles Pub (Walker Memorial, next to the tennis courts)
208. 2 Sept 18th: Modern Optics and Lenses, Ray-matrix operations. 3 Sept 25th: Virtual Optical Bench, Lightfield Photography, Fourier Optics, Wavefront Coding. 4 Oct 2nd: Digital Illumination, Hadamard Coded and Multispectral Illumination. 5 Oct 9th: Emerging Sensors: High-speed imaging, 3D range sensors, Femto-second concepts, Front/back illumination, Diffraction issues. 6 Oct 16th: Beyond Visible Spectrum: Multispectral imaging and Thermal sensors, Fluorescent imaging, 'Audio camera'. 7 Oct 23rd: Image Reconstruction Techniques, Deconvolution, Motion and Defocus Deblurring, Tomography, Heterodyned Photography, Compressive Sensing. 8 Oct 30th: Cameras for Human Computer Interaction (HCI): 0-D and 1-D sensors, Spatio-temporal coding, Frustrated TIR, Camera-display fusion. 9 Nov 6th: Useful techniques in Scientific and Medical Imaging: CT scans, Strobing, Endoscopes, Astronomy and Long-range imaging. 10 Nov 13th: Mid-term Exam, Mobile Photography, Video Blogging, Life logs and Online Photo collections. 11 Nov 20th: Optics and Sensing in Animal Eyes: What can we learn from successful biological vision systems? 12 Nov 27th: Thanksgiving Holiday (No Class). 13 Dec 4th: Final Projects
209. What is the emphasis? • Learn fundamental techniques in imaging – In class and in homeworks – Signal processing, applied optics, computer graphics and vision, electronics, art, and online photo collections – This is not a discussion class • Three application areas – Photography • Think in higher dimensions: 4D, 6D, 8D, thermal, range cameras, light fields, applied optics – Active computer vision (real-time) • HCI, robotics, tracking/segmentation etc. – Scientific imaging • Compressive sensing, wavefront coding, tomography, deconvolution, PSF – But the 3 areas are merging and use similar principles
210. First Homework Assignment • Take multiple photos by changing the lighting • Mix and match color channels to relight • Due Sept 25th • Need volunteer for taking notes for next class – Sept 18: Sam Perli – Sept 25: ?
211. Goal and Experience (axes: Low level / Mid level / High level; Raw / Epsilon / Coded / Essence). CP aims to make progress on both axes. Labels include: hyper-realism; angle, spectrum aware; non-visual data, GPS; metadata; priors; comprehensive 8D reflectance field; camera array; HDR, FoV; focal stack; decomposition problems; depth; spectrum; light fields; human stereo vision; transient imaging; virtual object insertion; relighting; augmented human experience; material editing from a single photo; scene completion from photos; motion magnification; Phototourism
212. Computational Photography. Capture • Overcome limitations of cameras • Capture richer data (multispectral) • New classes of visual signals (light fields, depth, direct/global, fg/bg separation). Hyperrealistic Synthesis • Post-capture control • Impossible photos • Exploit scientific imaging. http://raskar.info/photo/
213. Blind Camera. Sascha Pohflepp, U of the Arts, Berlin, 2006
214. END