Raskar COSI invited talk Oct 2009
  • Inference and perception are important, as are the intent and goal of the photo. Just as the camera put photorealistic painting out of business, maybe this new art form will put the traditional camera out of business, because we won't really care about a photo as merely a recording of light, but about a form that captures a meaningful subset of the visual experience. Multiperspective photos; Photosynth is an example.
  • Pioneered by Nayar and Levoy. Synthesis; minimal change of hardware; goals are often opposite (human perception); use of non-visual data; and networks.
  • infinity-corrected ‘long-distance microscope’
  • Augmented plenoptic function: the motivation is to augment the light field so that diffraction can be modeled in a light-field formulation.
  • Multiplication in space; convolution in angle.
  • More specifically, the same LF propagation: can we stay purely in ray space and support propagation, diffraction, and interference? I was highly inspired by Markus Testorf's talk in Charlotte organized by Prof. Fiddy, and he also took the effort to explain things to us and pointed us to his two books on phase-space optics. In addition, I am looking forward to Prof. Alonso's talk and that of my MIT colleague Anthony Accorsi. Plus, Zhang and Levoy have clearly described a very useful subset of wave phenomena that can be explained with the traditional light field. Our goal in augmenting the LF is, however, different. Personally, this has been my own path of discovery for how I can express complex wave phenomena with rays.
  • Since we are adapting LCD technology we can fit a BiDi screen into laptops and mobile devices.
  • So here is a preview of our quantitative results. I'll explain this in more detail later on, but you can see we're able to accurately distinguish the depth of a set of resolution targets. We show above a portion of the views from our virtual cameras, a synthetically refocused image, and the depth map derived from it.
  • With the right synergy between capture and synthesis techniques, we go beyond traditional imaging and change the rules of the game.

Raskar COSI invited talk Oct 2009 Presentation Transcript

  • 1. Camera Culture. Ramesh Raskar, Associate Professor, MIT Media Lab. Computational Photography. http://raskar.info
  • 2. Invertible Motion Blur in Video Photo 1 Photo 2 Photo 3 Deblurring Agrawal, Xu, Raskar, Siggraph 2009
  • 3. Traditional Exposure Video. [Plot: DFT of the motion PSF (box filter); information is lost. Exposure time diagram.]
  • 4. Coded Exposure (Flutter Shutter) Raskar et al. 2006 Single Photo Deblurred Image
  • 5.–7. Varying Exposure Video. [Animation across three slides: DFT plots as successive exposure times are added.]
  • 8. Varying Exposure Video == PSF Null-Filling. [DFT plot: the joint frequency spectrum preserves high frequencies.]
  • 9. Varying Exposure Video: Exploit auto-exposure mode
  • 10. Completely automatic: (i) Segmentation, (ii) PSF estimation, (iii) deblurring Blurred Photo
  • 11. Deblurred Result Ground Truth Input Photos
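The spectral claim behind slides 3–8 (a box-filter exposure has nulls in its DFT, so frequencies are lost, while a well-chosen fluttered exposure preserves them) can be checked numerically. A minimal sketch; it uses a generic pseudorandom search for a good binary code, not the specific 52-chop sequence published by Raskar et al. 2006:

```python
import numpy as np

N, L = 256, 32   # DFT length and exposure length (in shutter "chops")

def min_dft_mag(code):
    """Smallest magnitude of the zero-padded DFT of an exposure code."""
    return np.abs(np.fft.fft(code, N)).min()

# Traditional exposure: the shutter stays open, so the motion PSF is a
# box filter whose spectrum has exact nulls -- those frequencies are lost.
box_min = min_dft_mag(np.ones(L))

# Coded exposure: pick, from random binary codes, the one whose weakest
# frequency is strongest (the selection criterion used when designing
# flutter-shutter codes); its spectrum stays bounded away from zero.
rng = np.random.default_rng(0)
candidates = (rng.random((200, L)) > 0.5).astype(float)
coded = max(candidates, key=min_dft_mag)
coded_min = min_dft_mag(coded)

print(f"box min |DFT| = {box_min:.1e}, coded min |DFT| = {coded_min:.2f}")
```

Because the coded spectrum has no nulls, deconvolution is well-posed, which is why the deblurred results above are invertible rather than lossy.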
  • 12. Computational Photography [Raskar and Tumblin]
    • Computational Imaging vs Computational Photography
    • Synthesis
    • Minimal change of hardware
    • Goals are often opposite (human perception)
    • Epsilon Photography
      • Low-level vision: Pixels
      • ‘Ultimate camera’
    • Coded Photography
      • Mid-Level Cues:
        • Regions, Edges, Motion, Direct/global
      • ‘Scene analysis’
    • Essence Photography
      • High-level understanding
      • ‘New artform’
    captures a machine-readable representation of our world to hyper-realistically synthesize the essence of our visual experience.
  • 13. Synthesis. [Slide diagram: the Computational Photography landscape, from Low Level through Mid Level to High Level and hyper-realism, and from Digital (Epsilon) through Coded to Essence; Computational Photography (vs. Imaging) aims to make progress on both axes. Labels include: raw; angle-, spectrum-aware; non-visual data, GPS; metadata; priors; comprehensive 8D reflectance field; camera array; HDR, FoV; focal stack; decomposition problems; depth; spectrum; light fields; resolution; human stereo vision; transient imaging; virtual object insertion; relighting; augmented human experience; material editing from a single photo; scene completion from photos; motion magnification; Phototourism.]
  • 14. Enhanced Defocus Blur: lots of glass; heavy; bulky; expensive.
  • 15. Image Destabilization: Programmable Defocus using Lens and Sensor Motion. Ankit Mohan, Douglas Lanman, Shinsaku Hiura, Ramesh Raskar, MIT Media Lab, Camera Culture
  • 16. Image Destabilization Lens Sensor Camera Static Scene
  • 17. Image Destabilization Static Scene Lens Motion Sensor Motion Camera Mohan, Lanman,Hiura, Raskar ICCP 2009
  • 18.–21. Shifting Pinhole and Sensor. [Animation across four slides: points A and B are imaged to A′ and B′ through a pinhole translating at velocity v_p and a sensor translating at velocity v_s; depths d_a and d_b, pinhole-to-sensor distance d_s; “Focus Here” marks the in-focus plane.]
  • 22. “Time Lens”: ratio of speeds. [Slide equations: Lens Equation, Virtual Focal Length, Virtual F-Number.]
  • 23. Time Lens:
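The "time lens" behavior can be verified from pinhole geometry alone. A flatland sketch with illustrative numbers (the derivation here is a reconstruction from the diagram, not transcribed from the slide): with the pinhole translating at speed v_p and the sensor, a distance d_s behind it, translating at speed v_s, a scene point stays stationary on the moving sensor, i.e. in focus, exactly at depth d = d_s / (v_s/v_p − 1):

```python
d_s = 0.05            # pinhole-to-sensor distance (m, illustrative)
v_p, v_s = 1.0, 3.0   # pinhole and sensor lateral speeds (illustrative)
d_focus = d_s / (v_s / v_p - 1.0)   # predicted in-focus depth

def image_track(depth, x_pt=0.01, times=(0.0, 0.5, 1.0)):
    """Position of a scene point's image in moving-sensor coordinates.

    The point sits at lateral offset x_pt and the given depth in front
    of the pinhole plane; similar triangles give its projection.
    """
    track = []
    for t in times:
        p, s = v_p * t, v_s * t                 # pinhole and sensor offsets
        x_img = p + (p - x_pt) * d_s / depth    # projection onto sensor plane
        track.append(x_img - s)                 # relative to the moving sensor
    return track

in_focus = image_track(d_focus)       # stationary -> integrates to a sharp point
defocused = image_track(2 * d_focus)  # drifts across the sensor -> blur
```

Points at other depths smear by an amount that grows with their distance from d_focus, which is what lets the lens and sensor motion program the defocus blur, as the focus-plane examples on the following slides show.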
  • 24. Adjusting the Focus Plane all-in-focus image
  • 25. Adjusting the Focus Plane focused in the front using destabilization
  • 26. Adjusting the Focus Plane focused in the middle using destabilization
  • 27. Adjusting the Focus Plane focused in the back using destabilization
  • 28. Bokode
  • 29.
    • Smart Barcode size : 3mm x 3mm
    • Ordinary Camera: Distance 3 meter
    Long Distance Barcodes
  • 30.  
  • 31. Defocus blur of Bokode
  • 32. Coding in Angle Mohan, Woo, Smithwick, Hiura, Raskar [Siggraph 2009]
  • 33. Encoding in Angle, not space, time, or wavelength. [Diagram: Bokode (angle) → camera sensor.]
  • 34.
    • circle of confusion → circle of information (quote suggested by Kurt Akeley)
    Encoding in Angle, not space, time, or wavelength. [Diagram: Bokode (angle) → camera sensor.]
  • 35.
    • magnification = f_c / f_b (microscope);
    • focus always at infinity
    ‘long-distance microscope’ [Diagram: Bokode (focal length f_b) viewed by a camera (focal length f_c).]
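The two bullets above (magnification f_c/f_b, focus always at infinity) follow from paraxial optics: each point of the bokode pattern sits in the focal plane of its tiny lenslet, so it exits as a collimated beam whose angle encodes its position, and a camera focused at infinity converts that angle back into a sensor position, independent of the camera's distance. A toy paraxial sketch with illustrative focal lengths:

```python
f_b = 0.005   # bokode lenslet focal length (illustrative, 5 mm)
f_c = 0.050   # camera focal length (illustrative, 50 mm)

def sensor_pos(x_b):
    """Map a bokode pattern point at offset x_b to its sensor position.

    The point lies in the lenslet's focal plane, so it exits as a
    collimated beam at paraxial angle x_b / f_b; a camera focused at
    infinity images that angle at f_c * angle, regardless of how far
    away the bokode is -- hence the 'long-distance microscope'.
    """
    angle = x_b / f_b
    return f_c * angle

magnification = sensor_pos(1e-4) / 1e-4   # equals f_c / f_b
```

This distance invariance is why an ordinary camera can read a 3 mm tag from meters away, as on slide 29.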
  • 36.
    • Product labels
    Street-view Tagging
  • 37. Capturing Bokodes: a cell-phone camera close to the Bokode captures 10,000+ bytes of data.
  • 38. Augmenting the Plenoptic Function. [Comparison: the Wigner Distribution Function handles interference & diffraction and interaction with optical elements; it is wave-optics based, rigorous but cumbersome. The traditional light field is ray-optics based, simple and powerful. The Augmented LF extends the traditional light field to cover these wave effects.]
  • 39. Light Fields. Goal: representing propagation, interaction, and image formation of light using purely position and angle parameters. [Diagram: a reference plane parameterizes position and angle; LF propagation → (diffractive) optical element acting as a light field transformer → LF propagation.]
  • 40.
      • Free-space propagation
      • Light field transformer
      • Virtual light projector
      • Possibly negative radiance
  • 41. LF Transformer: input LF to output LF. General case: 8D kernel; thin elements: 6D; angle-shift invariant: 4D.
  • 42. Augmented LF framework: 1. LF propagation; 2. light field transformer ((diffractive) optical element); 3. negative radiance; 4. interference. Tech report, S. B. Oh et al., http://web.media.mit.edu/~raskar/RayWavefront/
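The "thin element" case of the light field transformer, and the speaker note's "multiplication in space, convolution in angle", can be sketched in flatland, where a light field has one position and one angle coordinate. A thin element then carries a kernel that is local in position and convolutional in angle; the fully general transformer would instead need a kernel over all input and output coordinates (8D for 4D light fields, per slide 41). A toy numpy sketch with a made-up angular-blur kernel:

```python
import numpy as np

nx, nt = 64, 32                     # position and angle samples (flatland)
rng = np.random.default_rng(1)
lf_in = rng.random((nx, nt))        # toy incoming light field L(x, theta)

# Thin-element kernel: one angular response per position (here the same
# 3-tap angular blur everywhere, normalized to conserve total radiance).
kernel = np.zeros((nx, nt))
kernel[:, :3] = 1.0 / 3.0

# Multiplication in space, convolution in angle: each position x keeps
# its own rays but has its angular distribution convolved (circularly,
# via the DFT, for simplicity).
lf_out = np.empty_like(lf_in)
for x in range(nx):
    lf_out[x] = np.real(np.fft.ifft(np.fft.fft(lf_in[x]) * np.fft.fft(kernel[x])))
```

Because each kernel row sums to 1, total radiance is conserved; allowing the kernel to go negative is how the augmented framework admits the "possibly negative radiance" needed to express interference.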
  • 43.
    • Cubic phase plate
    [Figure labels: pure ray bending; positive-only radiance; diffraction; interference of +/− rays; rotating PSF.]
  • 44. Interference received on Complex Geometry
  • 45. Can you ‘see’ around a corner ?
  • 46. Femto-Photography: Higher Dimensional LF FemtoFlash UltraFast Detector Computational Optics Serious Sync
  • 47.  
  • 48. Important Dates. Submission: November 2, 2009; notification: February 2, 2010. Topics: Computational Cameras; Multiple Images and Camera Arrays; Computational Illumination; Advanced Image and Video Processing; Scientific Photography and Videography; Organizing and Exploiting Photo & Video Collections. Program Chairs: Kyros Kutulakos, U. Toronto; Rafael Piestun, U. Colorado; Ramesh Raskar, MIT.
 International Conference on Computational Photography (ICCP), March 29–30, 2010, MIT, Cambridge MA. http://cameraculture.media.mit.edu/iccp10/
  • 49. Beyond Multi-touch: mobile devices and laptops.
  • 50. BiDi Screen: converting an LCD screen into a large camera for 3D interactive HCI and video conferencing. Matthew Hirsch, Henry Holtzman, Doug Lanman, Ramesh Raskar. Siggraph Asia 2009.
  • 51. Overview: Sensing Depth from Array of Virtual Cameras in LCD
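The idea of sensing depth from an array of virtual cameras can be illustrated with shift-and-add refocusing: views from offset cameras see disparities proportional to 1/depth, so re-aligning and averaging them yields a sharp image only at the true depth. A 1D flatland sketch with invented numbers; this is the generic synthetic-aperture recipe, not the BiDi paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
pattern = rng.random(256)        # 1D scene texture at a single depth
f = 50.0                         # focal length in pixel units (toy)
d_true = 10.0                    # true scene depth (toy units)
offsets = np.arange(-3, 4)       # 7 virtual camera positions

def view(c, d):
    """View from the camera at offset c: disparity is proportional to c/d."""
    return np.roll(pattern, int(round(c * f / d)))

views = [view(c, d_true) for c in offsets]

def refocus_sharpness(d):
    """Shift-and-add refocus at candidate depth d, scored by gradient energy."""
    aligned = [np.roll(v, -int(round(c * f / d))) for v, c in zip(views, offsets)]
    return np.abs(np.diff(np.mean(aligned, axis=0))).sum()

depths = [5.0, 10.0, 20.0]
best = max(depths, key=refocus_sharpness)   # the sharpest refocus depth
```

Running the sharpness score over a range of candidate depths per pixel is what turns a stack of synthetically refocused images into a depth map, as in the resolution-target result previewed earlier.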
  • 52.
    • Beyond Traditional Imaging
      • Invertible motion blur in video
      • Looking around a corner
      • LCDs as virtual cameras
      • Computational probes (bokode)
      • Image destabilization
    • Augmented Light Field
      • Rays for diffraction+interference
    Camera Culture Group, MIT Media Lab. Ramesh Raskar, http://raskar.info. Computational Photography. [Recap diagrams: the slide-13 landscape (Digital / Epsilon, Coded, Essence; Computational Photography aims to make progress on both axes) and the augmented light field framework (WDF, light field, augmented LF; LF propagation, (diffractive) optical element as light field transformer).]