Raskar COSI invited talk Oct 2009

Slide notes
  • Inference and perception are important; the intent and goal of the photo are important. The same way the camera put photorealistic art out of business, maybe this new artform will put the traditional camera out of business, because we won't really care about a photo as merely a recording of light, but as a form that captures a meaningful subset of the visual experience. Multiperspective photos; Photosynth is an example.
  • Pioneered by Nayar and Levoy. Synthesis; minimal change of hardware; goals are often opposite (human perception); use of non-visual data and the network.
  • infinity-corrected ‘long-distance microscope’
  • Augmented plenoptic function: the motivation is to augment the light field and model diffraction within the light-field formulation.
  • Multiplication in space; convolution in angle.
  • More specifically, with the same LF propagation: can we stay purely in ray space and support propagation, diffraction, and interference? I was highly inspired by Markus Testorf's talk in Charlotte, organized by Prof. Fiddy. He also took the effort to explain it to us and pointed us to his two books on phase-space optics. In addition, I am looking forward to Prof. Alonso's talk and that of my MIT colleague Anthony Accorsi. Zhang and Levoy have also clearly described a very useful subset of wave phenomena that can be explained with the traditional light field. Our goal in augmenting the LF is, however, different. Personally, this has been my own path of discovery for how I can express complex wave phenomena with rays.
  • Since we are adapting LCD technology, we can fit a BiDi screen into laptops and mobile devices.
  • So here is a preview of our quantitative results. I'll explain this in more detail later on, but you can see we're able to accurately distinguish the depth of a set of resolution targets. We show above a portion of the views from our virtual cameras, a synthetically refocused image, and the depth map derived from it.
  • With the right synergy between capture and synthesis techniques, we go beyond traditional imaging and change the rules of the game.

    1. Camera Culture. Ramesh Raskar, Associate Professor, MIT Media Lab. Computational Photography. http://raskar.info
    2. Invertible Motion Blur in Video. Photo 1, Photo 2, Photo 3 → deblurring. Agrawal, Xu, Raskar, Siggraph 2009.
    3. Traditional Exposure Video: the motion PSF is a box filter (one per exposure time); its DFT shows that information is lost.
    4. Coded Exposure (Flutter Shutter), Raskar et al. 2006: single photo → deblurred image.
    5-7. Varying Exposure Video: the DFT of the box-filter motion PSF is shown for each of several different exposure times.
    8. Varying Exposure Video == PSF Null-Filling: the joint frequency spectrum (DFT across the varying exposures) preserves high frequencies.
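A rough numerical sketch of why this works (illustrative parameters only; this is not the authors' chop sequence or code): a plain box exposure has exact zeros in its DFT, a flutter-shutter code avoids them within a single photo, and two frames with different box exposure lengths fill in each other's nulls, so the joint spectrum has no exact zeros.

```python
import numpy as np

N = 64  # samples across one frame's exposure window (illustrative)

def spectrum(psf):
    """Magnitude of the DFT of a normalized blur kernel."""
    return np.abs(np.fft.rfft(psf / psf.sum()))

# Traditional exposure: a box filter -- its DFT has exact zeros, so those
# spatial frequencies of the moving object are lost.
box = np.zeros(N); box[:32] = 1.0

# Flutter shutter (Raskar et al. 2006): open/close the shutter with a binary
# code during one exposure.  A pseudorandom stand-in, not the published code.
rng = np.random.default_rng(0)
flutter = rng.integers(0, 2, N).astype(float)
flutter[0] = 1.0  # keep the code non-degenerate

# Invertible motion blur in video (Agrawal, Xu, Raskar 2009): successive frames
# use different exposure lengths, so their box-filter nulls fall at different
# frequencies and the *joint* spectrum has no exact zeros.
box_a = np.zeros(N); box_a[:32] = 1.0
box_b = np.zeros(N); box_b[:27] = 1.0

print("min |DFT|, single box exposure :", spectrum(box).min())      # ~0
print("min |DFT|, flutter shutter     :", spectrum(flutter).min())  # > 0
joint = np.maximum(spectrum(box_a), spectrum(box_b))  # best frame per frequency
print("min |DFT|, two varying boxes   :", joint.min())               # > 0
```

In the talk, the differing per-frame exposure lengths come from the camera's auto-exposure variation (slide 9) and the frames are deblurred jointly; the sketch above only illustrates the spectral-null argument.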
    9. Varying Exposure Video: exploit the camera's auto-exposure mode.
    10. Completely automatic: (i) segmentation, (ii) PSF estimation, (iii) deblurring. (Input: blurred photo.)
    11. Deblurred result vs. ground truth, with the input photos.
    12. Computational Photography [Raskar and Tumblin]
      • Computational Imaging vs. Computational Photography: synthesis; minimal change of hardware; goals are often opposite (human perception)
      • Epsilon Photography: low-level vision (pixels); the 'ultimate camera'
      • Coded Photography: mid-level cues (regions, edges, motion, direct/global); 'scene analysis'
      • Essence Photography: high-level understanding; a 'new artform'
      Computational photography captures a machine-readable representation of our world to hyper-realistically synthesize the essence of our visual experience.
    13. Synthesis: from low level through mid level to high level, and from raw toward hyper-realism (angle/spectrum aware; non-visual data, GPS; metadata; priors; comprehensive 8D reflectance field). Computational Photography (vs. Imaging) spans Digital, Epsilon, Coded, and Essence photography and aims to make progress on both axes. Examples: camera arrays; HDR; field of view; focal stacks; decomposition problems (depth, spectrum, light fields); human stereo vision; transient imaging; virtual object insertion; relighting; augmented human experience; material editing from a single photo; scene completion from photos; motion magnification; phototourism; resolution.
    14. Enhanced Defocus Blur: lots of glass; heavy; bulky; expensive.
    15. Image Destabilization: Programmable Defocus using Lens and Sensor Motion. Ankit Mohan, Douglas Lanman, Shinsaku Hiura, Ramesh Raskar, MIT Media Lab, Camera Culture.
    16. Image Destabilization: static scene, lens, sensor, camera.
    17. Image Destabilization: static scene, lens motion, sensor motion, camera. Mohan, Lanman, Hiura, Raskar, ICCP 2009.
    18-21. Shifting Pinhole and Sensor (build): scene points A and B at depths d_a and d_b are imaged to A' and B'; the pinhole translates with velocity v_p and the sensor with velocity v_s at distance d_s behind it; the successive slides step through which plane the moving pair keeps in focus ("focus here").
    22. "Time Lens": ratio of speeds; lens equation; virtual focal length; virtual f-number.
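The transcript drops the equations on this slide. As a hedged sketch of where they plausibly come from (my own derivation from the pinhole-and-sensor geometry on the preceding slides, using the slide's symbols v_p, v_s, d_s and writing T for the exposure time; the slide's exact expressions may differ):

```latex
% A point at depth d in front of the moving pinhole has its image translate at
%   \dot{x}_i = v_p (1 + d_s / d);
% it stays sharp over the exposure iff the image tracks the sensor, \dot{x}_i = v_s.
\begin{align*}
  d^{\ast} &= \frac{d_s}{v_s / v_p - 1}
  &&\text{(plane kept in focus, set by the ratio of speeds)}\\[4pt]
  \frac{1}{d^{\ast}} + \frac{1}{d_s} = \frac{1}{f_{\mathrm{virt}}}
  \;\Rightarrow\;
  f_{\mathrm{virt}} &= d_s\,\frac{v_p}{v_s}
  &&\text{(virtual focal length)}\\[4pt]
  N_{\mathrm{virt}} &\approx \frac{f_{\mathrm{virt}}}{v_p T} = \frac{d_s}{v_s T}
  &&\text{(virtual f-number; aperture = pinhole travel } v_p T\text{)}
\end{align*}
```

The point matching the slide title is that the focal properties of this "time lens" are set by the ratio of speeds rather than by glass.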
    23. Time Lens.
    24. Adjusting the Focus Plane: all-in-focus image.
    25. Adjusting the Focus Plane: focused in the front using destabilization.
    26. Adjusting the Focus Plane: focused in the middle using destabilization.
    27. Adjusting the Focus Plane: focused in the back using destabilization.
    28. Bokode
    29. Long Distance Barcodes: smart barcode, size 3 mm × 3 mm; read by an ordinary camera from a distance of 3 meters.
    31. Defocus blur of a Bokode.
    32. Coding in Angle. Mohan, Woo, Smithwick, Hiura, Raskar [Siggraph 2009].
    33. Encoding in angle, not space, time, or wavelength: Bokode (angle) → camera → sensor.
    34. 'Circle of confusion → circle of information' (quote suggested by Kurt Akeley).
    35. 'Long-distance microscope': magnification = f_c / f_b (as in a microscope); focus always at infinity. (Bokode lenslet focal length f_b, camera focal length f_c.)
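A quick numeric check of the f_c/f_b relation (made-up example values, not from the paper): because the Bokode pattern sits at the focal plane of its lenslet, a feature at offset x leaves as a collimated bundle at angle roughly x/f_b, and a camera focused at infinity maps that angle to a sensor position of about f_c·x/f_b, so the magnification is f_c/f_b regardless of how far away the camera is.

```python
# Illustrative numbers only (not taken from the Bokode paper).
f_b = 0.005    # Bokode lenslet focal length: 5 mm
f_c = 0.050    # camera focal length: 50 mm, camera focused at infinity
x   = 100e-6   # pattern feature 100 um off-axis, at the lenslet's focal plane

theta    = x / f_b          # exit angle of the collimated bundle (small-angle approx.)
x_sensor = f_c * theta      # landing position on the sensor, independent of distance
print(x_sensor / x)         # magnification = f_c / f_b = 10.0
```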
    36. Product labels; street-view tagging.
    37. Capturing Bokodes: a cell-phone camera held close to the Bokode reads 10,000+ bytes of data.
    38. Augmenting the Plenoptic Function. Wigner Distribution Function: wave-optics based, rigorous but cumbersome; handles interference, diffraction, and interaction with optical elements. Traditional light field: ray-optics based, simple and powerful. The Augmented LF aims to combine the two.
    39. Light Fields. Goal: representing propagation, interaction, and image formation of light using purely position and angle parameters (reference plane: position, angle). Pipeline: LF propagation → (diffractive) optical element acting as a light field transformer → LF propagation.
    40. Free-space propagation; light field transformer; virtual light projector; possibly negative radiance.
    41. LF Transformer: maps an input LF to an output LF. General case: 8D; thin elements: 6D; angle-shift invariant: 4D.
    42. Augmented LF framework: (1) LF propagation, (2) light field transformer at the (diffractive) optical element, (3) negative radiance, (4) interference. Tech report, S. B. Oh et al., http://web.media.mit.edu/~raskar/RayWavefront/
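To make the "LF propagation → light field transformer → LF propagation" pipeline concrete, here is a minimal flatland ray-space sketch. It covers only the ordinary, non-diffractive case: free-space propagation is a shear in (position, angle), and an ideal thin lens is one example of a light field transformer. The augmented formulation in the tech report additionally allows negative radiance to encode diffraction and interference, which this toy does not attempt.

```python
import numpy as np

# Toy flatland light field: each ray is (x, u) -- position on a reference plane
# and paraxial angle.  Radiance just rides along; only the coordinates transform.
rng = np.random.default_rng(1)
x0 = rng.uniform(-1e-3, 1e-3, 1000)   # positions (m)
u0 = rng.uniform(-0.05, 0.05, 1000)   # angles (rad)

def propagate(x, u, z):
    """Free-space LF propagation by distance z: a shear in ray space."""
    return x + z * u, u

def thin_lens(x, u, f):
    """Ideal thin lens as a (non-diffractive) light field transformer."""
    return x, u - x / f

# Example pipeline: propagate 10 cm, pass through an f = 50 mm lens,
# then propagate to the lens's back focal plane.
x1, u1 = propagate(x0, u0, 0.10)
x2, u2 = thin_lens(x1, u1, 0.05)
x3, u3 = propagate(x2, u2, 0.05)

# Fourier-plane property: position at the back focal plane depends only on the
# incoming angle (x3 = f * u0), regardless of where each ray started.
print(np.allclose(x3, 0.05 * u0))   # True
```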
    43. Cubic phase plate: pure ray bending, positive-only radiance; diffraction, interference of +/- rays; rotating PSF.
    44. Interference received on complex geometry.
    45. Can you 'see' around a corner?
    46. Femto-Photography: a higher-dimensional LF. FemtoFlash, ultrafast detector, computational optics, serious sync.
    48. International Conference on Computational Photography (ICCP), March 29-30, 2010, MIT, Cambridge MA. http://cameraculture.media.mit.edu/iccp10/
      Important dates: submission November 2, 2009; notification February 2, 2010.
      Topics: computational cameras; multiple images and camera arrays; computational illumination; advanced image and video processing; scientific photography and videography; organizing and exploiting photo & video collections.
      Program chairs: Kyros Kutulakos, U. Toronto; Rafael Piestun, U. Colorado; Ramesh Raskar, MIT.
    49. Beyond Multi-touch: mobile, laptops.
    50. BiDi Screen: converting an LCD screen into a large camera for 3D interactive HCI and video conferencing. Matthew Hirsch, Henry Holtzman, Doug Lanman, Ramesh Raskar, Siggraph Asia 2009.
    51. Overview: sensing depth from an array of virtual cameras in the LCD.
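The speaker notes mention views from virtual cameras, a synthetically refocused image, and a depth map derived from it. Below is a generic sketch of that last step: plain shift-and-add refocusing over a grid of views plus a local-contrast focus measure. The array geometry, the helper names, and the use of shift-and-add are my assumptions for illustration, not the BiDi Screen implementation.

```python
import numpy as np
from scipy.ndimage import convolve, shift as nd_shift

def refocus(views, baselines, d):
    """Shift-and-add refocus at disparity d.
    views: dict {(i, j): HxW float image}; baselines: {(i, j): (bx, by)},
    each view's offset in pixels per unit disparity, relative to the center view."""
    acc = np.zeros_like(next(iter(views.values())))
    for key, img in views.items():
        bx, by = baselines[key]
        acc += nd_shift(img, (d * by, d * bx), order=1, mode="nearest")
    return acc / len(views)

def depth_from_refocus(views, baselines, disparities, win=7):
    """Per-pixel disparity (a proxy for depth) that maximizes local contrast
    of the refocused image."""
    box = np.ones((win, win)) / (win * win)
    best_score, best_d = None, None
    for d in disparities:
        im = refocus(views, baselines, d)
        mean = convolve(im, box, mode="nearest")
        contrast = convolve((im - mean) ** 2, box, mode="nearest")
        if best_score is None:
            best_score, best_d = contrast, np.full(im.shape, d, dtype=float)
        else:
            better = contrast > best_score
            best_score = np.where(better, contrast, best_score)
            best_d = np.where(better, d, best_d)
    return best_d
```

The two functions correspond to the two artifacts described in the note: the synthetically refocused image and the depth map derived from it.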
    52. Summary. Beyond traditional imaging: invertible motion blur in video; looking around a corner; LCDs as virtual cameras; computational probes (Bokode); image destabilization. Augmented light field: rays for diffraction and interference. Computational Photography (Digital, Epsilon, Coded, Essence) aims to make progress on both axes. Camera Culture Group, MIT Media Lab, Ramesh Raskar, http://raskar.info
