Raskar Next Billion Cameras Siggraph 2009

http://raskar.scripts.mit.edu/~raskar/nextbillioncameras/

Siggraph 2009 Course with Alyosha Efros, Ramesh Raskar, and Steve Seitz

Notes

  • Introduce gist.
  • Four blocks: light, optics, sensors, processing (plus display: a light-sensitive display).
  • Inference and perception are important. The intent and goal of the photo are important. The same way the camera put photorealistic art out of business, maybe this new art form will put the traditional camera out of business, because we won't really care about a photo as merely a recording of light, but as a form that captures a meaningful subset of the visual experience. Multi-perspective photos; Photosynth is an example.
  • Many think all this is for CP
  • A stereo pair is a simple example of coded photography. Many decomposition problems: direct/global, diffuse/specular, etc.
  • 1:15
  • Comparisons
  • SIGGRAPH 2008 class: Computational Photography; Debevec: Illumination as Computing / Scene & Performance Capture, August 2008. Nayar et al. used high-frequency illumination patterns to quickly separate “direct” and “global” components. Basically, the global components stay the same as you phase-shift high-frequency illumination on the scene, while the direct components appear and disappear at different pixels. Taking the minimum value over a sequence of phase shifts yields the global component multiplied by the fill ratio of the patterns; the maximum minus the minimum yields the direct component (a minimal sketch follows these notes).
  • Maybe all the consumer photographer wants is a black box with a big red button. No optics, sensors, or flash. If I am standing in the middle of Times Square and I need to take a photo, do I really need a fancy camera?
  • The camera can trawl Flickr and retrieve a photo taken at roughly the same position and time of day. Maybe all the consumer wants is a blind camera.
  • Reversibly encode all the information in this otherwise blurred photo
  • The glint out of focus shows the unusual pattern.
  • Since we are adapting LCD technology we can fit a BiDi screen into laptops and mobile devices.
  • Recall that one of our inspirations was this new class of optical multi-touch device. At the top you can see a prototype that Sharp Microelectronics has published. These devices are basically arrays of naked phototransistors. Like a document scanner, they are able to capture a sharp image of objects in contact with the surface of the screen. But as objects move away from the screen, without any focusing optics, the images captured by this device are blurred.
  • Our observation is that by moving the sensor plane a small distance from the LCD in an optical multi-touch device, we enable mask-based light-field capture. We use the LCD screen to display the desired masks, multiplexing between images displayed for the user and masks displayed to create a virtual camera array. I’ll explain more about the virtual camera array in a moment, but suffice it to say that once we have measurements from the array we can extract depth.
  • This device would of course support multi-touch on-screen interaction, but because it can measure the distance to objects in the scene a user’s hands can be tracked in a volume in front of the screen, without gloves or other fiducials.
  • Thus the ideal BiDi screen consists of a normal LCD panel separated by a small distance from a bare sensor array. This format creates a single device that spatially collocates a display and capture surface.
  • So here is a preview of our quantitative results. I’ll explain this in more detail later on, but you can see we’re able to accurately distinguish the depth of a set of resolution targets. We show above a portion of the views from our virtual cameras, a synthetically refocused image, and the depth map derived from it.
  • CPUs and computers don’t mimic the human brain, and robots don’t mimic human activities. Should the hardware for visual computing, that is, cameras and capture devices, mimic the human eye? Even if we decide to use a successful biological vision system as a basis, we have a range of choices: from single-chambered to compound eyes, from shadow-based to refractive to reflective optics. So the goal of my group at the Media Lab is to explore new designs and develop software algorithms that exploit these designs.
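The Nayar et al. separation described in the notes above reduces to a per-pixel min/max over the phase-shifted stack. Below is a minimal sketch of that computation (not the authors' code), assuming an ideal high-frequency pattern with a known fill ratio and a registered image stack; the function name and array layout are illustrative.

```python
import numpy as np

def separate_direct_global(stack, fill_ratio=0.5):
    """Direct/global separation from phase-shifted high-frequency
    illumination (after Nayar et al. 2006).

    stack: (N, H, W) images, one per phase shift of the pattern.
    fill_ratio: fraction of the scene lit at any instant (0.5 for a
    checkerboard). Idealized: ignores pattern contrast and noise.
    """
    lmax = stack.max(axis=0)  # lit at least once: direct + fill * global
    lmin = stack.min(axis=0)  # always in pattern shadow: fill * global only
    direct = lmax - lmin
    global_ = lmin / fill_ratio
    return direct, global_
```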

Transcript

  • 1. Camera Culture Ramesh Raskar Alyosha Efros Ramesh Raskar Steve Seitz Siggraph 2009 Curated Course Next Billion Cameras http://raskar.scripts.mit.edu/nextbillioncameras/
  • 2.
    • A. Introduction: 5 minutes
    • B. Cameras of the future (Raskar, 30 minutes) * Form factors, Modalities and Interaction * Enabling Visual Social Computing
    • C. Reconstructing the World (Seitz, 30 minutes) * Photo tourism and beyond * Image-based modeling and rendering on a massive scale * Scene summarization
    • D. Understanding a Billion Photos (Efros, 30 minutes) * What will the photos depict? * Photos as visual content for computer graphics * Solving computer vision
    • E. Discussion: 10 minutes
    Next Billion Cameras
  • 3.  
  • 4. Alexei (Alyosha) Efros [CMU]
    • Assistant professor at the Robotics Institute and the Computer Science Department at Carnegie Mellon University.
    • His research is in the area of computer vision and computer graphics, especially at the intersection of the two. He is particularly interested in using data-driven techniques to tackle problems which are very hard to model parametrically but where large quantities of data are readily available. Alyosha received his PhD in 2003 from UC Berkeley and spent the following year as a post-doctoral fellow in Oxford, England. Alyosha is a recipient of the NSF CAREER award (2006), the Sloan Fellowship (2008), the Guggenheim Fellowship (2008), and the Okawa Grant (2008).
    • http://www.cs.cmu.edu/~efros/
  • 5. Ramesh Raskar [MIT]
    • Associate Professor at the MIT Media Lab and heads the Camera Culture research group.
    • The group focuses on creating a new class of imaging platforms to better capture and share the visual experience. This research involves developing novel cameras with unusual optical elements, programmable illumination, digital wavelength control, and femtosecond analysis of light transport, as well as tools to decompose pixels into perceptually meaningful components.
    • Raskar is a recipient of the Alfred P. Sloan Research Fellowship (2009), the TR100 Award (2004), and the Global Indus Technovator Award (2003). He holds 35 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring, with Jack Tumblin, a book on computational photography.
    • http://www.media.mit.edu/~raskar
  • 6. Steve Seitz [U-Washington]
    • Professor in the Department of Computer Science and Engineering at the University of Washington.
    • He received his Ph.D. in computer sciences from the University of Wisconsin–Madison in 1997. He was twice awarded the David Marr Prize for the best paper at the International Conference on Computer Vision, and has received an NSF CAREER Award, an ONR Young Investigator Award, and an Alfred P. Sloan Fellowship. His work on Photo Tourism (joint with Noah Snavely and Rick Szeliski) formed the basis of Microsoft's Photosynth technology. Professor Seitz is interested in problems in computer vision and computer graphics. His current research focuses on capturing the structure, appearance, and behavior of the real world from digital imagery.
    • http://www.cs.washington.edu/homes/seitz/
  • 7. Where are the ‘cameras’?
  • 8. Where are the ‘cameras’?
  • 9.  
  • 10. Camera Culture Ramesh Raskar Alyosha Efros Ramesh Raskar Steve Seitz Siggraph 2009 Course Next Billion Cameras http://raskar.info/photo/
  • 11. Camera Culture Ramesh Raskar Alyosha Efros Ramesh Raskar Steve Seitz Siggraph 2009 Course Next 100 Billion Cameras http://raskar.info/photo/
  • 12. Key Message
    • Cameras will not look like anything we have today
      • Emerging optics, illumination, novel sensors
    • Visual Experience will differ from viewfinder
      • Photos will be ‘computed’
      • Remarkable post-capture control
      • Crowdsource the photo collection
      • Exploit priors and online collections
    • Visual Essence will dominate
      • Superior Metadata tagging for effective sharing
      • Fusion with non-visual data
  • 13. Can you look around a corner ?
  • 14. Can you decode a 5 micron feature from 3 meters away with an ordinary camera ?
  • 15. Convert an LCD into a big flat camera? Beyond Multi-touch
  • 16. Pantheon
  • 17. How do we move through a space?
  • 18. What is ‘interesting’ here?
  • 19. Record what you ‘feel’ not what you ‘see’
  • 20.  
  • 21.  
  • 22. Camera Culture Ramesh Raskar Ramesh Raskar Camera Culture http://raskar.scripts.mit.edu/nextbillioncameras/
  • 23. “Visual Social Computing”
    • Social Computing (SoCo)
      • Computing
      • by the people,
      • for the people,
      • of the people
    • Visual SoCo
      • Participatory, Collaborative
      • Visual semantics
      • http://raskar.scripts.mit.edu/nextbillioncameras
  • 24. Crowdsourcing (http://www.wired.com/wired/archive/14.06/crowds.html): object recognition, fakes, template matching; Amazon Mechanical Turk: Steve Fossett search; reCAPTCHA = OCR
  • 25. Participatory Urban Sensing
    • Deborah Estrin et al
    • Static/semi-dynamic/dynamic data
    • A. City Maintenance
      • Side Walks
    • B. Pollution
    • Sensor network
    • C. Diet, Offenders
      • Graffiti
      • Bicycle on sidewalk
    • Future ..
    Citizen Surveillance, Health Monitoring
    http://research.cens.ucla.edu/areas/2007/Urban_Sensing/ (Erin Brockovich)
  • 26. Community Photo Collections U of Washington/Microsoft: Photosynth
  • 27. Beyond Visible Spectrum Cedip RedShift
  • 28. Trust in Images From Hany Farid
  • 29. Trust in Images From Hany Farid LA Times March’03
  • 30. Cameras in Developing Countries http://news.bbc.co.uk/2/hi/south_asia/7147796.stm Community news program run by village women
  • 31. Vision through the tongue http://www.pbs.org/kcet/wiredscience/story/97-mixed_feelings.html Solutions for the Visually Challenged http://www.seeingwithsound.com/
  • 32. New Topics in Imaging Research
    • Imaging Devices, Modern Optics and Lenses
    • Emerging Sensor Technologies
    • Mobile Photography
    • Visual Social Computing and Citizen Journalism
    • Imaging Beyond Visible Spectrum
    • Computational Imaging in Sciences (Medical)
    • Trust in Visual Media
    • Solutions for Visually Challenged
    • Cameras in Developing Countries
      • Social Stability, Commerce and Governance
    • Future Products and Business Models
  • 33. Traditional Photography: lens, detector, pixels, image. Mimics the human eye for a single snapshot: single view, single instant, fixed dynamic range and depth of field, for given illumination, in a static world. Courtesy: Shree Nayar
  • 34. Computational Photography. Computational Illumination: light sources, modulators, 4D incident lighting; the scene acts as an 8D ray modulator. Computational Camera: generalized optics (4D ray bender), generalized sensor (up-to-4D ray sampler), processing (ray reconstruction). Display: generalized optics to recreate the 4D light field.
  • 35. Computational Photography [Raskar and Tumblin]
    • Epsilon Photography
      • Low-level vision: Pixels
      • Multi-photos by perturbing camera parameters
      • HDR, panorama, …
      • ‘Ultimate camera’
    • Coded Photography
      • Mid-Level Cues:
        • Regions, Edges, Motion, Direct/global
      • Single/few snapshot
        • Reversible encoding of data
      • Additional sensors/optics/illum
      • ‘Scene analysis’
    • Essence Photography
      • High-level understanding
        • Not mimic human eye
        • Beyond single view/illum
      • ‘New art form’
    captures a machine-readable representation of our world to hyper-realistically synthesize the essence of our visual experience.
  • 36. Goal and Experience. Axes: low level, mid level, high level; hyper-realism. Labels: raw; angle, spectrum aware; non-visual data, GPS; metadata; priors; comprehensive 8D reflectance field. Digital, Epsilon, Coded, Essence: Computational Photography aims to make progress on both axes. Examples: camera array; HDR, FoV; focal stack; decomposition problems; depth; spectrum; light fields; human stereo vision; transient imaging; virtual object insertion; relighting; augmented human experience; material editing from a single photo; scene completion from photos; motion magnification; photo tourism.
  • 37. 2nd International Conference on Computational Photography. Papers due November 2, 2009. http://cameraculture.media.mit.edu/iccp10
  • 38.
    • Ramesh Raskar and Jack Tumblin
    • Book Publishers: A K Peters
    • Siggraph 2009 booth: 20% off
    • Booth #2527
    • ComputationalPhotography.org
    • Meet the Authors
    • Thursday at 2pm-2:30pm
  • 39. Computational Photography [Raskar and Tumblin]
    • Epsilon Photography
      • Low-level vision: Pixels
      • Multi-photos by perturbing camera parameters
      • HDR, panorama, …
      • ‘Ultimate camera’
    • Coded Photography
      • Single/few snapshot
      • Reversible encoding of data
      • Additional sensors/optics/illum
      • ‘Scene analysis’: (Consumer software?)
    • Essence Photography
      • Beyond single view/illum
      • Not mimic human eye
      • ‘New art form’
  • 40. Epsilon Photography
    • Dynamic range
      • Exposure bracketing [Mann-Picard, Debevec]
    • Wider FoV
      • Stitching a panorama
    • Depth of field
      • Fusion of photos with limited DoF [Agrawala04]
    • Noise
      • Flash/no-flash image pairs
    • Frame rate
      • Triggering multiple cameras [Wilburn04]
  • 41. Dynamic Range. Goal: high dynamic range, from short and long exposures.
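Slide 41's goal, high dynamic range from short and long exposures, reduces to a weighted average of per-exposure radiance estimates once the sensor response is known. A minimal sketch, assuming linear (RAW-like) images instead of recovering the response curve as Debevec and Malik do; the hat-shaped weight and function name are illustrative choices.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linear bracketed exposures into one HDR radiance map.

    images: list of (H, W) float arrays scaled to [0, 1].
    exposure_times: shutter times in seconds, one per image.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # trust well-exposed pixels most
        num += w * (img / t)               # linear sensor: radiance ~ value/time
        den += w
    return num / np.maximum(den, 1e-8)
```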
  • 42. Epsilon Photography
    • Dynamic range
      • Exposure bracketing [Mann-Picard, Debevec]
    • Wider FoV
      • Stitching a panorama
    • Depth of field
      • Fusion of photos with limited DoF [Agrawala04]
    • Noise
      • Flash/no-flash image pairs [Petschnigg04, Eisemann04] (see the sketch after this list)
    • Frame rate
      • Triggering multiple cameras [Wilburn05, Shechtman02]
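For the flash/no-flash item above, the core of [Petschnigg04, Eisemann04] is a joint (cross) bilateral filter: smooth the noisy no-flash image while taking edge-stopping range weights from the clean flash image. A minimal grayscale sketch with illustrative parameters; the papers add detail transfer and shadow/specularity handling omitted here.

```python
import numpy as np

def joint_bilateral(noflash, flash, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Denoise `noflash` using edges from `flash`; both (H, W) floats in [0, 1]."""
    acc = np.zeros_like(noflash)
    wsum = np.zeros_like(noflash)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            spatial = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
            shifted_fl = np.roll(flash, (dy, dx), axis=(0, 1))
            range_w = np.exp(-(flash - shifted_fl) ** 2 / (2 * sigma_r ** 2))
            w = spatial * range_w  # edges in the flash image stop the smoothing
            acc += w * np.roll(noflash, (dy, dx), axis=(0, 1))
            wsum += w
    return acc / wsum
```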
  • 43. Computational Photography
    • Epsilon Photography
      • Low-level Vision: Pixels
      • Multiphotos by perturbing camera parameters
      • HDR, panorama
      • ‘Ultimate camera’
    • Coded Photography
      • Mid-Level Cues:
        • Regions, Edges, Motion, Direct/global
      • Single/few snapshot
        • Reversible encoding of data
      • Additional sensors/optics/illum
      • ‘Scene analysis’
    • Essence Photography
      • Not mimic human eye
      • Beyond single view/illum
      • ‘New art form’
  • 44.
    • 3D
      • Stereo of multiple cameras
    • Higher dimensional LF
      • Light Field Capture
        • lenslet array [Adelson92, Ng05], ‘3D lens’ [Georgiev05], heterodyne masks [Veeraraghavan07]
    • Boundaries and Regions
      • Multi-flash camera with shadows [Raskar08]
      • Fg/bg matting [Chuang01,Sun06]
    • Deblurring
      • Engineered PSF
      • Motion: Flutter shutter [Raskar06], Camera Motion [Levin08]
      • Defocus: Coded aperture [Veeraraghavan07, Levin07], Wavefront coding [Cathey95]
    • Global vs direct illumination
      • High frequency illumination [Nayar06]
      • Glare decomposition [Talvala07, Raskar08]
    • Coded Sensor
      • Gradient camera [Tumblin05]
  • 45. Digital Refocusing using a Light Field Camera. 125 μm square-sided microlenses [Ng et al 2005]
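Digital refocusing from a captured light field is, at heart, shift-and-add: translate each sub-aperture view in proportion to its offset from the aperture center, then average. A minimal sketch assuming a (U, V, H, W) array of views and integer-pixel shifts; Ng et al.'s renderer does proper 4D resampling.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing. lightfield: (U, V, H, W) sub-aperture views;
    alpha: refocus parameter (0 leaves the captured focal plane unchanged)."""
    U, V, H, W = lightfield.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - uc)))  # shift grows with aperture offset
            dv = int(round(alpha * (v - vc)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```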
  • 46.
    • 3D
      • Stereo of multiple cameras
    • Higher dimensional LF
      • Light Field Capture
        • lenslet array [Adelson92, Ng05], ‘3D lens’ [Georgiev05], heterodyne masks [Veeraraghavan07]
    • Boundaries and Regions
      • Multi-flash camera with shadows [Raskar08]
      • Fg/bg matting [Chuang01,Sun06]
    • Deblurring
      • Engineered PSF
      • Motion: Flutter shutter [Raskar06], Camera Motion [Levin08]
      • Defocus: Coded aperture [Veeraraghavan07, Levin07], Wavefront coding [Cathey95]
    • Global vs direct illumination
      • High frequency illumination [Nayar06]
      • Glare decomposition [Talvala07, Raskar08]
    • Coded Sensor
      • Gradient camera [Tumblin05]
  • 47. Multi-flash Camera for Detecting Depth Edges
  • 48. Depth Edges: flash images from left, top, right, and bottom; resulting depth edges vs. Canny edges.
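The depth-edge computation behind this slide can be sketched compactly: in each ratio image I_k / I_max, a depth edge appears as a sharp drop into the shadow cast by the flash displaced in a known direction. The following is a simplified illustration (fixed threshold, axis-aligned steps rather than true epipolar traversal), not the published algorithm.

```python
import numpy as np

def depth_edges(flash_images, directions, thresh=0.3):
    """flash_images: dict name -> (H, W) image lit by one displaced flash.
    directions: dict name -> (dy, dx) unit step toward that flash's shadow side,
    e.g. {"left": (0, 1), "top": (1, 0), "right": (0, -1), "bottom": (-1, 0)}."""
    imax = np.maximum.reduce(list(flash_images.values()))
    edges = np.zeros(imax.shape, dtype=bool)
    for name, img in flash_images.items():
        ratio = img / np.maximum(imax, 1e-8)  # ~1 when lit, small in shadow
        dy, dx = directions[name]
        ahead = np.roll(ratio, (-dy, -dx), axis=(0, 1))
        edges |= (ratio - ahead) > thresh     # sharp drop into the shadow
    return edges
```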
  • 49.
    • 3D
      • Stereo of multiple cameras
    • Higher dimensional LF
      • Light Field Capture
        • lenslet array [Adelson92, Ng05], ‘3D lens’ [Georgiev05], heterodyne masks [Veeraraghavan07]
    • Boundaries and Regions
      • Multi-flash camera with shadows [Raskar08]
      • Fg/bg matting [Chuang01,Sun06]
    • Deblurring
      • Engineered PSF
      • Motion: Flutter shutter [Raskar06], Camera Motion [Levin08]
      • Defocus: Coded aperture [Veeraraghavan07, Levin07], Wavefront coding [Cathey95]
    • Global vs direct illumination
      • High frequency illumination [Nayar06]
      • Glare decomposition [Talvala07, Raskar08]
    • Coded Sensor
      • Gradient camera [Tumblin05]
  • 50. Flutter Shutter Camera. Raskar, Agrawal, Tumblin [Siggraph 2006]. LCD opacity switched in a coded sequence.
  • 51. Traditional vs. coded exposure: image of a static object and the deblurred image for each.
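The coded ("fluttered") exposure works because a broadband open/closed sequence turns motion blur into an invertible convolution, where a plain box blur is nearly singular. A minimal 1-D sketch deblurring one scanline along the motion direction by least squares; the code and function name are illustrative.

```python
import numpy as np

def deblur_flutter(blurred_row, code):
    """blurred_row: (N,) observed scanline along the motion direction.
    code: binary shutter open/closed sequence (the blur kernel)."""
    k = np.asarray(code, dtype=float)
    k /= k.sum()
    n = len(blurred_row) - len(k) + 1    # length of the sharp signal
    A = np.zeros((len(blurred_row), n))  # Toeplitz convolution matrix
    for i, ki in enumerate(k):
        A[i:i + n, :] += ki * np.eye(n)
    sharp, *_ = np.linalg.lstsq(A, blurred_row, rcond=None)
    return sharp
```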
  • 52.
    • 3D
      • Stereo of multiple cameras
    • Higher dimensional LF
      • Light Field Capture
        • lenslet array [Adelson92, Ng05], ‘3D lens’ [Georgiev05], heterodyne masks [Veeraraghavan07]
    • Boundaries and Regions
      • Multi-flash camera with shadows [Raskar08]
      • Fg/bg matting [Chuang01,Sun06]
    • Deblurring
      • Engineered PSF
      • Motion: Flutter shutter [Raskar06], Camera Motion [Levin08]
      • Defocus: Coded aperture [Veeraraghavan07, Levin07], Wavefront coding [Cathey95]
    • Decomposition Problems
      • High frequency illumination, Global/direct illumination [Nayar06]
      • Glare decomposition [Talvala07, Raskar08]
    • Coded Sensor
      • Gradient camera [Tumblin05]
  • 53. "Fast Separation of Direct and Global Components of a Scene using High Frequency Illumination," S.K. Nayar, G. Krishnan, M. D. Grossberg, R. Raskar, ACM Trans. on Graphics (also Proc. of ACM SIGGRAPH), Jul, 2006.
  • 54. Computational Photography [Raskar and Tumblin]
    • Epsilon Photography
      • Multiphotos by varying camera parameters
      • HDR, panorama
      • ‘Ultimate camera’: (Photo-editor)
    • Coded Photography
      • Single/few snapshot
      • Reversible encoding of data
      • Additional sensors/optics/illum
      • ‘Scene analysis’: (Next software?)
    • Essence Photography
      • High-level understanding
        • Not mimic human eye
        • Beyond single view/illum
      • ‘New art form’
  • 55.  
  • 56. Blind Camera. Sascha Pohflepp, Berlin University of the Arts, 2006
  • 57. Capturing the Essence of Visual Experience
      • Exploiting online collections
        • Photo-tourism [Snavely2006]
        • Scene Completion [Hays2007]
      • Multi-perspective Images
        • Multi-linear Perspective [Jingyi Yu, McMillan 2004]
        • Unwrap Mosaics [Rav-Acha et al 2008]
        • Video texture panoramas [Agarwala et al 2005]
      • Non-photorealistic synthesis
        • Motion magnification [Liu05]
      • Image Priors
        • Learned features and natural statistics
        • Face Swapping: [Bitouk et al 2008]
        • Data-driven enhancement of facial attractiveness [Leyvand et al 2008]
        • Deblurring [Fergus et al 2006, Several 2008 and 2009 papers]
  • 58. Scene Completion Using Millions of Photographs Hays and Efros, Siggraph 2007
  • 59. Community Photo Collections U of Washington/Microsoft: Photosynth
  • 60. Can you look around a corner ?
  • 61. Can you look around a corner ? Kirmani, Hutchinson, Davis, Raskar 2009 Accepted for ICCV’2009, Oct 2009 in Kyoto Impulse Response of a Scene
  • 62. Femtosecond laser as light source; picosecond detector array as camera
  • 63. Coded Aperture Camera. The aperture of a 100 mm lens is modified: a coded mask with a chosen binary pattern is inserted. The rest of the camera is unmodified.
  • 64. In Focus Photo LED
  • 65. Out of Focus Photo: Open Aperture
  • 66. Out of Focus Photo: Coded Aperture
  • 67. Captured Blurred Photo
  • 68. Refocused on Person
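Refocusing here amounts to deconvolving the captured photo with the defocus PSF that the coded aperture produces at the chosen depth; the broadband mask keeps that PSF's spectrum free of zeros, so the inversion is stable. A minimal frequency-domain Wiener sketch under that assumption (known PSF, scalar SNR prior), not the published pipeline.

```python
import numpy as np

def wiener_deblur(image, psf, snr=100.0):
    """Deblur `image` (H, W) with a known coded-aperture defocus `psf`."""
    psf_pad = np.zeros_like(image)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf / psf.sum()
    # Center the kernel at the origin so the output is not shifted.
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    F_psf = np.fft.fft2(psf_pad)
    F_img = np.fft.fft2(image)
    wiener = np.conj(F_psf) / (np.abs(F_psf) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(wiener * F_img))
```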
  • 69.
    • Smart barcode size: 3 mm × 3 mm
    • Ordinary camera: distance 3 meters
    Computational Probes: Long-Distance Barcodes. Mohan, Woo, Smithwick, Hiura, Raskar. Accepted as a Siggraph 2009 paper
  • 70. Bokode
  • 71. Barcodes: markers that assist machines in understanding the real world
  • 72. Bokode: ankit mohan, grace woo, shinsaku hiura, quinn smithwick, ramesh raskar camera culture group, MIT media lab imperceptible visual tags for camera based interaction from a distance
  • 73. Defocus blur of Bokode
  • 74. Image greatly magnified. Simplified Ray Diagram
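The simplified ray diagram explains the magnification: with the bokode pattern at the focal plane of its tiny lenslet, each pattern point leaves as a collimated beam in a unique direction, and a camera focused at infinity maps each direction back to a sensor position. Magnification is then roughly f_camera / f_bokode, independent of distance. A back-of-the-envelope check; only the 5 micron feature size comes from the slides, the focal lengths are illustrative.

```python
# Thin-lens sketch of bokode magnification (illustrative numbers).
f_bokode_mm = 5.0    # bokode lenslet focal length (assumed)
f_camera_mm = 100.0  # camera lens, focused at infinity (assumed)
feature_um = 5.0     # pattern feature size, per the "5 micron" slide

# A point offset x at the lenslet's focal plane exits at angle x / f_bokode;
# the camera maps that angle to position f_camera * angle on its sensor.
magnification = f_camera_mm / f_bokode_mm
print(f"on-sensor feature size: {feature_um * magnification:.0f} um")  # ~100 um
```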
  • 75. Our Prototypes
  • 76. street-view tagging
  • 77. BiDi Screen: converting an LCD screen into a large camera for 3D interactive HCI and video conferencing. Matthew Hirsch, Henry Holtzman, Doug Lanman, Ramesh Raskar
  • 78. Beyond Multi-touch: Mobile Devices and Laptops
  • 79. Light-Sensing Pixels in LCD: display with embedded optical sensors (Sharp Microelectronics optical multi-touch prototype)
  • 80. Design Overview: display with embedded optical sensors; the LCD, displaying a mask, sits ~2.5 cm from the optical sensor array; objects at ~50 cm.
  • 81. Beyond Multi-touch: Hover Interaction
    • Seamless transition of multitouch to gesture
    • Thin package, LCD
  • 82. Design Vision: collocated capture and display. Object, spatial light modulator, bare sensor.
  • 83. Touch + Hover using Depth Sensing (LCD + sensor)
  • 84. Overview: Sensing Depth from Array of Virtual Cameras in LCD
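Once the mask-based capture yields an array of virtual views, the depth pipeline sketched on this slide refocuses to several candidate depths and picks, per pixel, the plane where the image is locally sharpest. A minimal sketch with a squared-Laplacian focus measure; the stack layout and names are illustrative, not the authors' implementation.

```python
import numpy as np

def depth_from_refocus(refocused_stack, depths):
    """refocused_stack: list of (H, W) images refocused at candidate depths;
    depths: matching list of depth values. Returns a per-pixel depth map."""
    scores = []
    for img in refocused_stack:
        lap = (4 * img
               - np.roll(img, 1, 0) - np.roll(img, -1, 0)
               - np.roll(img, 1, 1) - np.roll(img, -1, 1))
        scores.append(lap ** 2)                 # sharpness peaks in focus
    best = np.argmax(np.stack(scores), axis=0)  # index of sharpest plane
    return np.asarray(depths)[best]
```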
  • 85.
    • A. Introduction: 5 minutes
    • B. Cameras of the future (Raskar, 30 minutes) * Form factors, Modalities and Interaction * Enabling Visual Social Computing
    • C. Reconstructing the World (Seitz, 30 minutes) * Photo tourism and beyond * Image-based modeling and rendering on a massive scale * Scene summarization
    • D. Understanding a Billion Photos (Efros, 30 minutes) * What will the photos depict? * Photos as visual content for computer graphics * Solving computer vision
    • E. Discussion: 10 minutes
    Next Billion Cameras
  • 86.
    • Visual Social Computing
    • Computational Photography
      • Digital
      • Epsilon
      • Coded
      • Essence
    • Beyond Traditional Imaging
      • Looking around a corner
      • LCDs as virtual cameras
      • Computational probes (bokode)
    Camera Culture Group, MIT Media Lab. Ramesh Raskar. http://raskar.info. Cameras of the Future. (Digital, Epsilon, Coded, Essence: Computational Photography aims to make progress on both axes; chart as on the earlier Goal and Experience slide.)
  • 87. Camera Culture Ramesh Raskar Alyosha Efros Ramesh Raskar Steve Seitz Siggraph 2009 Course Next Billion Cameras http://raskar.info/photo/
  • 88.
    • A. Introduction: 5 minutes
    • B. Cameras of the future (Raskar, 30 minutes) * Form factors, Modalities and Interaction * Enabling Visual Social Computing
    • C. Reconstructing the World (Seitz, 30 minutes) * Photo tourism and beyond * Image-based modeling and rendering on a massive scale * Scene summarization
    • D. Understanding a Billion Photos (Efros, 30 minutes) * What will the photos depict? * Photos as visual content for computer graphics * Solving computer vision
    • E. Discussion: 10 minutes
    Next Billion Cameras
  • 89.
    • Capture
    • Overcome Limitations of Cameras
    • Capture Richer Data
    • Multispectral
    • New Classes of Visual Signals
    • Lightfields, Depth, Direct/Global, Fg/Bg separation
    • Hyperrealistic Synthesis
    • Post-capture Control
    • Impossible Photos
    • Close to Scientific Imaging
    Computational Photography http://raskar.info/photo/
  • 90.
      • http://raskar.scripts.mit.edu/nextbillioncameras
  • 91.  
  • 92. Questions
    • What will a camera look like in 10 or 20 years?
    • How will a billion networked and portable cameras change the social culture?
    • How will online photo collections transform visual social computing?
    • How will movie making and news reporting change?
    • computational-journalism.com
  • 93. Tools for Visual Computing: shadow-based, refractive, and reflective eyes. Fernald, Science [Sept 2006]
  • 94. Cameras and their Impact
    • Beyond Traditional Imaging: analysis and synthesis
      • Emerging optics, illumination, novel sensors
      • Exploit priors and online collections
    • Applications
      • Better scene understanding/analysis
      • Capture visual essence
      • Superior Metadata tagging for effective sharing
      • Fuse non-visual data
    • Impact on Society
      • Beyond entertainment and productivity
      • Sensors for disabled, new art forms, crowdsourcing, bridging cultures, social stability
  • 95. 2nd International Conference on Computational Photography. Papers due November 2, 2009. http://cameraculture.media.mit.edu/iccp10
  • 96.
    • Ramesh Raskar and Jack Tumblin
    • Book Publishers: A K Peters
    • Siggraph 2009 booth: 20% off
    • Booth #2527
    • ComputationalPhotography.org
    • Meet the Authors
    • Thursday at 2pm-2:30pm
  • 97.
    • Visual Social Computing
    • Computational Photography
      • Digital
      • Epsilon
      • Coded
      • Essence
    • Beyond Traditional Imaging
      • Looking around a corner
      • LCDs as virtual cameras
      • Computational probes (bokode)
      • http://raskar.scripts.mit.edu/nextbillioncameras
    Next Billion Cameras. (Digital, Epsilon, Coded, Essence: Computational Photography aims to make progress on both axes; chart as on the earlier Goal and Experience slide.)
  • 98.
    • A. Cameras of the future (Raskar, 30 minutes) * Enabling Visual Social Computing * Computational Photography * Beyond Traditional Imaging
    • B. Reconstructing the World (Seitz, 30 minutes) * Photo tourism and beyond * Image-based modeling and rendering on a massive scale * Scene summarization
    • C. Understanding a Billion Photos (Efros, 30 minutes) * What will the photos depict? * Photos as visual content for computer graphics * Solving computer vision
    Next Billion Cameras
      • http://raskar.scripts.mit.edu/nextbillioncameras
    Course Evaluation (prize: a free mug for each course!): http://www.siggraph.org/courses_evaluation. International Conference on Computational Photography, March 2010; papers due Nov 2, 2009: http://cameraculture.info/iccp10. Book: Computational Photography [Raskar and Tumblin], A K Peters, Booth #2527, 20% coupons here; Meet the Authors, Thu 2pm