02 Fall09 Lecture Sept18web

  • Like wristwatches?
  • http://www.flickr.com/photos/pgoyette/107849943/in/photostream/
  • New techniques try to decrease this distance using a folded-optics approach. The origami lens uses multiple total internal reflections to propagate the bundle of rays.
  • CPUs and computers don’t mimic the human brain, and robots don’t mimic human activities. Should the hardware for visual computing, i.e., cameras and capture devices, mimic the human eye? Even if we decide to use a successful biological vision system as a basis, we have a range of choices: from single-chambered to compound eyes, and from shadow-based to refractive to reflective optics. So the goal of my group at the Media Lab is to explore new designs and develop software algorithms that exploit these designs.
  • The current explosion in information technology has been driven by our ability to control the flow of electrons in a semiconductor in the most intricate ways. Photonic crystals promise to give us similar control over photons, with even greater flexibility, because we have far more control over the properties of photonic crystals than we do over the electronic properties of semiconductors.
  • Changes in the index of refraction of air are made visible by Schlieren Optics. This special optics technique is extremely sensitive to deviations of any kind that cause the light to travel a different path. Clearest results are obtained from flows which are largely two-dimensional and not volumetric. In schlieren photography, the collimated light is focused with a lens, and a knife-edge is placed at the focal point, positioned to block about half the light. In flow of uniform density this will simply make the photograph half as bright. However in flow with density variations the distorted beam focuses imperfectly, and parts which have focussed in an area covered by the knife-edge are blocked. The result is a set of lighter and darker patches corresponding to positive and negative fluid density gradients in the direction normal to the knife-edge.
  • Full-Scale Schlieren Image Reveals The Heat Coming off of a Space Heater, Lamp and Person
  • 4 blocks : light, optics, sensors, processing, (display: light sensitive display)
  • 4 blocks : light, optics, sensors, processing, (display: light sensitive display)
  • But what if the user is not wearing the special clothing? Can we still understand the gestures using a simple camera? The problem is that in a cluttered scene, it is often difficult to do image processing.
  • But what if the user is not wearing the special clothing? Can we still understand the gestures using a simple camera? The problem is that in a cluttered scene, it is often difficult to do image processing.
  • Good afternoon and thank you for attending our talk entitled “Dual Photography”.
  • I will start off by giving you a quick overview of our technique. Suppose you had the scene shown which is being imaged by a camera on the left and is illuminated by a projector on the right. If you took a picture with the camera, here’s what it would look like. You can see the scene is being illuminated from the right, from the position of the projector located off camera. Dual photography allows us to virtually exchange the positions of the camera and the projector, generating this image. This image is synthesized by our technique. We never had a camera in this position. You can see that the technique has captured shadows, refraction, reflection and other global illumination effects.
  • In this talk I will start off by discussing how dual photography works. I will motivate dual photography by applying it to the problem of scene relighting, and show that it can be used to greatly accelerate the acquisition of the data needed. I will then talk about an algorithm we developed to accelerate the acquisition of the light transport needed to perform dual photography, and I will end with some conclusions.
  • Dual photography is based on the principle of Helmholtz reciprocity. Suppose we have a ray leaving the light with intensity I and scattering off the scene towards the eye with a certain attenuation. Let’s call this the primal configuration. In the dual configuration, the positions of the eye and the light are interchanged. Helmholtz reciprocity says that the scattering is symmetric, thus the same ray in the opposite direction will have the same attenuation.
  • This photocell configuration might remind us of other imaging techniques. For example, in the early days of television a similar method was used to create one of the first TV cameras. Known as a “flying-spot” camera, a beam of extremely bright light would scan the scene and a bank of photosensors would measure the reflected light. The values measured at these sensors were immediately sent out via closed circuit to television sets whose electron beam was synchronized with the beam of light, so they drew out the image as it was being measured by the photosensors. This allowed for the creation of a television system that did not have to store “frames” of video. Scanning electron microscopes (and other scanned-beam systems, for that matter) are another related application and can be thought of as employing the principle of dual photography. Thus while some of these applications might not be new, what is new is the framework that establishes dual photography in this manner and gives us insight into possible applications, such as relighting, as we shall see in a moment.
  • Suppose we had the scene shown and we illuminated it with a projector from the left and imaged it with a camera on the right. Now the pixels of the projector and the camera form solid angles in space whose size depends on the resolution of each. Let’s assume a resolution of p x q for the projector and m x n for the camera. What dual photography does is transform the camera into a projector and the projector into a camera. Note that the properties of the new projector (such as position, field-of-view, and resolution) are the same as those of the old camera, and vice versa. We call this the dual configuration. In this work we shall see that it is possible to attain this dual configuration by making measurements only in the original primal configuration. We will do this by measuring the light transport between individual pixels of the projector and individual pixels of the camera. Because the projector pixels and the camera pixels each form a 2D set, this light transport is a 4D function. Now let’s see how we can represent this system with mathematical notation.
  • Fortunately, the superposition of light makes this a linear system and so we can represent this setup with a simple linear equation. We can represent the projected pattern as a column vector of resolution pq x 1 and likewise we can represent the camera as a column vector of resolution mn x 1. This means that the 4-D light transport function that specifies the transport of light from a pixel in the projector to a pixel in the camera can be represented as a matrix of resolution mn x pq.
  • If we put these elements together, we can see that they form a simple linear equation. Here we apply the projected pattern at vector P to the transport matrix, which we call the “T” matrix in the paper, and we get a result at vector C, which is our resulting camera image for that projector pattern. I must mention that if T is properly measured, it will contain all illumination paths from the projector to the camera, including multiple bounces, subsurface scattering, and other global illumination contributions which are often desirable. At this point, let’s gain an intuition as to the composition of T. What does this matrix look like?
  • We can gain insight into this by illuminating patterns at the projector that have only a single pixel turned on as shown here. We can see if we apply this vector at P, it will address a single column of T which will be output at C.
  • This is true for each vector P with a single pixel turned on; they will extract a different column of T.
  • Thus we can see that the columns of T are composed of the images we would take at C with a different pixel turned on at the projector.
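    A minimal numerical sketch of this column-extraction idea, written in MATLAB to match the course assignments; the toy transport matrix and the resolutions below are purely illustrative, not the authors’ code:

        % Toy primal configuration: pq projector pixels, mn camera pixels.
        pq = 6;  mn = 8;
        T  = rand(mn, pq);               % stand-in for the true light transport
        % Illuminate one projector pixel at a time; each camera image is a column of T.
        T_measured = zeros(mn, pq);
        for j = 1:pq
            P = zeros(pq, 1);  P(j) = 1; % single-pixel projector pattern
            C = T * P;                   % simulated camera image for this pattern
            T_measured(:, j) = C;        % that image is exactly the j-th column of T
        end
        assert(norm(T_measured - T) < 1e-12);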
  • So this is the way light flows in our primal configuration…
  • We’re going to put primes on the P and C vectors to indicate that they are in the primal space. So what happens when we go to the dual space and interchange the roles of the camera and the projector? Now light is emitted by the camera and photographed by the projector. This gives us the linear equation shown at the bottom. It is obviously still a linear system, except that now the light leaving the camera is transformed by a new transport matrix (let’s call it T2) and results in an image at the projector. Note that the dimensions of the camera and the projector stay the same, so dimensional analysis indicates that the dimension of T2 is pq x mn. The key observation of dual photography is that the new matrix T2 is related to the original T. We can see this if we look at the transport from a particular pixel of the projector to a particular pixel of the camera. Let’s look at the transport from pixel j of the projector to pixel i of the camera. The transport between this pair is specified by a single element of the T matrix, in this case element Tij. Now let’s look at the same pixels in the dual configuration, with the camera emitting light and the projector capturing it. The pixel of interest in the camera is still pixel i and the pixel of interest in the projector is still pixel j. In this case the element that relates the two is T2ji. Dual photography is made possible by Helmholtz reciprocity, which can be shown to imply that the pixel-to-pixel transport is symmetric: the transport is the same whether the light leaves the projector pixel and goes to the camera pixel, or goes from the camera pixel to the projector pixel. Thus we can write T2ji = Tij, which means that T2 is simply the transpose of the original T.
  • Thus we can define “dual photography” as the process of transposing this transport matrix to generate pictures from the point of view of the projector, as illuminated by the camera. To create a dual image, we must first capture the transport matrix T between the projector and camera in the primal configuration. As I indicated earlier, lighting up individual pixels of the projector extracts single columns of the T matrix, so if we do that for every pixel, T can be acquired in that manner. We shall talk about an acceleration technique later in the talk. Again, dual photography is based only on the fact that the pixel-to-pixel transport is symmetric. We formally prove this in the Appendix of the paper.
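    Continuing the toy example above, a hedged sketch of how a dual image could be synthesized once T has been captured; the reshape layout and the uniform virtual illumination are assumptions for illustration:

        % Projector is q x p pixels, camera is n x m pixels (pq and mn when vectorized).
        p = 3; q = 2; pq = p * q;        % toy sizes
        m = 4; n = 2; mn = m * n;
        T = rand(mn, pq);                % transport, measured column-by-column as above
        % Primal: camera image under a projector pattern Pprime.
        Pprime = rand(pq, 1);
        Cprime = T * Pprime;
        % Dual: image "seen" by the projector when the camera position emits pattern Cdual.
        Cdual  = ones(mn, 1);            % e.g. uniform virtual illumination from the camera
        Pdual  = T' * Cdual;             % Helmholtz reciprocity: dual transport is T transposed
        dualImage = reshape(Pdual, q, p);% lay the projector pixels back out as an image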
  • Before we continue, let’s take a look at some initial results taken by our system. Here we show the primal image of a set of famous graphics objects. Here the projector is to the right. If we take a look at the dual image, we can see that we are now looking at these objects face on and the illumination is coming in from where the camera used to be. Note that the shading on all the objects is correct.
  • In this next example, we have a few objects viewed from above by the camera. The projector is in front of them and forms a fairly grazing angle with the floor, so the floor appears gray. If we look at the dual image, we can see the objects from in front, lit from above. Note that the floor is now brighter because the new light source (which was the original camera) views it from a more perpendicular direction. Also see, for example, that the shadow on the horse in the dual image corresponds to the portion of the horse that the pillar is occluding. So in some ways, what we have here is a real-life shadow map, where the primal is the shadow map for the dual. One thing I really like about this image is that you can see detail in the dual that is not visible in the primal. Take a look at the concentric rings in the detail at the base of the pillar. This detail is simply not visible in the primal because of the angle but is very clear in the dual. Also the detail of the lion heads is clearer in the dual than in the primal.
  • We observe that since we have the complete pixel-to-pixel transport, we can relight either the primal or dual images with a new 2D projector pattern.
  • As far as the equations are concerned, what the photosensor is doing is integrating all of the values of the C vector into a single scalar value. Assume that this integration is done uniformly across the field-of-view of the photosensor. So this is our new primal equation. Since the T matrix is no longer relating a vector to a vector, it collapses into a row vector of dimensions 1 x pq as shown here. We can measure this T vector in the same manner, by illuminating single pixels at the projector to extract the elements of T. If we transpose this vector into a column vector, we get the dual configuration, meaning the photograph taken by the projector and illuminated by the photocell. Here the incident illumination provided by C cannot be spatially varying, since C is a scalar. This means that our dual image is a uniform scaling of T. The picture shown here is an image that we acquired using the photocell shown and a projector.
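    A small sketch of this photocell case under the same toy assumptions (illustrative only): with a single-pixel sensor, T collapses to a 1 x pq row vector, and the dual photograph is just that vector reshaped, up to a uniform scale.

        % Toy setup: the photocell integrates all camera pixels into one number.
        p = 8; q = 6; pq = p * q;  mn = 50;
        Tfull = rand(mn, pq);            % pixel-to-pixel transport (as before)
        t = ones(1, mn) * Tfull;         % photocell transport: a 1 x pq row vector
        % Physically, t(j) is measured by lighting projector pixel j and reading the photocell.
        dualImage = reshape(t', q, p);   % dual photo from the projector's viewpoint,
        imshow(mat2gray(dualImage));     % defined only up to a uniform scale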
  • I will now show some videos that show the projector patterns animating.
  • As far as the equations are concerned, what the photosensor is doing is integrating all of the values of the C vector into a single scalar value. Assume that this integration is done uniformly across the field-of-view of the photosensor. So this is our new primal equation. Since the T matrix is no longer relating a vector to a vector, it collapses into a row vector of dimensions 1 x pq as shown here. We can measure this T vector in the same manner, by illuminating single pixels at the projector to extract the elements of T. If we transpose this vector into a column vector, we get the dual configuration, meaning the photograph taken by the projector and illuminated by the photocell. Here the incident illumination provided by C cannot be spatially varying, since C is a scalar. This means that our dual image is a uniform scaling of T. The picture shown here is an image that we acquired using the photocell shown and a projector.
  • http://web.media.mit.edu/~raskar/NPAR04/

    1. 1. Computational Camera & Photography: Camera Culture. Ramesh Raskar, MIT Media Lab. http://CameraCulture.info/
    2. 2. Where are the ‘cameras’?
    3. 4. Poll, Sept 18th 2009: When will the DSCamera disappear? Why? Like wristwatches?
    4. 5. Taking Notes
        • Use slides I post on the site
        • Write down anecdotes and stories
        • Try to get what is NOT on the slide
        • Summarize questions and answers
        • Take photos of demos + doodles on board
        • Use laptop to take notes
        • Send before next Monday
    5. 6. Synthetic Lighting Paul Haeberli, Jan 1992
    6. 7. Homework
        • Take multiple photos by changing lighting and other parameters. Be creative.
        • Mix and match color channels to relight (see the sketch below)
        • Due Sept 25th
        • Submit on Stellar (via link): commented source code; input and output images PLUS intermediate results; CREATE a webpage and send me a link
        • OK to use online software
        • Update results on the Flickr (group) page
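    As a starting point for this assignment, a minimal sketch of Haeberli-style synthetic relighting in MATLAB, assuming three registered photos of the same scene, each taken with a single light turned on; the file names and weights are placeholders:

        % Photos from a fixed camera, one light on at a time.
        A = im2double(imread('light_left.jpg'));
        B = im2double(imread('light_right.jpg'));
        C = im2double(imread('light_top.jpg'));
        % Light adds linearly, so any non-negative combination is a physically valid relighting.
        w = [0.8 0.3 1.5];                       % per-light brightness weights
        relit = w(1)*A + w(2)*B + w(3)*C;
        % Mixing weights per color channel simulates colored virtual lights.
        tinted = cat(3, 1.0*A(:,:,1) + 0.2*B(:,:,1), ...
                        0.4*A(:,:,2) + 0.9*B(:,:,2), ...
                        0.3*A(:,:,3) + 1.2*C(:,:,3));
        imshow([min(relit, 1) min(tinted, 1)]);  % clamp before display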
    7. 8. Debevec et al. 2002: ‘Light Stage 3’
    8. 9. Image-Based Actual Re-lighting: film the background in Milan, measure the incoming light, light the actress in Los Angeles, matte the background; matched LA and Milan lighting. Debevec et al., SIGGRAPH 2001
    9. 10. Second Homework: Extending Andrew Adams’ Virtual Optical Bench
    10. 11. Dual photography from diffuse reflections (the camera’s view): Homework Assignment 2. Sen et al., SIGGRAPH 2005
    11. 12. Beyond Visible Spectrum Cedip RedShift
    12. 13. Brief Introductions
        • Are you a photographer?
        • Do you use a camera for vision/image processing? Real-time processing?
        • Do you have a background in optics/sensors?
        • Name, Dept, Year, why you are here
        • Are you on the mailing list? On Stellar? Did you get email from me?
    13. 15. Dark bldgs, reflections on bldgs, unknown shapes
    14. 16. ‘Well-lit’ bldgs, reflections in bldg windows, tree and street shapes
    15. 17. Background is captured from the day-time scene using the same fixed camera (Night Image, Day Image, Context-Enhanced Image)
    16. 18. Mask is automatically computed from scene contrast
    17. 19. But, Simple Pixel Blending Creates Ugly Artifacts
    18. 20. Pixel Blending
    19. 21. Pixel Blending vs. Our Method: Integration of Blended Gradients
    20. 22. René Magritte, ‘The Empire of Light’ (Surrealism)
    21. 23. Time-lapse Mosaics: Magritte stripes over time
    22. 24. t
    23. 25. Range Camera Demo
    24. 27. http://www.flickr.com/photos/pgoyette/107849943/in/photostream/
    25. 28. Scheimpflug principle
    26. 29. Plan
        • Lenses: point spread function
        • Lightfields: What are they? What are their properties? How to capture them? What are the applications?
    27. 30. Format
        • 4 (3) assignments: hands-on with optics, illumination, sensors, masks; rolling schedule for overlap; we have cameras, lenses, electronics, projectors, etc.; vote on best project
        • Mid-term exam: tests concepts
        • 1 final project: should be novel and cool; conference-quality paper; award for best project
        • Take notes for 1 class
        • Lectures (and guest talks); in-class + online discussion
        • If you are a listener: participate in online discussion, dig up recent work, present one short 15-minute idea or new work
        • Credit: assignments 40%, project 30%, mid-term 20%, class participation 10%
        • Pre-reqs: helpful to know linear algebra and image processing, and to think in 3D; we will try to keep math to the essentials, but the concepts are complex
    28. 31. Assignments and logistics
        • You are encouraged to program in Matlab for image analysis
        • You may need to use C++/OpenGL/visual programming for some hardware assignments
        • Each student is expected to prepare notes for one lecture; these notes should be emailed to the instructor no later than the following Monday night (midnight EST). Revisions and corrections will be exchanged by email, and after changes the notes will be posted to the website before class the following week. (5 points)
        • Course mailing list: please make sure that your email ID is on the course mailing list; send email to raskar (at) media.mit.edu; please fill in the email/credit/dept sheet
        • Office hours: email is the best way to get in touch. Ramesh: raskar (at) media.mit.edu; Ankit: ankit (at) media.mit.edu
        • After class: Muddy Charles Pub (Walker Memorial, next to the tennis courts)
    29. 32. Schedule
        • 2 (Sept 18th): Modern Optics and Lenses, Ray-matrix operations
        • 3 (Sept 25th): Virtual Optical Bench, Lightfield Photography, Fourier Optics, Wavefront Coding
        • 4 (Oct 2nd): Digital Illumination, Hadamard Coded and Multispectral Illumination
        • 5 (Oct 9th): Emerging Sensors: high speed imaging, 3D range sensors, femto-second concepts, front/back illumination, diffraction issues
        • 6 (Oct 16th): Beyond Visible Spectrum: multispectral imaging and thermal sensors, fluorescent imaging, 'audio camera'
        • 7 (Oct 23rd): Image Reconstruction Techniques: deconvolution, motion and defocus deblurring, tomography, heterodyned photography, compressive sensing
        • 8 (Oct 30th): Cameras for Human Computer Interaction (HCI): 0-D and 1-D sensors, spatio-temporal coding, frustrated TIR, camera-display fusion
        • 9 (Nov 6th): Useful techniques in Scientific and Medical Imaging: CT scans, strobing, endoscopes, astronomy and long range imaging
        • 10 (Nov 13th): Mid-term exam; Mobile Photography, Video Blogging, Life Logs and Online Photo Collections
        • 11 (Nov 20th): Optics and Sensing in Animal Eyes: what can we learn from successful biological vision systems?
        • 12 (Nov 27th): Thanksgiving Holiday (no class)
        • 13 (Dec 4th): Final Projects
    30. 33. What are annoyances in photography? Why does a CCD camera behave retroreflectively? YouTube videos on camera tutorials (DoF etc.): http://www.youtube.com/user/MPTutor
    31. 34. Anti-Paparazzi Flash The anti-paparazzi flash: 1. The celebrity prey. 2. The lurking photographer. 3. The offending camera is detected and then bombed with a beam of light. 4. Voila! A blurry image of nothing much.
    32. 35. Anti-Paparazzi Flash: retroreflective CCD of cellphone camera. “Preventing Camera Recording by Designing a Capture-Resistant Environment,” Khai N. Truong, Shwetak N. Patel, Jay W. Summet, and Gregory D. Abowd, Ubicomp 2005
    33. 36. Auto Focus
        • Contrast method: compares the contrast of images at three depths; if in focus, the image has high contrast, else not
        • Phase method: compares two parts of the lens at the sensor plane; if in focus, the entire exit pupil sees a uniform color, else not (assumes the object has a diffuse BRDF)
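    A hedged sketch of the contrast method in MATLAB, assuming a small stack of images captured at different focus settings; the sharpness measure used here (variance of a Laplacian response) is just one common choice, and the file names are placeholders:

        % files{k} is the image captured at lens position k.
        files = {'focus_near.jpg', 'focus_mid.jpg', 'focus_far.jpg'};
        score = zeros(1, numel(files));
        lap   = fspecial('laplacian');
        for k = 1:numel(files)
            g = im2double(rgb2gray(imread(files{k})));
            r = imfilter(g, lap, 'replicate');
            score(k) = var(r(:));            % in-focus images have high local contrast
        end
        [~, best] = max(score);              % lens position giving the sharpest image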
    34. 37. Final Project Ideas
        • User interaction device: camera based; illumination based; photodetector or line-scan camera
        • Capture the invisible: tomography for internals; structured light for 3D scanning; fluorescence for transparent materials
        • Cameras in different EM/other spectra: WiFi, audio, magnetic, haptic, capacitive; visible/thermal-IR segmentation; thermal IR (emotion detection, motion detector); multispectral camera for discrimination (camel vs. sand)
        • Illumination: multi-flash with lightfield; Schlieren photography; strobing and colored strobing
        • External non-imaging sensors: camera with gyro/movement sensors to find the identity of the user; cameras with GPS and online geo-tagged photo collections; interaction between two cameras (with lasers on-board)
        • Optics: lightfield; coded aperture; bio-inspired vision
        • Time: time-lapse photos; motion blur
    35. 38. Kitchen Sink: Volumetric Scattering Volumetric Scattering : Chandrasekar 50, Ishimaru 78 Direct Global
    36. 39. “Origami Lens”: Thin Folded Optics (2007). “Ultrathin Cameras Using Annular Folded Optics,” E. J. Tremblay, R. A. Stack, R. L. Morrison, J. E. Ford, Applied Optics, 2007, OSA. Slides by Shree Nayar
    37. 40. Origami Lens: conventional lens vs. origami lens. Slides by Shree Nayar
    38. 41. Tools for Visual Computing: shadow-based, refractive, and reflective eyes. Fernald, Science [Sept 2006]
    39. 42. Photonic Crystals
        • ‘Routers’ for photons instead of electrons
        • Photonic crystal: nanostructured material with an ordered array of holes; a lattice of high-RI material embedded within a lower RI; high index contrast; 2D or 3D periodic structure
        • Photonic band gap: highly periodic structures that block certain wavelengths (creating a ‘gap’ or notch in wavelength)
        • Applications: ‘semiconductors for light’, mimicking the silicon band gap for electrons; highly selective/rejecting narrow-wavelength filters (Bayer mosaic?); light-efficient LEDs; optical fibers with extreme bandwidth (wavelength multiplexing); hype: future terahertz CPUs via optical communication on chip
    40. 43. Schlieren Photography
        • Images small index-of-refraction gradients in a gas
        • Invisible to the human eye (subtle mirage effect)
        • Setup: collimated light, lens, knife edge, camera; the knife edge blocks half the light unless the distorted beam focuses imperfectly
    41. 44. http://www.mne.psu.edu/psgdl/FSSPhotoalbum/index1.htm
    42. 45. Sample Final Projects
        • Schlieren Photography (best project award + prize in 2008)
        • Camera array for Particle Image Velocimetry
        • BiDirectional Screen
        • Looking Around a Corner (theory)
        • Tomography machine
    43. 46. Computational Illumination: Dual Photography; Direct-Global Separation; Multi-flash Camera
    44. 47. Computational Illumination
    45. 48. Computational Photography (block diagram): 4D light field, generalized optics, generalized sensor, processing; plus illumination
    46. 49. Computational Illumination (block diagram): novel illumination (light sources, modulators, generalized optics) producing a programmable 4D illumination field (+ time + wavelength), feeding novel cameras (generalized optics, generalized sensor, processing)
    47. 50. Edgerton 1930’s Not Special Cameras but Special Lighting
    48. 51. Edgerton 1930’s Multi-flash Sequential Photography Stroboscope (Electronic Flash) Shutter Open Flash Time
    49. 52. ‘Smarter’ Lighting Equipment: What Parameters Can We Change?
    50. 53. Computational Illumination: Programmable 4D Illumination Field + Time + Wavelength
        • Presence or absence, duration, brightness: flash/no-flash
        • Light position: relighting with a programmable dome; shape enhancement with multi-flash for depth edges
        • Light color/wavelength
        • Spatial modulation: Synthetic Aperture Illumination
        • Temporal modulation: TV remote, motion tracking, Sony ID-cam, RFIG
        • Exploiting (uncontrolled) natural lighting conditions: day/night fusion, time lapse, glare
    51. 54. Multi-flash Camera for Detecting Depth Edges
    52. 55. Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging. Ramesh Raskar, Karhan Tan, Rogerio Feris, Jingyi Yu, Matthew Turk. Mitsubishi Electric Research Labs (MERL), Cambridge, MA; U of California at Santa Barbara; U of North Carolina at Chapel Hill
    53. 57. Car Manuals
    54. 58. What are the problems with a ‘real’ photo in conveying information? Why do we hire artists to draw what can be photographed?
    55. 59. Shadows, clutter, many colors → highlight shape edges, mark moving parts, basic colors
    56. 60. A New Problem: shadows, clutter, many colors → highlight edges, mark moving parts, basic colors
    57. 61. Gestures: input photo, Canny edges, depth edges
    58. 62. Depth Edges with MultiFlash Raskar, Tan, Feris, Jingyi Yu, Turk – ACM SIGGRAPH 2004
    59. 67. Depth Discontinuities: internal and external shape boundaries, occluding contours, silhouettes
    60. 68. Depth Edges
    61. 69. Our Method Canny
    62. 70. Canny Intensity Edge Detection Our Method Photo Result
    63. 73. Imaging Geometry Shadow lies along epipolar ray
    64. 74. Imaging Geometry: the shadow lies along the epipolar ray; the epipole and the shadow are on opposite sides of the edge
    65. 75. Imaging Geometry: the shadow lies along the epipolar ray; the shadow and the epipole are on opposite sides of the edge
    66. 76. Depth Edge Camera: light epipolar rays are horizontal or vertical
    67. 77. Input, Left Flash, Right Flash; normalized ratio images Left/Max and Right/Max; union of depth edges
    68. 78. Input, Left Flash, Right Flash; normalized ratio images Left/Max and Right/Max; union of depth edges
    69. 79. Plot: a negative transition along the epipolar ray in the ratio image marks a depth edge
    70. 80. Plot: a negative transition along the epipolar ray in the ratio image marks a depth edge
    71. 81. MATLAB sketch of the depth-edge confidence computation (no magic parameters!):
        % Max composite
        maximg = max( max(left, right), max(top, bottom) );
        % Normalize by computing ratio images
        r1 = left  ./ maximg;  r2 = top    ./ maximg;
        r3 = right ./ maximg;  r4 = bottom ./ maximg;
        % Compute confidence map
        v = fspecial( 'sobel' );  h = v';
        d1 = imfilter( r1, v );  d3 = imfilter( r3, v );  % vertical sobel
        d2 = imfilter( r2, h );  d4 = imfilter( r4, h );  % horizontal sobel
        % Keep only negative transitions
        silhouette1 = d1 .* (d1 > 0);
        silhouette2 = abs( d2 .* (d2 < 0) );
        silhouette3 = abs( d3 .* (d3 < 0) );
        silhouette4 = d4 .* (d4 > 0);
        % Pick max confidence in each
        confidence = max( max(silhouette1, silhouette2), max(silhouette3, silhouette4) );
        imwrite( confidence, 'confidence.bmp' );
    72. 82. Depth Edges: Left, Top, Right, Bottom flash images; resulting depth edges vs. Canny edges
    73. 83. Gestures: input photo, Canny edges, depth edges
    74. 86. Flash Matting. Jian Sun, Yin Li, Sing Bing Kang, and Heung-Yeung Shum, SIGGRAPH 2006
    75. 88. Multi-light Image Collection [Fattal, Agrawala, Rusinkiewicz], SIGGRAPH 2007. Input photos; shadow-free result with enhanced surface detail but a flat look; some shadows give depth but lose visibility
    76. 89. Multiscale decomposition using the bilateral filter: combine detail at each scale across all the input images. Fuse the maximum gradient from each photo and reconstruct by 2D integration. Enhanced shadows. (A sketch of this gradient-domain fusion follows.)
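    A hedged sketch of the gradient-fusion idea in MATLAB, assuming registered grayscale photos of one scene under different lights; the Jacobi-style Poisson solve and the file names are illustrative simplifications, not the authors’ implementation:

        files = {'light1.png', 'light2.png', 'light3.png'};   % hypothetical inputs
        N = numel(files);
        g1 = im2double(rgb2gray(imread(files{1})));
        [H, W] = size(g1);
        gx = zeros(H, W, N);  gy = zeros(H, W, N);  stack = zeros(H, W, N);
        for k = 1:N
            stack(:,:,k) = im2double(rgb2gray(imread(files{k})));
            [gx(:,:,k), gy(:,:,k)] = gradient(stack(:,:,k));
        end
        % Per pixel, keep the gradient from whichever photo has the strongest one.
        [~, idx] = max(sqrt(gx.^2 + gy.^2), [], 3);
        [r, c] = ndgrid(1:H, 1:W);
        lin = sub2ind([H W N], r, c, idx);
        Gx = gx(lin);  Gy = gy(lin);
        % 2D integration: solve laplacian(I) = div(G) with simple Jacobi iterations.
        d = divergence(Gx, Gy);
        I = mean(stack, 3);                        % initial guess; also fixes the boundary
        for it = 1:2000
            I(2:end-1, 2:end-1) = 0.25 * ( I(1:end-2, 2:end-1) + I(3:end, 2:end-1) ...
                                         + I(2:end-1, 1:end-2) + I(2:end-1, 3:end) ...
                                         - d(2:end-1, 2:end-1) );
        end
        imshow(mat2gray(I));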
    77. 90. Computational Illumination: Programmable 4D Illumination Field + Time + Wavelength
        • Presence or absence, duration, brightness: flash/no-flash (matting for foreground/background)
        • Light position: relighting with a programmable dome; shape enhancement with multi-flash for depth edges
        • Light color/wavelength
        • Spatial modulation: Dual Photography, Direct/Global Separation, Synthetic Aperture Illumination
        • Temporal modulation: TV remote, motion tracking, Sony ID-cam, RFIG
        • Exploiting (uncontrolled) natural lighting conditions: day/night fusion, time lapse, glare
    78. 91. Dual Photography. Pradeep Sen, Billy Chen, Gaurav Garg, Steve Marschner, Mark Horowitz, Marc Levoy, Hendrik Lensch. Stanford University and Cornell University. August 2, 2005, Los Angeles, CA
    79. 92. The card experiment (primal configuration): book, camera, card, projector
    80. 93. The card experiment primal dual
    81. 94. Overview of dual photography: standard photograph from the camera; dual photograph from the projector
    82. 95. Outline: 1. Introduction to dual photography; 2. Application to scene relighting; 3. Accelerating acquisition; 4. Conclusions
    83. 96. Helmholtz reciprocity (diagram): a ray from the light to the eye with intensity I, and the reversed ray, undergo the same attenuation; primal vs. dual configurations of scene, projector, and photosensor
    84. 97. Helmholtz reciprocity (diagram): the projector and the photosensor/camera exchange roles between the primal and dual configurations
    85. 98. Forming a dual image (diagram): the projector scans the scene while a photosensor records values C0 through C7
    86. 99. Forming a dual image (diagram): the recorded values C0 through C7 assemble into the dual image seen from the projector
    87. 100. Physical demonstration: the projector was scanned across a scene while a photosensor measured the outgoing light (photosensor; resulting dual image)
    88. 101. Related imaging methods: an example of a “flying-spot” camera built at the dawn of TV (Baird 1926); the scanning electron microscope (Velcro® at 35x magnification, Museum of Science, Boston)
    89. 102. Dual photography for relighting (diagram): a p x q projector and an m x n camera in the primal configuration become a dual camera and a dual projector; the pixel-to-pixel transport is 4D
    90. 103. Mathematical notation (primal): projector pattern P is a pq x 1 vector, camera image C is an mn x 1 vector, and the transport T is an mn x pq matrix
    91. 104. Mathematical notation (primal): C = T P, with P (pq x 1), C (mn x 1), T (mn x pq)
    92. 105. Mathematical notation: projecting P = [1 0 0 0 0 0]ᵀ returns the first column of T at C
    93. 106. Mathematical notation: projecting P = [0 1 0 0 0 0]ᵀ returns the second column of T at C
    94. 107. Mathematical notation: projecting P = [0 0 1 0 0 0]ᵀ returns the third column of T at C
    95. 108. Mathematical notation: C = T P. Little interreflection gives a sparse T matrix; many interreflections give a dense T matrix
    96. 109. Mathematical notation: primal space C′ = T P′ with T of size mn x pq; dual space P″ = T″ C″ with T″ of size pq x mn. For projector pixel j and camera pixel i, T″ji = Tij, so T″ = Tᵀ
    97. 110. Definition of dual photography: primal space C′ (mn x 1) = T (mn x pq) P′ (pq x 1); dual space P″ (pq x 1) = Tᵀ (pq x mn) C″ (mn x 1)
    98. 111. Sample results primal dual
    99. 112. Sample results primal dual
    100. 113. Scene relighting: knowing the pixel-to-pixel transport between the projector and the camera allows us to relight the scene with an arbitrary 2D pattern (primal, dual)
    101. 114. Photosensor experiment: primal space c′ (a scalar) = T (1 x pq) P′ (pq x 1); dual space P″ (pq x 1) = Tᵀ (pq x 1) c″, i.e. a uniform scaling of Tᵀ
    102. 115. 2D relighting videos Relighting book scene with animated patterns Relighting box with animated pattern
    103. 116. Relighting with 4D incident light fields 2D 4D 6D Transport
    104. 117. From Masselus et al. SIGGRAPH ‘03 Relighting with 4D incident light fields
    105. 118. Relighting with 4D incident light fields 2D 4D 6D Transport
    106. 119. Relighting with 4D incident light fields 2D 4D 6D Transport
    107. 120. Advantages of our dual framework: acquisition of transport from multiple projectors cannot be parallelized; acquisition of transport using multiple cameras can!
    108. 121. “ Multiple” cameras with mirror array
    109. 122. Relighting video Relighting scene from multiple light positions using mirror array
    110. 123. Accelerating acquisition: a brute-force pixel scan is very slow (~10^6 patterns for a standard projector); we present a hierarchical, adaptive algorithm in our paper to parallelize this process
    111. 124. Adaptive acquisition video Demonstration of adaptive algorithm acquiring cover image
    112. 125. Parallelize to accelerate acquisition: we can extract several columns of T in parallel if no camera pixel sees contributions from more than one lit projector pixel at once; our algorithm adaptively finds which pixels do not conflict with each other and displays them in the same pattern
    113. 126. Overview of adaptive algorithm (a simplified sketch follows)
        • Start with a floodlit projector and recursively subdivide into 4 child blocks down to the pixel level
        • Blocks in the projector are scheduled at the same time if there is no conflict in the camera
        • Blocks that do not have any contribution are culled away
        • Store the energy at the last point of the hierarchy where it was measured
        • Details in the paper…
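    A much-simplified MATLAB sketch of the subdivide-and-cull idea, written sequentially and without the parallel scheduling of non-conflicting blocks; the ground-truth matrix Ttrue stands in for physically projecting a pattern and capturing a camera image, and all names and thresholds are illustrative:

        function T = acquireHierarchical(Ttrue, p, q, thresh)
            % Ttrue: mn x (p*q) stand-in for the scene; "capturing" a pattern is Ttrue * pattern(:).
            T = zeros(size(Ttrue));
            T = scanBlock(Ttrue, T, 1, p, 1, q, p, q, thresh);
        end

        function T = scanBlock(Ttrue, T, r0, r1, c0, c1, p, q, thresh)
            pattern = zeros(p, q);  pattern(r0:r1, c0:c1) = 1;   % floodlight this projector block
            img = Ttrue * pattern(:);                            % simulated camera image
            if max(img) < thresh                                 % no camera pixel responds:
                return;                                          % cull the whole block
            end
            if r0 == r1 && c0 == c1                              % a single projector pixel:
                T(:, sub2ind([p q], r0, c0)) = img;              % its image is one column of T
                return;
            end
            if r1 > r0, rs = [r0 floor((r0+r1)/2); floor((r0+r1)/2)+1 r1]; else, rs = [r0 r1]; end
            if c1 > c0, cs = [c0 floor((c0+c1)/2); floor((c0+c1)/2)+1 c1]; else, cs = [c0 c1]; end
            for a = 1:size(rs, 1)                                % recurse into the child blocks
                for b = 1:size(cs, 1)
                    T = scanBlock(Ttrue, T, rs(a,1), rs(a,2), cs(b,1), cs(b,2), p, q, thresh);
                end
            end
        end

        % Example use with a sparse toy transport:
        %   Ttrue = rand(64, 16*16) .* (rand(64, 16*16) > 0.98);
        %   T = acquireHierarchical(Ttrue, 16, 16, 1e-9);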
    114. 127. Results (pixel scan vs. hierarchical acquisition):
        Pixel-scan size (MB) | Pixel-scan time (min) | Hierarchical size (MB) | Hierarchical time (min) | Acceleration
        5.4e6 | 1.6e4 | 272   | 142   | 115x
        3.7e6 | 1.1e4 | 179   | 14    | 751x
        1.6e6 | 1.2e4 | 56    | 19    | 629x
        1.4e6 | 1.2e4 | 139   | 15    | 797x
        1.1e8 | 5.2e5 | 6.7e3 | 1.8e3 | 296x
    115. 128. Practical challenges and limitations
        • The projector’s dark pixels are not fully dark. Result: reduced SNR. Solution: use a high-contrast projector and subtract out the dark level
        • The camera has Bayer filters on its pixels. Result: colors are desaturated if the projector illuminates a small portion of the CCD. Solution: normalize energy with a flood-lit image
        • Little light transport from projector to camera. Result: the hierarchical scheme quits early, producing blurry images. Solution: get more light from projector to camera (increase the aperture, lengthen the exposure)
    116. 129. Future work: acquire a 6D data set using a camera array; relight with real 4D incident illumination captured by a camera array; explore further properties of the T matrix; combine dual photography with other techniques for efficient acquisition of the full 8D reflectance function
    117. 130. Conclusions: dual photography is a novel imaging technique that allows us to interchange the camera and the projector; it can accelerate acquisition of the 6D transport for relighting with 4D incident light fields; we developed an algorithm to accelerate the acquisition of the transport matrix for dual photography
    118. 131. The card experiment (primal configuration): book, camera, card, projector
    119. 132. The card experiment primal dual
    120. 133. Hierarchical construction of T primal dual
    121. 134. Photosensor experiment: primal space c′ (a scalar) = T (1 x pq) P′ (pq x 1); dual space P″ (pq x 1) = Tᵀ (pq x 1) c″, i.e. a uniform scaling of Tᵀ
    122. 135. Example (diagram): an 8 x 8 pixel projector and the projected patterns
    123. 136. Example
        • In this example, it took 21 patterns to perform the acquisition; it would take 64 with a brute-force scan
        • Without conflicts, we need (number of levels) x 4 + 1 patterns; in this case log4(64) x 4 + 1 = 3 x 4 + 1 = 13
        • (4 projected patterns shown)
    124. 137. Projector dark level
        • Unfortunately, projector “off” pixels are not completely dark: they emit light!
        • The leakage corresponds to the number of pixels that are on and to the distance from the nearest lit pixel
        • The projector used in the experiments was quoted at approximately 2000:1 contrast ratio (full on/off)
    125. 138. Camera Bayer pattern
        • Digital cameras do not typically sample all colors at every pixel; they sample based on a color pattern called a Bayer pattern
        • When the projector illuminates small regions as seen by the camera, the color can often be mismatched
        • To fix this, normalize energy with the fully-lit image
    126. 139. Results (pixel scan vs. hierarchical algorithm):
        Pixel-scan size (TB) | Pixel-scan time (days) | Hierarchical size (MB) | Hierarchical time (min) | #patterns
        5.4 | 10.9 | 272   | 136   | 3397
        3.7 | 7.3  | 179   | 14    | 352
        1.6 | 8.3  | 56    | 19    | 501
        1.4 | 8.3  | 139   | 15    | 369
        114 | 362  | 6,675 | 1,761 | 19,140
    127. 140. Computational Illumination: Programmable 4D Illumination Field + Time + Wavelength
        • Presence or absence, duration, brightness: flash/no-flash (matting for foreground/background)
        • Light position: relighting with a programmable dome; shape enhancement with multi-flash for depth edges
        • Light color/wavelength
        • Spatial modulation: Dual Photography, Direct/Global Separation, Synthetic Aperture Illumination
        • Temporal modulation: TV remote, motion tracking, Sony ID-cam, RFIG
        • Exploiting (uncontrolled) natural lighting conditions: day/night fusion, time lapse, glare
    128. 141. Visual Chatter in the Real World. Shree K. Nayar, Computer Science, Columbia University. With: Guru Krishnan, Michael Grossberg, Ramesh Raskar. Eurographics Rendering Symposium, June 2006, Nicosia, Cyprus. Support: ONR
    129. 142. Direct and Global Illumination (diagram: source, surface P, camera). A: direct; B: interreflection; C: subsurface scattering; D: volumetric scattering (participating medium); E: diffusion (translucent surface)
    130. 143. Shower Curtain: Diffuser Direct Global
    131. 144. Related Work: Shape from Interreflections (Nayar et al., ICCV 90); Inverse Light Transport (Seitz et al., ICCV 05); Dual Photography (Sen et al., SIGGRAPH 05)
    132. 145. Fast Separation of Direct and Global Images: create novel images of the scene; enhance brightness-based vision methods; gain new insights into material properties
    133. 146. Compute Direct and Global Images of a Scene from Two Captured Images Create Novel Images of the Scene Enhance Brightness Based Vision Methods New Insights into Material Properties
    134. 147. Direct and Global Components: Interreflections (diagram: source, camera, surface points i and j; direct and global radiance; BRDF and geometry)
    135. 148. High Frequency Illumination Pattern (diagram: source, camera, surface; a fraction of the source elements is activated; lit image measured at pixel i)
    136. 149. High Frequency Illumination Pattern (diagram: the complementary pattern gives the '+' and '−' images at pixel i)
    137. 150. Separation from Two Images direct global
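    A minimal MATLAB sketch of the two-image separation, assuming one photo under a high-frequency pattern with half the source pixels on and one under its exact complement, and ignoring the small correction for the activated fraction discussed in the talk; file names are placeholders:

        % Lpat: photo under a half-on high-frequency pattern; Lcomp: photo under its complement.
        Lpat  = im2double(imread('pattern_photo.png'));
        Lcomp = im2double(imread('complement_photo.png'));
        Lmax  = max(Lpat, Lcomp);     % every pixel is directly lit in one of the two photos
        Lmin  = min(Lpat, Lcomp);     % ... and receives only global light in the other
        direct  = Lmax - Lmin;        % Lmax ≈ Ld + Lg/2 and Lmin ≈ Lg/2
        global_ = 2 * Lmin;           % ('global' is a reserved word in MATLAB)
        imshow([direct global_]);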
    138. 151. Minimum Illumination Frequency
    139. 152. Other Global Effects: Subsurface Scattering translucent surface camera source i j
    140. 153. Other Global Effects: Volumetric Scattering surface camera source participating medium i j
    141. 154. Diffuse Interreflections Specular Interreflections Volumetric Scattering Subsurface Scattering Diffusion
    142. 155. Scene Direct Global
    143. 156. Verification scene (labeled): A: diffuse interreflection (board); B: specular interreflection (nut); C: subsurface scattering (marble); D: subsurface scattering (wax); E: translucency (frosted glass); F: volumetric scattering (dilute milk); G: shadow (fruit on board)
    144. 157. Verification Results (plots of the separation vs. checker size and vs. fraction of activated pixels, for scene points A through G)
    145. 159. V-Grooves: Diffuse Interreflections concave convex Psychophysics: Gilchrist 79, Bloj et al. 04 Direct Global
    146. 160. Mirror Ball: Failure Case Direct Global
    147. 161. Real World Examples: Can You Guess the Images?
    148. 162. Eggs: Diffuse Interreflections Direct Global
    149. 163. Wooden Blocks: Specular Interreflections Direct Global
    150. 164. Novel Images
    151. 165. Variants of Separation Method <ul><li>Shadow of Line Occluder </li></ul><ul><li>Shadow of Mesh Occluders </li></ul><ul><li>Coded Structured Light </li></ul><ul><li>Shifted Sinusoids </li></ul>
    152. 166. Stick Building Corner Shadow 3D from Shadows: Bouguet and Perona 99 direct global
    153. 167. Building Corner Direct Global
    154. 168. Shower Curtain: Diffuser Shadow Mesh direct global
    155. 169. Shower Curtain: Diffuser Direct Global
    156. 171. Kitchen Sink: Volumetric Scattering Volumetric Scattering : Chandrasekar 50, Ishimaru 78 Direct Global
    157. 172. Novel Image
    158. 173. Peppers: Subsurface Scattering Direct Global
    159. 174. Novel Images
    160. 175. Real or Fake? Direct / Global (R = real, F = fake)
    161. 176. Tea Rose Leaf Leaf Anatomy: Purves et al. 03 Direct Global
    162. 177. Translucent Rubber Balls Direct Global
    163. 178. Marble: When BSSRDF becomes BRDF. Subsurface Measurements: Jensen et al. 01, Goesele et al. 04. Scene, Direct, and Global shown at several checker resolutions
    164. 179. Hand Skin: Hanrahan and Krueger 93, Uchida 96, Haro 01, Jensen et al. 01, Igarashi et al. 05, Weyrich et al. 05 Direct Global
    165. 180. Hands: African American female, Chinese male, Spanish male (Direct and Global components for each)
    166. 181. Separation from a Single Image
    167. 182. Face Direct Global Sum
    168. 183. Skin Tone Control Skin Color and Lipids: Tsumura et al. 03
    169. 184. Blonde Hair Hair Scattering: Stamm et al. 77, Bustard and Smith 91, Lu et al. 00 Marschner et al. 03 Direct Global
    170. 185. Hair: Bidirectional Texture Function Direct Global Hair
    171. 186. Pebbles: 3D Texture Direct Global
    172. 187. Pebbles: Bidirectional Texture Function Direct Global Pebbles
    173. 188. Pink Carnation Spectral Bleeding: Funt et al. 91 Global Direct
    174. 190. Summary
        • Fast and simple separation method
        • Works for a wide variety of global effects
        • No prior knowledge of material properties required
        • Implications: generation of novel images; enhanced computer vision methods; insights into the properties of materials
    175. 191. Direct + Global = Scene (gallery of examples). www.cs.columbia.edu/CAVE
    176. 192. Computational Illumination: Programmable 4D Illumination Field + Time + Wavelength
        • Presence or absence, duration, brightness: flash/no-flash (matting for foreground/background)
        • Light position: relighting with a programmable dome; shape enhancement with multi-flash for depth edges
        • Light color/wavelength
        • Spatial modulation: Dual Photography, Direct/Global Separation, Synthetic Aperture Illumination
        • Temporal modulation: TV remote, motion tracking, Sony ID-cam, RFIG
        • Exploiting (uncontrolled) natural lighting conditions: day/night fusion, time lapse, glare
    177. 193. Day of the year Time of day
    178. 194. The Archive of Many Outdoor Scenes (AMOS): images from ~1000 static webcams, every 30 minutes since March 2006. Variations over a year and over a day. Jacobs, Roman, and Robert Pless, WUSTL, CVPR 2007
    179. 195. Analysing Time-Lapse Images (mean image + 3 components from a time lapse of downtown St. Louis over the course of 2 hours)
        • PCA: linear variations due to lighting and seasonal variation
        • Decompose by time scale: hour (haze and cloud for depth); day (changing lighting directions for surface orientation); year (effects of changing seasons highlight vegetation)
        • Applications: scene segmentation; global webcam localization by correlating a month of time-lapse video from an unknown camera with sunrise + sunset (localization accuracy ~50 miles), known nearby cameras (~25 miles), or satellite imagery (~15 miles)
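    A small MATLAB sketch of the PCA step, assuming a directory of registered webcam frames (the path and the use of a plain SVD are illustrative assumptions); it recovers the mean image and a few principal components like those shown on the slide:

        files = dir('webcam_frames/*.jpg');          % hypothetical time-lapse directory
        N = numel(files);
        g = im2double(rgb2gray(imread(fullfile('webcam_frames', files(1).name))));
        [H, W] = size(g);
        X = zeros(H*W, N);
        for k = 1:N
            g = im2double(rgb2gray(imread(fullfile('webcam_frames', files(k).name))));
            X(:, k) = g(:);
        end
        mu = mean(X, 2);
        [U, ~, ~] = svd(X - mu, 'econ');             % columns of U are the principal components
        meanImage = reshape(mu, H, W);
        pc1 = reshape(U(:, 1), H, W);                % dominant lighting / seasonal modes
        pc2 = reshape(U(:, 2), H, W);
        pc3 = reshape(U(:, 3), H, W);
        imshow([meanImage, mat2gray(pc1), mat2gray(pc2), mat2gray(pc3)]);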
    180. 196. 2 Hour time Lapse in St Louis: Depth from co-varying regions
    181. 197. Surface Orientation False Color PCA images
    182. 198. Image Fusion for Context Enhancement and Video Surrealism. Adrian Ilie, Ramesh Raskar, Jingyi Yu
    183. 199. Dark bldgs, reflections on bldgs, unknown shapes
    184. 200. ‘Well-lit’ bldgs, reflections in bldg windows, tree and street shapes
    185. 201. Background is captured from the day-time scene using the same fixed camera (Night Image, Day Image, Context-Enhanced Image). http://web.media.mit.edu/~raskar/NPAR04/
    186. 202. Factored Time Lapse Video Factor into shadow, illumination, and reflectance. Relight, recover surface normals, reflectance editing. [Sunkavalli, Matusik, Pfister, Rusinkiewicz], Sig’07
    187. 204. Computational Illumination: Programmable 4D Illumination Field + Time + Wavelength
        • Presence or absence, duration, brightness: flash/no-flash (matting for foreground/background)
        • Light position: relighting with a programmable dome; shape enhancement with multi-flash for depth edges
        • Light color/wavelength
        • Spatial modulation: Dual Photography, Direct/Global Separation, Synthetic Aperture Illumination
        • Temporal modulation: TV remote, motion tracking, Sony ID-cam, RFIG
        • Exploiting (uncontrolled) natural lighting conditions: day/night fusion, time lapse, glare
