ENGN2502 INTRO

Brown University Spring 2012 Course
ENGN2502 3D Photography

  1. ENGN2502 3D Photography, Spring 2012. Gabriel Taubin, Brown University.
  2. What do we mean by 3D Photography?
      • Techniques and systems using cameras and lights to capture the shape and appearance of 3D objects
      • The geometry of triangulation
      • Surface representations and data structures
      • Methods to smooth, denoise, edit, compress, transmit, simplify, and optimize very large polygonal models
      • Applications to computer animation, game development, electronic commerce, heritage preservation, reverse engineering, medicine, virtual reality, etc.
      • Project oriented
      • Goal: publication-quality final projects
      • Instructor permission required
  3. ENGN2502: 3D Photography [Spring 2012]
      • 3D Shape Capture
      • Representation / Data Structures
      • Simplification / Compression
      • Smoothing / Parameterization
      • Remeshing / Segmentation
      • Interactive Modeling
      • Optimization / Resampling
      • Surface Reconstruction
      • Out-of-Core Algorithms
  4. Motivation
      • Industry: reverse engineering, fast metrology, physical simulations
      • Entertainment: animating digital clays for movies or games
      • Archeology and Art: digitization of cultural heritage and artistic works
      • Medical Imaging: visualization, segmentation
  5. 3D Shape and Appearance Capture
      • Laser range scanning devices
      • Multi-camera systems
      • Structured lighting systems
  6. http://mesh.brown.edu/byo3d/
  7. http://mesh.brown.edu/byo3d
  8. Stereoscopic Photography
  9. Real-Time High-Definition Stereo on GPGPU using Progressive Multi-Resolution Adaptive Windows. Y. Zhao and G. Taubin, Image and Vision Computing, 2011. Screenshots of our real-time stereo system working in the field.
  10. Real-Time High-Definition Stereo on GPGPU using Progressive Multi-Resolution Adaptive Windows. Y. Zhao and G. Taubin, Image and Vision Computing, 2011.
      Processing pipeline: stereo frame grabbing, stereo rectification, multi-resolution pyramid generation, background modeling with shadow removal, foreground detection with dilation & erosion, stereo matching using adaptive windows with cross-checking, and disparity refinement. Coarse-to-fine matching on multiple resolutions (¼, ½, full), scanning from low to high resolution.
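The paper's GPGPU adaptive-window matcher is beyond a slide sketch, but the correspondence search at the heart of any such pipeline can be illustrated with a plain sum-of-absolute-differences block matcher. This is a slow baseline, not the paper's method; the function name and parameters are illustrative only.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=16, win=5):
    """Brute-force SAD block matching on rectified grayscale images:
    for each pixel in the left image, find the horizontal shift d that
    minimizes the sum of absolute differences over a (win x win) window
    against the right image."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

A real-time system replaces this exhaustive per-pixel search with the coarse-to-fine pyramid scan described on the slide: match at ¼ resolution first, then refine the disparity at ½ and full resolution within a narrow band.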
  11. Time of Flight 3D Scanning
      t = 2d / c = (5.0 m) / (3 × 10^8 m/s) ≈ 17 ns (round trip for d = 2.5 m)
      Single-shot structured lighting: MS Kinect
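The slide's timing arithmetic can be checked numerically; `tof_distance` is a hypothetical helper name for the sketch, not a device API.

```python
C = 3.0e8  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Depth from a time-of-flight measurement: the pulse travels to the
    surface and back, so d = c * t / 2."""
    return C * round_trip_time_s / 2.0

# The slide's example: a ~17 ns round trip corresponds to d ≈ 2.5 m
d = tof_distance(16.7e-9)
```

The nanosecond scale of the round trip is why time-of-flight scanners need fast, specialized timing hardware rather than ordinary camera shutters.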
  12. 3D Triangulation: Ray-Plane Intersection
      Diagram: the projected plane and the camera ray, expressed with respect to the projector and camera coordinate systems, meet at the intersection point.
  13. Triangulation by Line-Plane Intersection
      Projected light plane: P = { p : nᵀ(p − q_p) = 0 }
      Camera ray: L = { p = q_L + λ v }
      Both are expressed in the same coordinate system. The point on the object being scanned is the intersection of the light plane with the camera ray.
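The line-plane intersection has a closed form: substituting the ray into the plane equation gives λ = nᵀ(q_p − q_L) / (nᵀ v). A minimal NumPy sketch, with function name and degenerate-case handling of my own choosing:

```python
import numpy as np

def ray_plane_intersect(q_L, v, n, q_p, eps=1e-9):
    """Intersect the camera ray L = { q_L + lam * v } with the projected
    light plane P = { p : n . (p - q_p) = 0 }.
    Solving n . (q_L + lam*v - q_p) = 0 for lam gives
    lam = n . (q_p - q_L) / (n . v)."""
    denom = np.dot(n, v)
    if abs(denom) < eps:
        return None  # ray is (nearly) parallel to the plane
    lam = np.dot(n, q_p - q_L) / denom
    return q_L + lam * v
```

In a structured lighting system, n and q_p come from the projector calibration for the plane illuminating the pixel, and q_L and v come from the camera calibration for that pixel's viewing ray.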
  14. Triangulation by Line-Line Intersection
      Projected light ray: L1 = { p = q1 + λ1 v1 }
      Camera ray: L2 = { p = q2 + λ2 v2 }
      The lines may not intersect exactly, so the point p on the object being scanned must be estimated.
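When the two rays are skew, one common estimate (the slide does not fix a method; this is a standard choice) is the midpoint of the two closest points, obtained by solving a 2×2 linear system in the ray parameters:

```python
import numpy as np

def triangulate_midpoint(q1, v1, q2, v2, eps=1e-9):
    """Closest-point ('midpoint') triangulation for two rays that may not
    intersect exactly.  Finds s, t minimizing |(q1 + s*v1) - (q2 + t*v2)|
    via the normal equations, then returns the midpoint of the two
    closest points."""
    w0 = q1 - q2
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d, e = v1 @ w0, v2 @ w0
    denom = a * c - b * b
    if abs(denom) < eps:
        return None  # rays are (nearly) parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = q1 + s * v1   # closest point on ray 1
    p2 = q2 + t * v2   # closest point on ray 2
    return 0.5 * (p1 + p2)
```

The distance |p1 − p2| is also a useful diagnostic: a large gap between the closest points signals a bad correspondence or calibration error.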
  15. Triangulation and Scanning with Swept Planes: Structured Lighting using Projector-Camera Systems
  16. Gray Code Structured Lighting
      Point Grey Flea2 (15 Hz @ 1024 × 768), Mitsubishi XD300U (50-85 Hz @ 1024 × 768)
      3D Reconstruction using Structured Light [Inokuchi 1984]
      • Recover 3D depth for each pixel using ray-plane intersection
      • Determine correspondence between camera pixels and projector planes by projecting a temporally multiplexed binary image sequence
      • Each image is a bit-plane of the Gray code for each projector row/column
  17. Gray Code Structured Lighting (continued)
      • Encoding algorithm: integer row/column index → binary code → Gray code
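The encoding step on the slide (integer index → binary → Gray code) is one line in practice; here is a sketch with a matching decoder (function names are mine):

```python
def to_gray(n):
    """Binary-reflected Gray code of integer n.  Adjacent codes differ in
    exactly one bit, so misclassifying a single image in the projected
    sequence perturbs the decoded stripe index by at most one."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by XOR-folding successively shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

To decode a projector column, the camera thresholds each captured image into one bit, concatenates the bits into a Gray code, and applies `from_gray` to recover the column index used for ray-plane intersection.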
  18. Gray Code Structured Lighting Results
  19. 3D Photography in Android
  20. Typical Surface Reconstruction Pipeline
      Oriented points (positions & normals) → reconstruction method (implicit surface) → surface representation (polygon mesh)
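The middle stage fits an implicit function whose zero level set is the reconstructed surface; a mesher (e.g. Marching Cubes) then extracts the polygon mesh. A minimal sketch of the zero-level-set idea, using an analytic sphere signed distance function in place of a fitted implicit function (`sphere_sdf` is illustrative, not the course's reconstruction method):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, positive outside.
    The surface is the zero level set of this function."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the implicit function along a line through the sphere center;
# the surface lies between adjacent samples where the sign flips.
xs = np.linspace(-2.0, 2.0, 40)
pts = np.stack([xs, np.zeros_like(xs), np.zeros_like(xs)], axis=-1)
vals = sphere_sdf(pts, np.array([0.0, 0.0, 0.0]), 1.0)
crossings = np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
```

A full reconstruction replaces the analytic SDF with one estimated from the oriented points, and performs this sign-change search over a 3D grid rather than a single line.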
  21. SSD: Smooth Signed Distance Surface Reconstruction. F. Calakli and G. Taubin, Computer Graphics Forum, 2011.
      • A new mathematical formulation and a particular algorithm
      • To reconstruct a watertight surface from a static oriented point cloud
  22. Particularly Good at Extrapolating Missing Data
  23. http://mesh.brown.edu/ssd
  24. 3D Reconstruction & Analysis of Bat Flight Maneuvers
      • 3D Reconstruction of Bat Flight Kinematics from Sparse Multiple Views, by A. Bergou, S. Swartz, G. Taubin, and K. Breuer, 4DMOD, 2011.
      • 3D Reconstruction and Analysis of Bat Flight Maneuvers from Sparse Multiple View Video, by A. Bergou, S. Swartz, K. Breuer, and G. Taubin, BioVis, 2011.
      • Falling with Style - The Role of Wing Inertia in Bat Flight Maneuvers, by A. Bergou, D. Riskin, G. Taubin, S. Swartz, and K. Breuer, Annual Meeting, Society for Integrative and Comparative Biology, 2011.
      • Falling with Style - Bat Flight Maneuvers, by A. Bergou, D. Riskin, G. Taubin, S. Swartz, and K. Breuer, Bulletin of the American Physical Society, Vol. 55, 2010.
  25. How do we measure bats?
      • Multiple synchronized 1000 fps+ cameras
      • Controlled environment (backdrop & illumination)
      • Bats trained to land on a landing pad
      • Experiments with several species
  26. • Bats have highly articulated wings
      • Very complex wing motion
      • Current goal: detailed reconstruction of wing and body kinematics and derivatives from visual data
      • Skeleton model with 52 degrees of freedom
      • Geometry parameterized by 37 constants
      • Future goal: model-less dynamic shape reconstruction
  27. Some Methods to Capture 3D Point Clouds
      Setup: an 8-megapixel camera with a multi-flash attachment, a shadow backdrop, and a turntable.
  28. Beyond Silhouettes: Surface Reconstruction using Multi-Flash Photography. D. Crispell, D. Lanman, P. Sibley, Y. Zhao, and G. Taubin [3DPVT 2006]
  29. Multi-Flash 3D Photography: Capturing the Shape and Appearance of 3D Objects
      A new approach for reconstructing 3D objects using shadows cast by depth discontinuities, as detected by a multi-flash camera. Unlike existing stereo vision algorithms, this method works even with plain surfaces, including unpainted ceramics and architecture.
      Data capture: a turntable and a digital camera are used to acquire data from 670 viewpoints. For each viewpoint, we capture a set of images using illumination from four different flashes. Future embodiments will include a small, inexpensive handheld multi-flash camera.
      Recovering a smooth surface: the reconstructed point cloud can possess errors, including gaps and noise. To minimize these effects, we find an implicit surface which interpolates the 3D points. This method can be applied to any 3D point cloud, including those generated by laser scanners.
      Figures: multi-flash turntable sequence (input image), recovered 3D point cloud, estimated shape, and estimated appearance (Phong BRDF model).
  30. Raskar et al. [SIGGRAPH 2004]
      • Depth discontinuity estimation for non-photorealistic rendering
        - Camera static with respect to object
        - 4 images are captured, each illuminated by a different flash
        - Flashes located close to the camera lens
        - Image processing extracts and combines shadows
  31. What do we gain? Using only silhouettes vs. using all depth discontinuities.
  32. Silhouettes vs. Depth Discontinuities [2D]
      Diagram: a 2D slice contrasting a silhouette point with a general depth discontinuity.
  33. Differential Shape From Silhouette [Cipolla & Blake 92]
      Diagram: the camera motion is known, the depth discontinuity is measured in the image, and the quantities are measured in an epipolar slice.
  34. Computing r'(t): Depth Discontinuity Edge Tracking
  35. VFIso Results [2006, 110 × 110 × 110 grid]
  36. Multi-Flash 3D Photography: Photometric Reconstruction
      Using the implicit surface, we can determine which points are visible from each viewpoint. To model the material properties of the surface, we fit a per-point Phong BRDF model to the set of visible reflectance observations (using a total of 67 viewpoints).
      Figures: ambient, diffuse, and specular components; multi-flash turntable sequence; Phong (specular) and Phong (diffuse) images; 3D point cloud, implicit surface, and estimated Phong appearance model.
  37. Surround Structured Lighting: 3-D Scanning with Orthographic Illumination. D. Lanman, D. Crispell, and G. Taubin [CVIU 2009]
  38. 3D Slit Scanning with Planar Constraints. M. Leotta, A. Vandergon, and G. Taubin [CGF 2008]
      Setup: a laser pointer with a cylindrical lens, and two cameras (Camera 1 and Camera 2).
  39. Catadioptric Stereo Implementation
  40. Can Estimate Points Visible From One Camera
  41. Surface Reconstruction from Multi-View Data
  42. NSF Digital Archaeology Project
      • The main goal is to automate the tedious processes of data collection and documentation at the excavation site, as well as to provide visualization tools to explore the data collected in the database
      • Also to solve specific problems in archaeology using computer vision techniques
      • We first used a network of cameras to capture the activity at the excavation site, to reconstruct the shape of the environment as it is being excavated, to reconstruct layers, and to locate finds in 3D
      • Now we use multi-view stereo from photographs captured by a handheld digital camera
  43. REVEAL Archaeological Data Acquisition
      Assisted data acquisition, algorithmic reconstruction, integrated multi-format analysis.
      • Data acquisition: import photos, videos, and laser scans and connect them to database objects; advanced algorithms improve speed and accuracy with computer-assisted data entry
      • Automatically convert photos to 3D models; semi-automatically assemble fragments into artifacts
      • Import external data: site plans, laser-scanned models
      • REVEAL database holds objects (artifacts, excavations, areas, sites) and data (text, photos, video, 3D models, rule-based reconstructions)
  44. REVEAL Archaeological Analysis
      Data integrated and synchronized in tabular, plan drawing, 3D spatial, image, and video formats.
      Typical activity sequence: select artifacts on the site plan; display photos of selected artifacts; examine the relationship of artifacts in situ in the auto-generated 3D excavation model; export formatted artifact data for inclusion in the site publication.
  45. Bundler [Snavely et al. 2006]: http://phototour.cs.washington.edu/bundler/
      MVS software: Patch-based Multi-View Stereo (PMVS) [Furukawa and Ponce 2008]: http://grail.cs.washington.edu/software/pmvs/
  46. Accurate 3D Footwear Impression Recovery From Photographs, by F. A. Andalo, F. Calakli, G. Taubin, and S. Goldenstein, International Conference on Imaging for Crime Detection and Prevention (ICDP-2011). Results comparable to a 3D laser scanner.
  47. Challenges
      • Uniform sampling
      • Non-uniform sampling
      • Noisy data
      • Misaligned scans
  48. From Multi-View Video Cameras
  49. View Interpolation From Multi-View Video Cameras
  50. View Interpolation From Multi-View Video Cameras
  51. View Interpolation From Multi-View Video Cameras
  52. View Interpolation From Multi-View Video Cameras
  53. With Background Segmentation
  54. Ongoing work
  55. ENGN2502 3D Photography, Spring 2012. I hope to see you in class!
