Virtual Reality 3D home applications

A presentation about a 3D home application based on autostereoscopic technology involving lenticular lenses and 2D-plus-depth content.

  1. 3D Home Technology
  2. Outline:
     • Introduction
     • Capturing 3D scenery
     • Processing the captured data
     • Displaying the results for 3D viewing
     • Conclusion
  3. Several fields in autostereoscopic 3D home technology:
     • 3D home projectors
     • 3D TV
     • 3D computer screens
     • 3D mobile phones...
  4. This presentation focuses on 3D TV, and more precisely on one kind of 3D TV using lenticular lenses: the Philips WOW auto-stereoscopic 3D display products.
  5. Two main approaches to capturing 3D scenery:
     • using multiple traditional cameras
     • using one video associated with a per-pixel depth map
  6. Multiple-camera approach. Objective: recording the same scene from different points of view.
  7. • Advantage:
        • provides exact views for each eye
     • Disadvantages:
        • can be optimized only for one receiver configuration (size and number of views of the display)
        • the amount of data needed to transmit the two monoscopic color videos is quite large
  8. 2D-plus-depth approach. Objective: using only a 2D picture/video associated with a map representing the depth of the scene.
  9. • Advantages:
        • no geometric model of the scene/environment is needed
        • rendering can be done proportionally to the screen size
     • Disadvantage:
        • visual artifacts are created during the warping
  10. Depth-image-based rendering (DIBR):
     • techniques that allow depth perception from a monoscopic video plus per-pixel depth information
     • create one or more "virtual" views of the 3D scene
  11. 3D image warping: "warp" the pixels of the image so they appear in the correct place for a new viewpoint. Creation of two virtual views; a minimal sketch follows.
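     A minimal sketch of such a warp, assuming the simplest 2D-plus-depth case in which every pixel is shifted horizontally by a disparity proportional to its depth value. The function name, the depth convention (255 = nearest) and the disparity scale are illustrative assumptions, not taken from the slides:

        import numpy as np

        def warp_view(image, depth, eye_offset, max_disparity=20):
            # Shift each pixel horizontally; pixels closer to the camera
            # (larger depth value under the assumed convention) shift more.
            h, w = depth.shape
            d = depth.astype(np.float32) / 255.0              # normalize to [0, 1]
            disparity = np.round(eye_offset * max_disparity * d).astype(int)

            warped = np.zeros_like(image)
            filled = np.zeros((h, w), dtype=bool)
            for y in range(h):
                for x in range(w):
                    nx = x + disparity[y, x]
                    if 0 <= nx < w:
                        warped[y, nx] = image[y, x]           # copy pixel to its new place
                        filled[y, nx] = True
            return warped, filled                             # unfilled pixels are holes

        # Two virtual views, one per eye, from a single 2D-plus-depth frame:
        # left_view,  left_holes  = warp_view(frame, depth_map, eye_offset=-1)
        # right_view, right_holes = warp_view(frame, depth_map, eye_offset=+1)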
  12. Pinhole camera model: defines a mapping from image points to rays in space (one center of projection). Each image-space point can be placed in one-to-one correspondence with a ray that originates from the Euclidean-space origin.
  13. This mapping function from image-space coordinates to rays can be described with a linear system (see the reconstruction below).
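     The equation itself appears only as an image on the original slide; in the notation of the cited McMillan course notes, the pixel-to-ray mapping is usually written as

        \mathbf{d} = \mathbf{P}\,\mathbf{x}
                   = \begin{bmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{bmatrix}
                     \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
                   = u\,\mathbf{a} + v\,\mathbf{b} + \mathbf{c},

     where \mathbf{x} = (u, v, 1)^{T} holds the homogeneous image coordinates of a pixel and the columns \mathbf{a}, \mathbf{b}, \mathbf{c} of the 3x3 matrix \mathbf{P} encode the image-plane basis vectors and the position of the image origin, so every pixel maps to a ray \mathbf{d} leaving the center of projection.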
  14. Two different centers of projection (C1 and C2) linked by rays to one point X.
  15. From the right image, x1 determines a ray d1 via the pinhole camera mapping; from the left image, x2 likewise determines a ray d2. The coordinates of the point X can therefore be expressed along either ray, where t1 and t2 are unknown scaling factors that stretch each viewing ray from its center of projection until it coincides with the point X (reconstructed equations below).
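     The slide's equations are likewise images; reconstructed under the same notation (not verbatim from the slide) they would read

        \mathbf{d}_1 = \mathbf{P}_1\mathbf{x}_1, \qquad \mathbf{d}_2 = \mathbf{P}_2\mathbf{x}_2,

        \mathbf{X} = \mathbf{C}_1 + t_1\,\mathbf{P}_1\mathbf{x}_1 = \mathbf{C}_2 + t_2\,\mathbf{P}_2\mathbf{x}_2 .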
  16. Setting t2/t1 = 1 yields the McMillan & Bishop warping equation (reconstructed below): per-pixel distance values are used to warp pixels to the proper location depending on the current eye position.
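     In its usual form (following the McMillan course notes; the exact layout on the slide may differ) the warping equation is

        \mathbf{x}_2 \;\doteq\; \delta(\mathbf{x}_1)\,\mathbf{P}_2^{-1}\left(\mathbf{C}_1 - \mathbf{C}_2\right) \;+\; \mathbf{P}_2^{-1}\mathbf{P}_1\,\mathbf{x}_1,

     where \delta(\mathbf{x}_1) = 1/t_1 is the generalized disparity of the pixel (large for nearby points, small for distant ones) and \doteq denotes equality up to the scale factor inherent in homogeneous image coordinates.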
  17. Main problem of warping: horizontal changes in the depth map reveal areas that are occluded in the original view and become visible in some virtual views. To resolve this problem, filters and extrapolation techniques are used to fill these occlusions (a simple hole-filling sketch follows). In pink: the newly exposed areas corresponding to the new virtual camera.
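     A very simple stand-in for such hole filling, assuming the holes are the pixels left unfilled by the warp_view sketch above and that colour is extrapolated along each scanline. This is one of several possible strategies and not necessarily the one used in the presentation:

        import numpy as np

        def fill_holes(warped, filled):
            # Propagate the last valid pixel along each scanline into the
            # disoccluded (newly exposed) areas left behind by the warp.
            out = warped.copy()
            h, w = filled.shape
            for y in range(h):
                last = None
                for x in range(w):
                    if filled[y, x]:
                        last = out[y, x].copy()
                    elif last is not None:
                        out[y, x] = last        # fill the hole with that colour
            return out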
  18. Some examples of pre-processing for the depth map (sketched below):
     • Smoothing of the depth map: to reduce the holes, smooth the depth map with filters such as average or Gaussian filters; they remove the sharp discontinuities from the depth image.
     • Reshaping the dynamic range of the depth map: expanding the dynamic range of the higher depth values and compressing the lower ones improves the rendering quality.
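     A sketch of both pre-processing steps, assuming an 8-bit depth map. The Gaussian sigma and the power curve used for the range reshaping are illustrative choices; the slide only states the intent:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def preprocess_depth(depth, sigma=3.0, gamma=2.0):
            d = depth.astype(np.float32) / 255.0
            # Smoothing: remove sharp depth discontinuities so that the
            # subsequent warping produces smaller holes.
            d = gaussian_filter(d, sigma=sigma)
            # Range reshaping: a power curve with gamma > 1 expands the
            # higher depth values and compresses the lower ones.
            d = np.power(d, gamma)
            return np.clip(d * 255.0, 0, 255).astype(np.uint8)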
  19. • Butterflies 2D plus depth
     • Butterflies 3D
     • AI broken 2D plus depth
  20. Lenticular lenses:
     • Basics of 3D viewing: two images, one for the left eye and one for the right eye.
     • Lenticular lens technology: both images are projected in different directions, so that each one corresponds to one eye (a simple interleaving sketch follows).
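     A deliberately simplified two-view, whole-pixel sketch of this idea. Real lenticular displays such as the Philips ones interleave many views at sub-pixel granularity under slanted lenses, which is not modelled here:

        import numpy as np

        def interleave_two_views(left, right):
            # Alternate pixel columns between the two eye images; under the
            # lenticular lens array each column is refracted towards a
            # different eye, so each eye sees its own image.
            out = np.empty_like(left)
            out[:, 0::2] = left[:, 0::2]     # even columns -> left eye
            out[:, 1::2] = right[:, 1::2]    # odd columns  -> right eye
            return out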
  21. • The lens array generates a parallax difference.
     • Series of viewing spots with transitions.
  22. Conclusion:
     • The technology presented here is one of the most developed for 3D home use and is predicted to become accessible to the public in the next few years.
     • The capturing and processing techniques used are flexible enough to fit the display receiver.
     • Any 2D scene can be converted for 3D display.
  23. Sources:
     • SIGGRAPH '99 course notes by Leonard McMillan
     • "A 3D-TV Approach Using Depth-Image-Based Rendering (DIBR)" by Christoph Fehn
     • "Distance Dependent Depth Filtering in 3D Warping for 3DTV" by Ismaël Daribo, Christophe Tillier and Béatrice Pesquet-Popescu
     Questions?
