Virtual Reality 3D home applications
A presentation about a 3D home application based on autostereoscopic technology involving lenticular lenses and 2D-plus-Depth content.

Transcript

  • 1. 3D Home Technology
  • 2.
    • Introduction
    • Capturing 3D scenery
    • Processing the captured data
    • Displaying the results for 3D viewing
    • Conclusion
  • 3.
    • Several fields in auto-stereoscopic 3D home technology:
      • 3D home projectors
      • 3D TV
      • 3D computer screens
      • 3D mobile phones...
  • 4.
    • Focus on 3D TV for this presentation.
    • More precisely, on one kind of 3D TV using lenticular lenses:
    • Philips WOW auto-stereoscopic 3D display products
  • 5.
    • Two main approaches:
      • Use multiple traditional cameras
      • One video associated with a per-pixel depth map
  • 6.
    • Objective: recording the same scene from different points of view.
  • 7.
      • Advantage:
        • provides exact views for each eye
      • Disadvantage:
        • can be optimized for only one receiver configuration (size and number of views of the display)
        • the amount of data needed to transmit the two monoscopic color videos is substantial.
  • 8.
    • Objective: using only a 2D picture/video together with a map representing the depth of the scene.
  • 9.
      • Advantage:
        • No geometric model of the scene/environment is needed
        • Rendering can be scaled in proportion to the screen size.
      • Disadvantage:
        • Visual artifacts are created during the warping.
  • 10.  
  • 11.
          • Depth-image-based rendering (DIBR)
    • Techniques to allow depth perception from a monoscopic video + per-pixel depth information.
    • Create one or more “virtual” views of the 3D scene.
  • 12.  
  • 13.
    • 3D image warping: “warp” the pixels of the image so they appear in the correct place for a new viewpoint.
    • Creation of 2 virtual views :
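For the common simplified case of a parallel camera setup, the warping reduces to a per-pixel horizontal shift proportional to depth. A minimal sketch in Python/NumPy (the function name, the 0–255 2D-plus-Depth depth convention, and the linear depth-to-disparity mapping are illustrative assumptions, not the exact method of the cited systems):

```python
import numpy as np

def warp_view(image, depth, max_disparity=4, direction=1):
    """Warp a 2D image into a virtual view by shifting each pixel
    horizontally by a disparity derived from its per-pixel depth.
    Depth follows the 2D-plus-Depth convention (0 = far, 255 = near),
    so nearer pixels shift more. Disoccluded pixels are left as -1."""
    h, w = image.shape
    warped = np.full((h, w), -1, dtype=np.int32)
    disparity = np.rint(depth.astype(np.float32) / 255.0 * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + direction * disparity[y, x]
            if 0 <= nx < w:
                warped[y, nx] = image[y, x]
    return warped

# Two virtual views from one image + depth map (synthetic example):
image = np.arange(24, dtype=np.int32).reshape(4, 6)
depth = np.zeros((4, 6), dtype=np.uint8)
depth[:, 2] = 255                       # one "near" column
left  = warp_view(image, depth, max_disparity=2, direction=+1)
right = warp_view(image, depth, max_disparity=2, direction=-1)
# left[:, 2] is now -1: a disocclusion hole revealed by the shift.
```

Opposite `direction` values yield the left- and right-eye views from the same source material.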
  • 14.
    • Pinhole camera model: defines a mapping from image points to rays in space (one center of projection)
    • Each image-space point can be placed into one-to-one correspondence with a ray that originates from the Euclidean-space origin.
  • 15.
    • This mapping function from image-space coordinates to rays can be described with a linear system:
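The linear system itself was an image in the original slide; in McMillan's standard notation it can be written with homogeneous image coordinates and a 3×3 mapping matrix P (this reconstruction follows the cited SIGGRAPH ’99 course notes):

```latex
d = P\,x, \qquad x = \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
```

Here d is the direction of the ray through image point (u, v), originating at the center of projection.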
  • 16.  
  • 17.
    • Two different centers of projection (C₁ and C₂) linked by rays to one point X.
  • 18.
    • From the right image: x₁ determines a ray d₁ via the pinhole camera mapping.
    • From the left image: x₂ likewise determines a ray d₂.
    • The coordinates of the point X can therefore be expressed from either view.
    • Here t₁ and t₂ are unknown scaling factors for the vectors from each center of projection that make them coincident with the point X.
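The equations that appeared as images in the original slide can be reconstructed in the same notation (P₁ and P₂ are the pinhole mapping matrices of the two views, following the standard McMillan derivation):

```latex
X = C_1 + t_1\, d_1 = C_1 + t_1 P_1\, x_1,
\qquad
X = C_2 + t_2\, d_2 = C_2 + t_2 P_2\, x_2
```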
  • 19.
    • With t₂ / t₁ = 1:
    • McMillan & Bishop warping equation:
    • Per-pixel distance values are used to warp pixels to the proper location, depending on the current eye position.
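A common statement of the McMillan–Bishop warping equation, obtained by equating the two expressions for X and absorbing the overall scale (notation as in the pinhole model above; δ(x₁) = 1/t₁ is the generalized disparity, and ≐ denotes equality up to scale):

```latex
x_2 \doteq P_2^{-1}\,(C_1 - C_2)\,\delta(x_1) \;+\; P_2^{-1} P_1\, x_1,
\qquad \delta(x_1) = \tfrac{1}{t_1}
```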
  • 20.  
  • 21.
    • Main problem of warping: horizontal changes in the depth map reveal areas that are occluded in the original view and become visible in some virtual views.
    • To resolve this problem, filters and extrapolation techniques are used to fill these occlusions.
    • In pink: the newly exposed areas corresponding to the new virtual camera.
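A crude sketch of such extrapolation: fill each hole with the nearest valid pixel to its left. Real systems use directional background extrapolation or inpainting; the function name and the -1 hole marker here are illustrative assumptions:

```python
import numpy as np

def fill_holes(warped, hole=-1):
    """Fill disocclusion holes row by row by extrapolating the last
    valid pixel seen while scanning left to right."""
    out = warped.copy()
    h, w = out.shape
    for y in range(h):
        last = 0
        for x in range(w):
            if out[y, x] == hole:
                out[y, x] = last      # extrapolate the previous valid value
            else:
                last = out[y, x]
    return out

row = np.array([[5, -1, -1, 7]])
filled = fill_holes(row)              # → [[5, 5, 5, 7]]
```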
  • 22.
    • Some examples of pre-processing for the depth map:
    • Smoothing of the depth map:
      • To reduce the holes, smooth the depth map with filters such as average or Gaussian filters; they remove the sharp discontinuities from the depth image.
    • Reshaping the dynamic range of the depth map:
      • Expanding the dynamic range of higher depth values and compressing lower ones improves the rendering quality.
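Both pre-processing steps can be sketched in a few lines of NumPy. A box filter stands in here for the average/Gaussian filters mentioned above, and a gamma curve for the range reshaping; the function names and the gamma parameterization are illustrative assumptions:

```python
import numpy as np

def smooth_depth(depth, k=5):
    """Box-average smoothing of the depth map: softens sharp depth
    discontinuities, which shrinks the disocclusion holes that
    warping would otherwise produce."""
    pad = k // 2
    d = np.pad(depth.astype(np.float32), pad, mode='edge')
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += d[dy:dy + h, dx:dx + w]
    return out / (k * k)

def reshape_range(depth, gamma=2.0):
    """Reshape the dynamic range: gamma > 1 expands the high (near)
    depth values and compresses the low (far) ones."""
    d = depth.astype(np.float32) / 255.0
    return (d ** gamma) * 255.0

step = np.zeros((8, 8), dtype=np.uint8)
step[:, 4:] = 255                        # sharp depth discontinuity
smoothed = smooth_depth(step, k=3)       # the edge becomes a gradual ramp
```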
  • 23.
        • Butterflies 2D plus depth
        • Butterflies 3D
        • AI broken 2D plus depth
  • 24. Lenticular lenses
    • Basics of 3D viewing: two images, one for the left eye and one for the right eye.
    • Lenticular lens technology:
      • Both images are projected in different directions, so that each one corresponds to one eye.
  • 25.
    • The lens array generates a parallax difference
    • A series of viewing spots with transitions between them
  • 26.
    • The technology presented here is one of the most developed for 3D home viewing and is predicted to become accessible to the public in the coming years.
    • The capturing and processing techniques used are flexible enough to fit the display receiver.
    • Any 2D scene can be converted into a 3D display.
  • 27.
    • Sources:
      • SIGGRAPH ’99 course notes by Leonard McMillan
      • “A 3D-TV Approach Using Depth-Image-Based Rendering (DIBR)” by Christoph Fehn
      • “Distance Dependent Depth Filtering in 3D Warping for 3DTV” by Ismaël Daribo, Christophe Tillier and Béatrice Pesquet-Popescu
      • Questions?