
VR2.0: Making Virtual Reality Better Than Reality?

  1. 1. Making Virtual Reality better than Reality? Gordon Wetzstein Stanford University IS&T Electronic Imaging 2017 www.computationalimaging.org
  2. 2. Personal Computer e.g. Commodore PET 1983 Laptop e.g. Apple MacBook Smartphone e.g. Google Pixel AR/VR e.g. Microsoft Hololens ???
  3. 3. A Brief History of Virtual Reality 1838 1968 2012-2017 Stereoscopes Wheatstone, Brewster, … VR & AR Ivan Sutherland VR explosion Oculus, Sony, HTC, MS, … Nintendo Virtual Boy 1995 VR 2.0
  4. 4. Where we are now (iFixit teardown)
  5. 5. Magnified Display: 1/d + 1/d' = 1/f (display at distance d, virtual image at d', lens focal length f)
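The magnifier relation on this slide can be checked numerically. A minimal sketch, assuming an illustrative 40 mm lens with the display placed just inside its focal length (neither value is from the slide):

```python
def virtual_image_distance(d, f):
    """Thin-lens relation from the slide: 1/d + 1/d' = 1/f.
    With the display inside the focal length (d < f), d' comes out
    negative, i.e. a magnified virtual image in front of the viewer."""
    return 1.0 / (1.0 / f - 1.0 / d)

f = 0.040   # lens focal length in meters (illustrative)
d = 0.038   # display-to-lens distance, just inside f
d_prime = virtual_image_distance(d, f)   # -0.76 m: virtual image 76 cm away
magnification = abs(d_prime) / d         # ~20x apparent magnification
```

Moving d even slightly changes the virtual image depth by a large amount, which is the lever that the adaptive-focus designs on later slides exploit.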
  6. 6. Real World: Vergence & Accommodation Match!
  7. 7. Current VR Displays: Vergence & Accommodation Mismatch for people with normal vision
  8. 8. How Many People Have Normal Vision? Presbyopia: 43% at age 40, 68% at age 80+ [Katz et al. 1997]; Hyperopia: 25% [Krachmer et al. 2005]; Myopia: 41.6% [Vitale et al. 2009]. All numbers for the US population.
  9. 9. Nearsightedness & Farsightedness: focal range (range of clear vision) spans 4D (25cm) to optical infinity for normal vision; shifted or shortened for nearsighted/myopic, farsighted/hyperopic, and presbyopic eyes. Modified from Pamplona et al., Proc. SIGGRAPH 2010
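Distances here and on later slides are quoted in diopters (D), the reciprocal of the distance in meters; a tiny helper makes the 4D ↔ 25 cm mapping explicit:

```python
def to_diopters(distance_m):
    """Optical demand of a target: 0 D at optical infinity, 4 D at 25 cm."""
    return 0.0 if distance_m == float('inf') else 1.0 / distance_m

def to_meters(power_d):
    """Inverse mapping; 0 D maps back to optical infinity."""
    return float('inf') if power_d == 0 else 1.0 / power_d
```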
  10. 10. Computational Near-eye Displays
  11. 11. • Q1: Can computational displays effectively replace glasses in VR/AR? • Q2: How to address the vergence-accommodation conflict for users of different ages? • Q3: What are (in)effective near-eye display technologies? possible solutions: gaze-contingent focus, monovision, multiplane, light field displays, …
  13. 13. Magnified Display (display + lens, fixed focus): 1/d + 1/d' = 1/f
  14. 14. Adaptive Focus (magnified display, display + lens): actuator → vary d'; 1/d + 1/d' = 1/f
  15. 15. Adaptive Focus (magnified display, display + lens): focus-tunable lens → vary f; 1/d + 1/d' = 1/f
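The two adaptive-focus strategies above solve the same thin-lens relation for different unknowns. A sketch with a virtual image placed at depth z in front of the viewer (d' = -z; focal length and distances are illustrative, not from the slides):

```python
def display_distance_for(z, f):
    """Vary d (mechanical actuator), lens fixed at focal length f:
    1/d = 1/f + 1/z places the virtual image z meters away."""
    return 1.0 / (1.0 / f + 1.0 / z)

def focal_length_for(z, d):
    """Vary f (focus-tunable lens), display fixed at distance d:
    1/f = 1/d - 1/z."""
    return 1.0 / (1.0 / d - 1.0 / z)

f = 0.040
d_far  = display_distance_for(2.00, f)   # image at 2 m   (0.5 D)
d_near = display_distance_for(0.25, f)   # image at 25 cm (4 D)
```

Note how small the mechanical travel is: for this hypothetical 40 mm lens, sweeping the image from 2 m to 25 cm moves the display by under 5 mm.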
  16. 16. Adaptive Focus - History • M. Heilig “Sensorama”, 1962 (US Patent #3,050,870) • P. Mills, H. Fuchs, S. Pizer “High-Speed Interaction On A Vibrating-Mirror 3D Display”, SPIE 0507 1984 • S. Shiwa, K. Omura, F. Kishino “Proposal for a 3-D display with accommodative compensation: 3DDAC”, JSID 1996 • S. McQuaide, E. Seibel, J. Kelly, B. Schowengerdt, T. Furness “A retinal scanning display system that produces multiple focal planes with a deformable membrane mirror”, Displays 2003 • S. Liu, D. Cheng, H. Hua “An optical see-through head mounted display with addressable focal planes”, Proc. ISMAR 2008 manual focus adjustment Heilig 1962 automatic focus adjustment Mills 1984 deformable mirrors & lenses McQuaide 2003, Liu 2008
  17. 17. Padmanaban et al., PNAS 2017
  24. 24. at ACM SIGGRAPH 2016 EyeNetra.com
  25. 25. at ACM SIGGRAPH 2016 participants of the study, 152 total EyeNetra.com
  26. 26. Participants - Prescription Padmanaban et al., PNAS 2017 n = 70, ages 21-64
  27. 27. How sharp is the target? (blurry, medium, sharp) Is the target fused? (yes, no) 4D (0.25m) 3D (0.33m) 2D (0.50m) 1D (1m) Four simulated distances Task
  28. 28. Results - Sharpness: relative sharpness (blurry -1, medium 0, sharp 1) vs. distance (1D/1m far to 4D/0.25m near), VR uncorrected vs. VR corrected. Padmanaban et al., PNAS 2017
  31. 31. Results - Sharpness: means of 0.63 and 0.60 (VR corrected, normal correction). Padmanaban et al., PNAS 2017
  32. 32. Results - Fusion: proportion fused (0-1) vs. distance (1D/1m to 4D/0.25m), VR uncorrected vs. VR corrected. Padmanaban et al., PNAS 2017
  34. 34. Computational Near-eye Displays • Q1: Can computational displays effectively replace glasses in VR/AR? • Q2: How to address the vergence-accommodation conflict for users of different ages? • Q3: What are (in)effective near-eye display technologies? possible solutions: gaze-contingent focus, monovision, light field displays, …
  35. 35. vergence accommodation Conventional Stereo / VR Display
  36. 36. • Visual discomfort (eye tiredness & eyestrain) after ~20 minutes of stereoscopic depth judgments (Hoffman et al. 2008; Shibata et al. 2011) • Degrades visual performance in terms of reaction times and acuity for stereoscopic vision (Hoffman et al. 2008; Konrad et al. 2016; Johnson et al. 2016) Consequences of Vergence-Accommodation Conflict
  37. 37. vergence accommodation Removing VAC with Adaptive Focus
  38. 38. Follow the target with your eyes 4D (0.25m) 0.5D (2m) Task
  39. 39. Accommodative Response (relative distance [D] vs. time [s]): stimulus and accommodation traces; n = 59, mean gain = 0.29. Padmanaban et al., PNAS 2017
  41. 41. Accommodative Response (relative distance [D] vs. time [s]): stimulus and accommodation traces; n = 24, mean gain = 0.77. Padmanaban et al., PNAS 2017
  43. 43. Presbyopia: nearest focus distance (from 0D/∞ up to 16D/6cm) vs. age (8-72 years). Duane, 1912
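Duane's curve can be approximated in closed form. Hofstetter's classic linear fits to such data (an outside reference, not on the slide) estimate accommodative amplitude from age; the nearest focus distance is then its reciprocal:

```python
def accommodative_amplitude(age_years):
    """Hofstetter's average-case fit (diopters): A = 18.5 - 0.3 * age,
    clamped at zero. An approximation to Duane-style data."""
    return max(18.5 - 0.3 * age_years, 0.0)

def nearest_focus_m(age_years):
    """Nearest clear-vision distance (m) for a distance-corrected eye."""
    a = accommodative_amplitude(age_years)
    return float('inf') if a == 0.0 else 1.0 / a

# e.g. ~11 D at age 25 (near point ~9 cm), but only ~2 D at 55 (near point ~0.5 m)
```

This is why the later slides split results by age: past roughly 50, an adaptive-focus display cannot drive accommodation that no longer exists.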
  44. 44. Presbyopia
  45. 45. Padmanaban et al., PNAS 2017 Do Presbyopes Benefit from Dynamic Focus? Gain Age
  46. 46. Padmanaban et al., PNAS 2017 Do Presbyopes Benefit from Dynamic Focus? Gain Age conventional
  47. 47. Padmanaban et al., PNAS 2017 Do Presbyopes Benefit from Dynamic Focus? Gain Age conventional dynamic
  48. 48. Padmanaban et al., PNAS 2017 Do Presbyopes Benefit from Dynamic Focus? Gain Age conventional dynamic Response for Physical Stimulus Heron & Charman 2004
  49. 49. Age-dependent Fusion: percent fused, far vs. near. Padmanaban et al., PNAS 2017
  52. 52. Age-dependent Sharpness: relative sharpness, far vs. near. Padmanaban et al., PNAS 2017
  55. 55. • Q1: Can computational displays effectively replace glasses in VR/AR? • Q2: How to address the vergence-accommodation conflict for users of different ages? • Q3: What are (in)effective near-eye display technologies? possible solutions: gaze-contingent focus, monovision, multiplane, light field displays, …
  56. 56. Gaze-contingent Focus • non-presbyopes: adaptive focus is like the real world, but needs eye tracking! (diagram: HMD lens, microdisplay, virtual image, eye tracking) Padmanaban et al., PNAS 2017
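Gaze-contingent focus needs the fixation depth, which an eye tracker can estimate from binocular vergence. A geometric sketch (IPD and display distance are illustrative, and real trackers are far noisier than this idealized triangulation):

```python
import math

def fixation_depth_m(ipd_m, vergence_deg):
    """Depth of the binocular fixation point: two eyes ipd_m apart,
    gaze rays converging with total angle vergence_deg."""
    return (ipd_m / 2.0) / math.tan(math.radians(vergence_deg) / 2.0)

def lens_power_d(display_d_m, depth_m):
    """Tunable-lens power (diopters) putting the virtual image at the
    fixated depth z: 1/f = 1/d - 1/z, i.e. d' = -z (thin lens, as on slide 5)."""
    return 1.0 / display_d_m - 1.0 / depth_m
```

In a gaze-contingent loop the tracker's vergence estimate feeds `fixation_depth_m`, and the resulting depth sets the tunable lens each frame.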
  57. 57. Gaze-contingent Focus Padmanaban et al., PNAS 2017
  60. 60. at ACM SIGGRAPH 2016
  61. 61. Gaze-contingent Focus – User Preference Padmanaban et al., PNAS 2017
  62. 62. Monovision VR Konrad et al., SIGCHI 2016; Johnson et al., Optics Express 2016; Padmanaban et al., PNAS 2017
  63. 63. Monovision VR Konrad et al., SIGCHI 2016; Johnson et al., Optics Express 2016; Padmanaban et al., PNAS 2017 • monovision did not drive accommodation more than conventional • visually comfortable for most; particularly uncomfortable for some users
  64. 64. Multiplane VR Displays • Rolland J, Krueger M, Goon A (2000) Multifocal planes head-mounted displays. Applied Optics 39 • Akeley K, Watt S, Girshick A, Banks M (2004) A stereo display prototype with multiple focal distances. ACM Trans. Graph. (SIGGRAPH) • Waldkirch M, Lukowicz P, Tröster G (2004) Multiple imaging technique for extending depth of focus in retinal displays. Optics Express • Schowengerdt B, Seibel E (2006) True 3-d scanned voxel displays using single or multiple light sources. JSID • Liu S, Cheng D, Hua H (2008) An optical see-through head mounted display with addressable focal planes. Proc. ISMAR • Love GD et al. (2009) High-speed switchable lens enables the development of a volumetric stereoscopic display. Optics Express • … many more ... idea introduced Rolland et al. 2000 benchtop prototype Akeley 2004 near-eye display prototype Liu 2008, Love 2009
  66. 66. Light Field Stereoscope Huang et al., SIGGRAPH 2015
  67. 67. Light Field Stereoscope: backlight, thin spacer & 2nd panel (6mm), magnifying lenses, LCD panel. Huang et al., SIGGRAPH 2015
  68. 68. Near-eye Light Field Displays Idea: project multiple different perspectives into different parts of the pupil!
  69. 69. Target Light Field Input: 4D light field for each eye
  70. 70. Multiplicative Two-layer Modulation Input: 4D light field for each eye
  71. 71. Multiplicative Two-layer Modulation Input: 4D light field for each eye
  72. 72. Multiplicative Two-layer Modulation Input: 4D light field for each eye
  73. 73. Multiplicative Two-layer Modulation. Reconstruction for layer t1 (Tensor Displays, Wetzstein et al. 2012). Input: 4D light field for each eye
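The two-layer reconstruction above is computed by nonnegative factorization in the Tensor Displays framework. A toy sketch of the multiplicative update style of solver, on a rank-1 2D slice (the real method factorizes the full 4D light field with the display geometry folded in; this only conveys the flavor):

```python
import numpy as np

def rank1_multiplicative(L, iters=50, eps=1e-12):
    """Factor a nonnegative matrix L[x, u] ~ front[x] * rear[u] with
    Lee-Seung-style multiplicative updates (toy stand-in for the 4D case)."""
    rng = np.random.default_rng(0)
    front = rng.random(L.shape[0]) + eps
    rear = rng.random(L.shape[1]) + eps
    for _ in range(iters):
        front *= (L @ rear) / (front * (rear @ rear) + eps)
        rear *= (L.T @ front) / (rear * (front @ front) + eps)
    return front, rear

# A separable (rank-1) target light field is recovered exactly, up to scale.
target = np.outer([0.2, 0.5, 1.0], [1.0, 0.8, 0.4, 0.1])
front, rear = rank1_multiplicative(target)
```

Multiplicative updates keep the layer patterns nonnegative by construction, which matters physically: LCD transmittances cannot go below zero.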
  74. 74. Traditional HMDs (no focus cues) vs. the Light Field Stereoscope. Huang et al., SIGGRAPH 2015
  78. 78. Vision-correcting Display: printed transparency, iPod Touch prototype. Huang et al., SIGGRAPH 2014
  79. 79. prototype 300 dpi or higher Huang et al., SIGGRAPH 2014
  80. 80. Diffraction in Multilayer Light Field Displays Wetzstein et al., SIGGRAPH 2011 Lanman et al., SIGGRAPH Asia 2011 Wetzstein et al., SIGGRAPH 2012 Maimone et al., Trans. Graph. 2013 … Hirsch et al., SIGGRAPH 2014 No diffraction artifacts with LCoS blur!
  81. 81. Summary • focus cues in VR/AR are challenging • adaptive focus can correct for refractive errors (myopia, hyperopia) • gaze-contingent focus gives natural focus cues for non-presbyopes, but requires eye tracking • presbyopes require a fixed focal plane with correction • multiplane displays require very high-speed microdisplays • monovision has not demonstrated significant improvements • light field displays may be the “ultimate” display → need to solve the “diffraction problem”
  82. 82. Making Virtual Reality Better Than Reality? • focus cues in VR/AR are challenging • adaptive focus can correct for refractive errors (myopia, hyperopia) • gaze-contingent focus gives natural focus cues for non-presbyopes, but requires eye tracking • presbyopes require a fixed focal plane with correction, better than reality! • multiplane displays require very high-speed microdisplays • monovision has not demonstrated significant improvements • light field displays may be the “ultimate” display → need to solve the “diffraction problem”
  83. 83. VR/AR = Frontier of Engineering • Focus cues / visual accessibility • Vestibular-visual conflict (motion sickness) • AR • occlusions • aesthetics / form factor • battery life • heat • wireless operation • low-power computer vision • registration of physical / virtual world and eyes • consistent lighting • scanning real world • VAC more important • display contrast & brightness • fast, embedded GPUs • …
  84. 84. Capturing and Sharing Experiences
  85. 85. It’s Not About Technology but Experiences!
  86. 86. Facebook’s Surround 360 RAW Data: 17 Gb/sec. Compute time: days to weeks on a conventional computer, minutes to hours in a data center
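The quoted raw rate makes the compute-time gap plausible: even a short capture produces an enormous amount of input. A back-of-the-envelope sketch (capture length is hypothetical; units taken as quoted on the slide):

```python
RATE_PER_SEC = 17        # raw data rate from the slide, in Gb/sec as quoted
capture_minutes = 10     # hypothetical capture length
raw_total = RATE_PER_SEC * capture_minutes * 60   # 10,200 Gb of raw data
# on the order of ~10 TB if Gb here means gigabytes -- enough that stitching
# time is dominated by sheer data volume, not per-frame cost
```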
  88. 88. Konrad et al., arxiv 2017
  92. 92. Conclusions: Advancing AR/VR technology requires a deep understanding of human vision, optics, signal processing, computation, and more. Technology alone is not enough – engineer experiences!
  93. 93. Stanford EE 267
  94. 94. Stanford Computational Imaging Lab Light Field Displays Time-of-Flight Imaging Computational Microscopy Image Optimization Light Field Cameras Near-eye Displays
  95. 95. Stanford Computational Imaging Lab Open Lab this Friday (2/3) 10am-3pm at Stanford Please email Helen Lin (helenlin@stanford.edu) for more details! - "The Light Field Stereoscope" (demo), R. Konrad - "Gaze-contingent and Varifocal Near-eye Displays" (demo), N. Padmanaban - "Monovision Near-eye Displays" (demo), R. Konrad - "Saliency in VR: How do People Explore Virtual Environments?" (poster and demo), V. Sitzmann - "Accommodation-invariant Computational Near-eye Displays" (demo), R. Konrad - "Depth-dependent Visual Anchoring for Reducing Motion Sickness in VR", N. Padmanaban - "ProxImaL: Efficient Image Optimization using Proximal Algorithms" (poster), F. Heide - "Dirty Pixels: Optimizing Image Classification Architectures for Raw Sensor Data" (poster), S. Diamond - "Vortex: Live Cinematic Virtual Reality" (demo), R. Konrad - "Transient Imaging with Single Photon Detectors" (poster), M. O'Toole - "Robust Non-line-of-sight Imaging with Single Photon Detectors" (poster), F. Heide - "Variable Aperture Light Field Photography" (poster), J. Chang - "Computational Time-of-Flight Photography" (poster), F. Heide - "Wide Field-of-View Monocentric Light Field Imaging" (poster and demo), D. Dansereau - "Hacking the Vive Lighthouse - Arduino-based Positional Tracking in VR with Low-cost Components" (demo), K.
  96. 96. Acknowledgements Near-eye Displays • Robert Konrad (Stanford) • Nitish Padmanaban (Stanford) • Fu-Chung Huang (NVIDIA) • Emily Cooper (Dartmouth College) Saliency in VR • Vincent Sitzmann (Stanford) • Diego Gutierrez (U. Zaragoza) • Ana Serrano (U. Zaragoza) • Maneesh Agrawala (Stanford) Spinning VR Camera • Robert Konrad (Stanford) • Donald Dansereau (Stanford) Other • Wolfgang Heidrich (UBC/KAUST) • Ramesh Raskar (MIT/Facebook) • Douglas Lanman (Oculus) • Matt Hirsch (Lumii) • Matthew O’Toole (Stanford) • Felix Heide (Stanford)
  97. 97. Gordon Wetzstein Computational Imaging Lab Stanford University stanford.edu/~gordonwz www.computationalimaging.org
