Making Virtual Reality better than Reality?
Gordon Wetzstein
Stanford University
IS&T Electronic Imaging 2017
www.computationalimaging.org
Personal Computer (e.g. Commodore PET 1983) → Laptop (e.g. Apple MacBook) → Smartphone (e.g. Google Pixel) → AR/VR (e.g. Microsoft HoloLens) → ???
A Brief History of Virtual Reality
• 1838: Stereoscopes (Wheatstone, Brewster, …)
• 1968: VR & AR (Ivan Sutherland)
• 1995: Nintendo Virtual Boy
• 2012-2017: VR explosion / VR 2.0 (Oculus, Sony, HTC, MS, …)
Where we are now
IFIXIT teardown
Magnified Display
Thin-lens equation: 1/d + 1/d' = 1/f
(d: display-to-lens distance, d': distance to the magnified virtual image, f: focal length of the lens)
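A worked example with assumed numbers (illustrative only, not from the slides): a lens with focal length f = 40 mm and the display placed just inside the focal length at d = 38 mm gives

$$\frac{1}{d'} = \frac{1}{f} - \frac{1}{d} = \frac{1}{0.040\,\mathrm{m}} - \frac{1}{0.038\,\mathrm{m}} \approx -1.3\,\mathrm{D} \quad\Rightarrow\quad d' \approx -0.76\,\mathrm{m},$$

i.e., a magnified virtual image roughly 0.76 m in front of the eye; the negative sign marks a virtual image on the same side of the lens as the display.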
Real World: Vergence & Accommodation Match!
Current VR Displays: Vergence & Accommodation Mismatch (for people with normal vision)
How Many People Have Normal Vision? (all numbers for the US population)
• Presbyopia: 43% at age 40, 68% at age 80+ [Katz et al. 1997]
• Hyperopia: 25% [Krachmer et al. 2005]
• Myopia: 41.6% [Vitale et al. 2009]
Nearsightedness & Farsightedness
[Figure: focal range (range of clear vision) on an axis from 4D / 25 cm to optical infinity, for normal, nearsighted/myopic, farsighted/hyperopic, and presbyopic eyes. Modified from Pamplona et al., Proc. SIGGRAPH 2010]
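Focal ranges like these are usually expressed in diopters, the reciprocal of the distance in meters. A minimal sketch of that conversion (the specific near/far points below are assumed for illustration, not measured values):

```python
def diopters(distance_m):
    """Convert a focus distance in meters to diopters (optical infinity -> 0 D)."""
    return 0.0 if distance_m == float("inf") else 1.0 / distance_m

# Illustrative focal ranges as (near point, far point) in meters -- assumed values.
focal_ranges = {
    "normal":     (0.25, float("inf")),   # 4 D ... 0 D
    "myopic":     (0.20, 0.50),           # far point pulled in from infinity
    "presbyopic": (1.00, float("inf")),   # near point recedes with age
}
for eye, (near, far) in focal_ranges.items():
    print(f"{eye:11s}: clear vision from {diopters(far):.1f} D to {diopters(near):.1f} D")
```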
Computational Near-eye Displays
• Q1: Can computational displays effectively replace glasses
in VR/AR?
• Q2: How to address the vergence-accommodation conflict
for users of different ages?
• Q3: What are (in)effective near-eye display technologies?
possible solutions: gaze-contingent focus, monovision,
multiplane, light field displays, …
Magnified Display
Display
Lens
Fixed Focus
1/d + 1/d' = 1/f
(d: display distance, d': virtual image distance, f: focal length; all fixed)
Adaptive Focus
Magnified Display
Display
Lens
1/d + 1/d' = 1/f
actuator → vary d'
Adaptive Focus
Magnified Display
Display
Lens
focus-tunable lens → vary f
1/d + 1/d' = 1/f
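Both adaptive-focus strategies follow directly from the thin-lens equation: to place the magnified virtual image at a desired distance d', either move the display relative to the lens (vary d) or change the lens power (vary f). A minimal sketch of that bookkeeping, with illustrative numbers and function names (not taken from any particular headset):

```python
def lens_power_for_target(d_m, d_virtual_m):
    """Lens power f (diopters) so that a display at distance d_m forms a
    virtual image at d_virtual_m, using the thin-lens equation 1/d + 1/d' = 1/f."""
    return 1.0 / d_m + 1.0 / d_virtual_m

def display_distance_for_target(f_diopters, d_virtual_m):
    """Display distance d (meters) so that a fixed lens of power f_diopters
    forms a virtual image at d_virtual_m."""
    return 1.0 / (f_diopters - 1.0 / d_virtual_m)

# Example: place the virtual image 2 m in front of the viewer (d' = -2 m, virtual image).
d_virtual = -2.0
print(lens_power_for_target(0.040, d_virtual))        # tunable lens: ~24.5 D for d = 4 cm
print(display_distance_for_target(25.0, d_virtual))   # actuator: d ≈ 0.0392 m for a 25 D lens
```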
Adaptive Focus - History
• M. Heilig “Sensorama”, 1962 (US Patent #3,050,870)
• P. Mills, H. Fuchs, S. Pizer “High-Speed Interaction On A Vibrating-Mirror 3D Display”, SPIE 0507 1984
• S. Shiwa, K. Omura, F. Kishino “Proposal for a 3-D display with accommodative compensation: 3DDAC”, JSID 1996
• S. McQuaide, E. Seibel, J. Kelly, B. Schowengerdt, T. Furness “A retinal scanning display system that produces multiple focal planes with
a deformable membrane mirror”, Displays 2003
• S. Liu, D. Cheng, H. Hua “An optical see-through head mounted display with addressable focal planes”, Proc. ISMAR 2008
manual focus adjustment
Heilig 1962
automatic focus adjustment
Mills 1984
deformable mirrors & lenses
McQuaide 2003, Liu 2008
Padmanaban et al., PNAS 2017
at ACM SIGGRAPH 2016
participants of the study, 152 total
EyeNetra.com
Participants - Prescription
Padmanaban et al., PNAS 2017
n = 70, ages 21-64
Task
How sharp is the target? (blurry, medium, sharp)
Is the target fused? (yes, no)
Four simulated distances: 4D (0.25 m), 3D (0.33 m), 2D (0.50 m), 1D (1 m)
Results - Sharpness
[Plot: relative sharpness (blurry = -1, medium = 0, sharp = +1) vs. simulated distance (1D / 1 m, far, to 4D / 0.25 m, near) for VR uncorrected, VR corrected, and normal correction; Mean = 0.63, Mean = 0.60]
Padmanaban et al., PNAS 2017
Results - Fusion
[Plot: proportion fused (0 to 1) vs. simulated distance (1D / 1 m, far, to 4D / 0.25 m, near) for VR uncorrected and VR corrected]
Padmanaban et al., PNAS 2017
Computational Near-eye Displays
• Q1: Can computational displays effectively replace glasses
in VR/AR?
• Q2: How to address the vergence-accommodation conflict
for users of different ages?
• Q3: What are (in)effective near-eye display technologies?
possible solutions: gaze-contingent focus, monovision,
light field displays, …
Conventional Stereo / VR Display
[Diagram: vergence distance varies with the content while accommodation stays at the display's fixed focal distance]
• Visual discomfort (eye tiredness & eyestrain) after ~20 minutes of
stereoscopic depth judgments (Hoffman et al. 2008; Shibata et al.
2011)
• Degrades visual performance in terms of reaction times and acuity
for stereoscopic vision (Hoffman et al. 2008; Konrad et al. 2016;
Johnson et al. 2016)
Consequences of Vergence-Accommodation Conflict
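The conflict can be quantified in diopters: the eyes converge to the simulated object distance while the headset optics hold accommodation at one fixed focal distance. A minimal sketch (the 1.5 m focal distance and the object distances are assumed for illustration):

```python
def vac_diopters(object_dist_m, display_focal_dist_m):
    """Vergence-accommodation conflict: dioptric difference between where the eyes
    converge (the simulated object) and where they must focus (the fixed virtual
    image plane of the headset optics)."""
    return abs(1.0 / object_dist_m - 1.0 / display_focal_dist_m)

# Headset with its virtual image fixed at 1.5 m (assumed value).
for obj in (0.25, 0.5, 1.0, 2.0, 10.0):
    print(f"object at {obj:5.2f} m: conflict = {vac_diopters(obj, 1.5):.2f} D")
```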
Removing VAC with Adaptive Focus
[Diagram: the display's focal distance is adjusted so accommodation matches vergence]
Task: follow the target with your eyes
Stimulus: steps between 4D (0.25 m) and 0.5D (2 m)
Padmanaban et al., PNAS 2017
Accommodative Response
[Plots: relative distance [D] vs. time [s], showing the stimulus and the measured accommodation; n = 59, mean gain = 0.29 in one condition and n = 24, mean gain = 0.77 in the other]
Padmanaban et al., PNAS 2017
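Accommodative gain is the ratio of the measured accommodation amplitude to the stimulus amplitude (1 = the eye fully follows the stimulus, 0 = no response). A minimal sketch of estimating such a gain from recorded traces (the toy data and the least-squares estimator are illustrative assumptions, not the analysis used in the paper):

```python
import numpy as np

def accommodative_gain(stimulus_d, response_d):
    """Estimate gain as the least-squares slope of the accommodation response
    (diopters) against the stimulus (diopters), after removing the means."""
    s = np.asarray(stimulus_d) - np.mean(stimulus_d)
    r = np.asarray(response_d) - np.mean(response_d)
    return float(np.dot(s, r) / np.dot(s, s))

# Toy traces: a 0.5 D <-> 4 D square-wave stimulus and a weaker accommodation response.
stimulus = np.array([0.5, 0.5, 4.0, 4.0, 0.5, 0.5, 4.0, 4.0])
response = np.array([1.2, 1.3, 2.3, 2.4, 1.3, 1.2, 2.3, 2.4])
print(f"gain ≈ {accommodative_gain(stimulus, response):.2f}")   # ≈ 0.31
```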
Presbyopia
[Plot: nearest focus distance (0D / ∞ to 16D / 6 cm) vs. age (8-72 years), after Duane, 1912; the accommodation range shrinks steadily with age]
Do Presbyopes Benefit from Dynamic Focus?
[Plot: accommodative gain vs. age for the conventional and dynamic conditions, compared with the response to a physical stimulus (Heron & Charman 2004)]
Padmanaban et al., PNAS 2017
Age-dependent Fusion
[Plot: percent fused, far vs. near, as a function of age]
Age-dependent Sharpness
[Plot: relative sharpness, far vs. near, as a function of age]
Padmanaban et al., PNAS 2017
• Q1: Can computational displays effectively replace glasses
in VR/AR?
• Q2: How to address the vergence-accommodation conflict
for users of different ages?
• Q3: What are (in)effective near-eye display technologies?
possible solutions: gaze-contingent focus, monovision,
multiplane, light field displays, …
Gaze-contingent Focus
• non-presbyopes: adaptive focus is like the real world, but needs eye tracking!
[Diagram: HMD with lens, microdisplay, virtual image, and eye tracking]
Padmanaban et al., PNAS 2017
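The control loop behind gaze-contingent focus is conceptually simple: estimate the fixation depth from the tracked gaze directions of both eyes, then drive the focus-tunable lens (or display actuator) so the virtual image sits at that depth. A minimal sketch under assumed names and values (the gaze vectors, interpupillary distance, and helper functions are illustrative, not a real eye-tracker API):

```python
import numpy as np

def vergence_distance(left_gaze, right_gaze, ipd_m=0.063):
    """Estimate the fixation distance (m) from the convergence angle between the
    two normalized gaze direction vectors; ipd_m is the interpupillary distance."""
    cos_ang = np.clip(np.dot(left_gaze, right_gaze), -1.0, 1.0)
    angle = np.arccos(cos_ang)            # total convergence angle in radians
    if angle < 1e-4:                      # (nearly) parallel gaze -> looking far away
        return float("inf")
    return (ipd_m / 2.0) / np.tan(angle / 2.0)

def target_focus_diopters(fixation_dist_m):
    """Set the display focus to the vergence distance (0 D at optical infinity)."""
    return 0.0 if np.isinf(fixation_dist_m) else 1.0 / fixation_dist_m

# Example: symmetric gaze converging on a point ~0.5 m straight ahead.
left  = np.array([ 0.063, 0.0, 1.0]); left  /= np.linalg.norm(left)
right = np.array([-0.063, 0.0, 1.0]); right /= np.linalg.norm(right)
dist = vergence_distance(left, right)
print(f"fixation ≈ {dist:.2f} m -> set focus to {target_focus_diopters(dist):.2f} D")
```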
at ACM SIGGRAPH 2016
Gaze-contingent Focus – User Preference
Padmanaban et al., PNAS 2017
Monovision VR
Konrad et al., SIGCHI 2016; Johnson et al., Optics Express 2016; Padmanaban et al., PNAS 2017
• monovision did not drive accommodation more than a conventional display
• visually comfortable for most, but particularly uncomfortable for some users
Multiplane VR Displays
• Rolland J, Krueger M, Goon A (2000) Multifocal planes head-mounted displays. Applied Optics 39
• Akeley K, Watt S, Girshick A, Banks M (2004) A stereo display prototype with multiple focal distances. ACM Trans. Graph. (SIGGRAPH)
• Waldkirch M, Lukowicz P, Tröster G (2004) Multiple imaging technique for extending depth of focus in retinal displays. Optics Express
• Schowengerdt B, Seibel E (2006) True 3-d scanned voxel displays using single or multiple light sources. JSID
• Liu S, Cheng D, Hua H (2008) An optical see-through head mounted display with addressable focal planes in Proc. ISMAR
• Love GD et al. (2009) High-speed switchable lens enables the development of a volumetric stereoscopic display. Optics Express
• … many more ...
idea introduced
Rolland et al. 2000
benchtop prototype
Akeley 2004
near-eye display prototype
Liu 2008, Love 2009
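Multiplane displays approximate continuous focus cues with a small number of fixed focal planes, distributing each pixel's intensity between the two planes that bracket its depth (linear dioptric blending, in the spirit of Akeley et al. 2004). A minimal sketch of that blending rule (the plane placement and function names are assumptions for illustration):

```python
import numpy as np

# Focal planes of the display, in diopters (assumed placement).
PLANES_D = np.array([0.2, 0.6, 1.2, 2.4])

def plane_weights(pixel_depth_m):
    """Split a pixel's intensity between the two focal planes bracketing its depth,
    weighting linearly in diopters."""
    d = 1.0 / pixel_depth_m
    w = np.zeros(len(PLANES_D))
    if d <= PLANES_D[0]:
        w[0] = 1.0
    elif d >= PLANES_D[-1]:
        w[-1] = 1.0
    else:
        i = np.searchsorted(PLANES_D, d) - 1          # farther (lower-diopter) bracketing plane
        t = (d - PLANES_D[i]) / (PLANES_D[i + 1] - PLANES_D[i])
        w[i], w[i + 1] = 1.0 - t, t
    return w

print(plane_weights(1.25))   # 0.8 D: blended between the 0.6 D and 1.2 D planes
```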
Light Field Cameras / Light Field Stereoscope
Huang et al., SIGGRAPH 2015
[Prototype components: backlight, thin spacer & 2nd panel (6 mm), magnifying lenses, LCD panel]
Light Field Stereoscope
Huang et al., SIGGRAPH 2015
Near-eye Light Field Displays
Idea: project multiple different perspectives into different parts of the pupil!
Target Light Field: input is a 4D light field for each eye
Multiplicative Two-layer Modulation: the emitted light field is the product of two stacked attenuation layers; the layer patterns are computed by an iterative reconstruction (Tensor Displays, Wetzstein et al. 2012)
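In a two-layer attenuation display, each light-field ray is reproduced as the product of the rear-panel pixel and the front-panel pixel it crosses; the panel images are found by a nonnegative multiplicative factorization of the target light field, in the spirit of the tensor-display update rules. The sketch below is an illustrative rank-1, NMF-style toy for a tiny light field, not the published solver:

```python
import numpy as np

def factor_two_layer(L, iters=200, eps=1e-9):
    """Rank-1 nonnegative factorization L[a, b] ≈ rear[a] * front[b]: a ray indexed
    by the rear-panel pixel a and front-panel pixel b it crosses is approximated by
    the product of the two panel transmittances (multiplicative NMF updates)."""
    rng = np.random.default_rng(0)
    rear = rng.random(L.shape[0]) + eps
    front = rng.random(L.shape[1]) + eps
    for _ in range(iters):
        rear *= (L @ front) / (rear * (front @ front) + eps)
        front *= (L.T @ rear) / (front * (rear @ rear) + eps)
    return rear, front

# Toy target: an exactly rank-1, nonnegative 8x8 "light field" (illustrative only).
a = np.linspace(0.2, 1.0, 8)
L = np.outer(a, a[::-1])
rear, front = factor_two_layer(L)
print("max abs reconstruction error:", np.max(np.abs(np.outer(rear, front) - L)))
```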
Traditional HMDs - No Focus Cues vs. The Light Field Stereoscope
Huang et al., SIGGRAPH 2015
Vision-correcting Display
iPod Touch prototype with printed transparency
Huang et al., SIGGRAPH 2014
prototype
300 dpi or higher
Huang et al., SIGGRAPH 2014
Diffraction in Multilayer Light Field Displays
Wetzstein et al., SIGGRAPH 2011
Lanman et al., SIGGRAPH Asia 2011
Wetzstein et al., SIGGRAPH 2012
Maimone et al., Trans. Graph. 2013
…
Hirsch et al, SIGGRAPH 2014
No diffraction artifacts with LCoS
[Image annotation: diffraction blur in the multilayer LCD prototype]
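The underlying issue is that the rear LCD's pixel grid acts as a diffraction grating: light spreads by roughly sin θ = λ/p per order for pixel pitch p, so the front panel is blurred when seen through it. A small worked estimate (the pitch is an assumed value; the 6 mm panel spacing echoes the Light Field Stereoscope prototype):

```python
import math

wavelength = 550e-9     # green light, ~550 nm
pitch      = 50e-6      # assumed LCD pixel pitch: 50 µm (~500 ppi)
gap        = 6e-3       # panel spacing: 6 mm (as in the stereoscope prototype)

theta  = math.asin(wavelength / pitch)    # first diffraction order
spread = gap * math.tan(theta)            # lateral blur at the front panel
print(f"diffraction angle ≈ {math.degrees(theta):.2f}°, "
      f"blur ≈ {spread * 1e6:.0f} µm ≈ {spread / pitch:.1f} front-panel pixels")
```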
Summary
• focus cues in VR/AR are challenging
• adaptive focus can correct for refractive errors (myopia, hyperopia)
• gaze-contingent focus gives natural focus cues for non-presbyopes, but requires eye tracking
• presbyopes require a fixed focal plane with correction
• multiplane displays require very high-speed microdisplays
• monovision has not demonstrated significant improvements
• light field displays may be the “ultimate” display → need to solve the “diffraction problem”
Making Virtual Reality Better Than Reality?
• focus cues in VR/AR are challenging
• adaptive focus can correct for refractive errors (myopia, hyperopia)
• gaze-contingent focus gives natural focus cues for non-presbyopes, but requires eye tracking
• presbyopes require a fixed focal plane with correction, better than reality!
• multiplane displays require very high-speed microdisplays
• monovision has not demonstrated significant improvements
• light field displays may be the “ultimate” display → need to solve the “diffraction problem”
VR/AR = Frontier of Engineering
• Focus cues / visual accessibility
• Vestibular-visual conflict (motion sickness)
• AR
• occlusions
• aesthetics / form factor
• battery life
• heat
• wireless operation
• low-power computer vision
• registration of physical / virtual world and eyes
• consistent lighting
• scanning real world
• VAC more important
• display contrast & brightness
• fast, embedded GPUs
• …
Capturing and Sharing Experiences
It’s Not About Technology but Experiences!
Facebook’s Surround 360
RAW Data: 17 Gb/sec
Compute time: days to weeks on a conventional computer, minutes to hours in a data center
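A back-of-the-envelope sense of that data rate (assuming "Gb" means gigabits):

```python
rate_gbit_per_s = 17                                  # RAW data rate from the slide
gbytes_per_hour = rate_gbit_per_s / 8 * 3600          # bits -> bytes, seconds -> hours
print(f"≈ {gbytes_per_hour:,.0f} GB (~{gbytes_per_hour / 1000:.1f} TB) per hour of capture")
```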
Konrad et al., arxiv 2017
Advancing AR/VR technology requires deep
understanding of human vision, optics, signal processing,
computation, and more.
Technology alone is not enough – engineer experiences!
Conclusions
Stanford EE 267
Stanford Computational Imaging Lab
Light Field Displays
Time-of-Flight Imaging
Computational
Microscopy
Image Optimization
Light Field Cameras
Near-eye Displays
Stanford Computational Imaging Lab
Open Lab this Friday (2/3) 10am-3pm at Stanford
Please email Helen Lin (helenlin@stanford.edu) for more details!
- "The Light Field Stereoscope" (demo), R. Konrad
- "Gaze-contingent and Varifocal Near-eye Displays" (demo), N. Padmanaban
- "Monovision Near-eye Displays" (demo), R. Konrad
- "Saliency in VR: How do People Explore Virtual Environments?" (poster and demo), V. Sitzmann
- "Accommodation-invariant Computational Near-eye Displays" (demo), R. Konrad
- "Depth-dependent Visual Anchoring for Reducing Motion Sickness in VR", N. Padmanaban
- "ProxImaL: Efficient Image Optimization using Proximal Algorithms" (poster), F. Heide
- "Dirty Pixels: Optimizing Image Classification Architectures for Raw Sensor Data" (poster), S. Diamond
- "Vortex: Live Cinematic Virtual Reality" (demo), R. Konrad
- "Transient Imaging with Single Photon Detectors" (poster), M. O'Toole
- "Robust Non-line-of-sight Imaging with Single Photon Detectors" (poster), F. Heide
- "Variable Aperture Light Field Photography" (poster), J. Chang
- "Computational Time-of-Flight Photography" (poster), F. Heide
- "Wide Field-of-View Monocentric Light Field Imaging" (poster and demo), D. Dansereau
- "Hacking the Vive Lighthouse - Arduino-based Positional Tracking in VR with Low-cost Components" (demo), K.
Acknowledgements
Near-eye Displays
• Robert Konrad (Stanford)
• Nitish Padmanaban (Stanford)
• Fu-Chung Huang (NVIDIA)
• Emily Cooper (Dartmouth College)
Saliency in VR
• Vincent Sitzmann (Stanford)
• Diego Gutierrez (U. Zaragoza)
• Ana Serrano (U. Zaragoza)
• Maneesh Agrawala (Stanford)
Spinning VR Camera
• Robert Konrad (Stanford)
• Donald Dansereau (Stanford)
Other
• Wolfgang Heidrich (UBC/KAUST)
• Ramesh Raskar (MIT/Facebook)
• Douglas Lanman (Oculus)
• Matt Hirsch (Lumii)
• Matthew O’Toole (Stanford)
• Felix Heide (Stanford)
Gordon Wetzstein
Computational Imaging Lab
Stanford University
stanford.edu/~gordonwz
www.computationalimaging.org
