Depth & Space: Presentation Transcript

  • 1. Depth & Space
    Becca Kennedy
    Perception in Real & Virtual Environments
    9/19/12
  • 2. Overview
    • Edges, lines, and texture elements must be interpreted in terms of 3D structure to understand the world
    • Observer must determine:
      ▫ Depth – distance of the surface from the observer
      ▫ Surface orientation – slant and tilt
    • Depth and surface orientation are recovered together
      ▫ 3D orientation determines the distances of object parts from the observer, and the distance of parts determines 3D orientation
  • 3. Overview
    • Slant – size of the angle between the observer’s line of sight and the surface normal
      ▫ Surface normal – a virtual line sticking out perpendicularly from the surface at that point
    • Tilt – the direction of the depth gradient relative to the frontal plane
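For reference, slant and tilt jointly fix the surface normal. One standard parameterization (a common convention, not spelled out on the slide):

```latex
% Unit surface normal in viewer-centered coordinates (z toward observer),
% parameterized by slant sigma and tilt tau:
\mathbf{n}(\sigma,\tau) = (\sin\sigma\cos\tau,\; \sin\sigma\sin\tau,\; \cos\sigma)
% sigma = 0: frontoparallel surface (normal points at the observer)
% tau: image-plane direction along which depth increases fastest
```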
  • 4. The Problem of Depth Perception
    • Depth perception from a 2D retinal image is ambiguous
  • 5. The Problem of Depth Perception – Heuristic Assumptions
    • The visual system implicitly makes heuristic assumptions about the nature of the world
    • Our visual system is fooled by 3D movies
      ▫ The visual system implicitly assumes that both eyes are looking at the same scene
      ▫ The different image presented to each eye is interpreted as depth
      ▫ But usually this heuristic is correct
  • 6. The Problem of Depth Perception – Marr’s 2.5-D Sketch
    • There are many independent processing modules computing depth information from separate sources
      ▫ Each module processes different kinds of information
    • The final common depth interpretation is expressed as a 2.5-D sketch
  • 7. Sources of Depth Information
    • Ocular information vs. optical information
      ▫ Ocular information arises from factors that depend on the state of the eyes themselves
      ▫ Optical information arises from the structure of the light entering the eyes
    • Binocular information vs. monocular information
    • Static information vs. dynamic information
  • 8. Ocular Information
    • Accommodation – the process through which the ciliary muscles in the eye control the optical focus of the lens by temporarily changing its shape
      ▫ Monocular cue
      ▫ Thick lens for close objects, thin lens for far objects
      ▫ Weak source of depth information, but used at close distances
  • 9. Ocular Information
    • Convergence – the extent to which the two eyes are turned inward to fixate an object
      ▫ Binocular cue
      ▫ Fixating on a close object results in a large convergence angle
      ▫ Fixating on a far object results in a small convergence angle
      ▫ The visual system uses the angle of eye convergence to determine the distance to the fixated point
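The trigonometry behind this cue, sketched in code. The symmetric-fixation geometry and the 63 mm interocular distance are assumed typical values, not slide content:

```python
import math

def fixation_distance(convergence_deg: float, interocular_m: float = 0.063) -> float:
    """Distance to a symmetrically fixated point from the convergence angle
    (the full angle between the two lines of sight). The 63 mm interocular
    distance is a typical adult value, assumed here for illustration."""
    half_angle = math.radians(convergence_deg) / 2.0
    return (interocular_m / 2.0) / math.tan(half_angle)

# A near fixation (large angle) vs. a far fixation (small angle):
print(round(fixation_distance(7.2), 2))  # ~0.50 m
print(round(fixation_distance(0.9), 2))  # ~4.01 m
```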
  • 10. Stereoscopic Information
    • Stereopsis is the process of perceiving the relative distance to objects based on their lateral displacement in the two retinal images
      ▫ This relative lateral displacement is binocular disparity
        ▪ Direction of binocular disparity provides info about which points are closer and which are farther than the fixated point
        ▪ Magnitude of binocular disparity provides information about how much closer or farther they are
    • Specifies ratios of distances to objects rather than simply which is farther and which is closer
  • 11. Corresponding Retinal Positions
    • Corresponding positions on the two retinae are positions that would coincide if the two foveae were superimposed by simple lateral displacement
      ▫ Binocular disparity occurs when a given point in the external world doesn’t project to corresponding positions
        ▪ Crossed disparity indicates that a point is closer than the fixated point
        ▪ Uncrossed disparity indicates that a point is farther away than the fixated point
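A small-angle sketch tying the sign of disparity (crossed vs. uncrossed) and its magnitude to distances. The formula is a standard approximation, and the interocular value is an assumption, not slide content:

```python
INTEROCULAR_M = 0.063  # typical adult interocular distance (assumed)

def binocular_disparity(fixation_m: float, point_m: float) -> float:
    """Relative disparity (radians) of a point at distance point_m while
    fixating a point at fixation_m, using the small-angle approximation
    disparity ~ I * (1/d_point - 1/d_fixation).
    Positive = crossed (point nearer than fixation);
    negative = uncrossed (point farther than fixation)."""
    return INTEROCULAR_M * (1.0 / point_m - 1.0 / fixation_m)

# Sign carries the direction, magnitude carries the amount:
print(binocular_disparity(2.0, 1.5))  # > 0: crossed, nearer
print(binocular_disparity(2.0, 3.0))  # < 0: uncrossed, farther
```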
  • 12. Corresponding Retinal Positions
    • The horopter is the set of environmental points that stimulate corresponding points on the two retinae
      ▫ Theoretical horopter – defined as the locus of points which make the same angle at the eyes
      ▫ Empirical horopter – defined by singleness of vision; larger than the theoretical horopter
    • Panum’s fusional area is the area around the horopter within which disparate images are perceptually fused, so we don’t see double images
      ▫ Points that lie outside Panum’s area create disparity that we experience as depth
  • 13. The Correspondence Problem
    • How does the visual system determine which features in one retinal image correspond to which features in the other?
    • For many years, theorists assumed that this problem was solved by a shape analysis of each left and right image that occurred before stereopsis
  • 14. The Correspondence Problem
    • The alternative possibility is that stereopsis occurs first
      ▫ Random dot stereograms
        ▪ When each image is viewed alone, the dots look random
        ▪ A shape-first theory would predict that stereoscopic depth perception of random-dot images would be impossible
        ▪ Random dot stereograms show that stereoscopic depth can be perceived without monocular shape information
  • 15. Computational Theories
    • Most dots in the left image have a corresponding dot in the right image
      ▫ The visual system needs to figure out which pairs of dots go together
    • The first Marr-Poggio algorithm (1976, 1977)
      ▫ Individual pixels in the left and right images are matched according to location and color
        ▪ Among these matches are the correct ones, which correspond to the visible portions of the actual surfaces in the real world
      ▫ Two heuristic constraints help provide the correct solution
        ▪ Surface opacity – only the nearest surface can be seen
        ▪ Surface continuity – the correct solution will tend to be one in which matches are close together in depth
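To make the two constraints concrete, here is a toy 1D cooperative-matching sketch in the spirit of the first Marr-Poggio algorithm. The weights, threshold, and binary update rule are ad hoc choices for illustration, not the published network:

```python
import numpy as np

def cooperative_stereo(left: np.ndarray, right: np.ndarray,
                       max_disp: int = 3, iters: int = 10) -> np.ndarray:
    """Toy 1D cooperative matcher: a support value for each
    (position, disparity) pair, updated so matches backed by
    same-disparity neighbors (continuity) win over competing matches
    at the same position (opacity/uniqueness)."""
    n = len(left)
    # Initial candidate matches: 1 where pixel values agree at that disparity.
    c = np.zeros((n, max_disp + 1))
    for d in range(max_disp + 1):
        for x in range(n - d):
            c[x, d] = float(left[x + d] == right[x])
    init = c.copy()
    for _ in range(iters):
        # Excitation: neighboring positions at the same disparity (continuity).
        excite = np.zeros_like(c)
        excite[1:, :] += c[:-1, :]
        excite[:-1, :] += c[1:, :]
        # Inhibition: competing disparities at the same position (opacity).
        inhibit = c.sum(axis=1, keepdims=True) - c
        c = ((init + excite - 2.0 * inhibit) > 0.5).astype(float)
    return c.argmax(axis=1)  # winning disparity per position

# Two "random-dot" rows: right is left shifted by 2 (with wraparound noise).
rng = np.random.default_rng(0)
left = rng.integers(0, 2, 40)
right = np.roll(left, -2)
print(cooperative_stereo(left, right))  # mostly 2s (edge positions may differ)
```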
  • 16. Edge-Based Algorithms
    • Marr and Poggio suggested a second algorithm in 1979
    • Differed from the first in the following ways:
      ▫ Edge-based matching – matches edges in the left and right images rather than pixels
      ▫ Multiple scales – the visual system first looks for corresponding edges at a large spatial scale, followed by more detailed matching at finer-grained levels
      ▫ Single-pass operation – noniterative; finds the best edge-based correspondence in a single pass through a multistage operation
  • 17. Multi-Orientation, Multi-Scale (MOMS) Filters
    • Jones and Malik (1990)
      ▫ A process of matching the vector representing a given position in one eye to each of the vectors representing laterally displaced positions in the other eye
        ▪ Specifies the most likely correspondence
      ▫ Better and more robust matches because MOMS vectors carry a lot of spatial information
        ▪ Compared to the outputs of single receptors or edge detectors
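A rough illustration of the vector-matching idea, with a small multi-scale bank of smoothed and derivative channels standing in for the full multi-orientation, multi-scale set (1D signals cannot carry orientation). The bank and the Euclidean matching rule are simplifications of mine, not Jones and Malik’s actual filters:

```python
import numpy as np

def filter_bank_responses(signal: np.ndarray, widths=(1, 2, 4)) -> np.ndarray:
    """Response vector per position from a toy multi-scale bank:
    a Gaussian-smoothed channel and its derivative at each width."""
    responses = []
    for w in widths:
        kernel = np.exp(-0.5 * (np.arange(-3 * w, 3 * w + 1) / w) ** 2)
        kernel /= kernel.sum()
        smoothed = np.convolve(signal, kernel, mode="same")
        responses.append(smoothed)
        responses.append(np.gradient(smoothed))  # derivative channel
    return np.stack(responses, axis=1)  # shape (n_positions, n_filters)

def match_disparity(left: np.ndarray, right: np.ndarray, max_disp: int = 4):
    """For each left position, pick the lateral shift whose response vector
    in the right image is closest (minimum Euclidean distance)."""
    fl, fr = filter_bank_responses(left), filter_bank_responses(right)
    best = np.zeros(len(left), dtype=int)
    for x in range(len(left)):
        dists = [np.linalg.norm(fl[x] - fr[x - d]) if x - d >= 0 else np.inf
                 for d in range(max_disp + 1)]
        best[x] = int(np.argmin(dists))
    return best

rng = np.random.default_rng(1)
left = rng.normal(size=60)
right = np.concatenate([left[2:], rng.normal(size=2)])  # shift of 2
print(match_disparity(left, right))  # mostly 2s away from the borders
```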
  • 18. Physiology of Stereoscopic Vision
    • Binocular depth cells
      ▫ Hubel and Wiesel (1962)
        ▪ Discovered cells in area V1 of the visual cortex that were sensitive to binocular stimulation
      ▫ Barlow, Blakemore, and Pettigrew (1967)
        ▪ Reported that some binocular cells in area V1 responded optimally to stimulation in disparate locations of the two retinae
    • To show that these cells are involved in depth perception, one must also demonstrate a connection between disparity and behavior
      ▫ Blake and Hirsch (1975)
        ▪ Reared cats so that their vision was alternated between the left and right eyes for 6 months
        ▪ These cats had few binocular neurons and were not able to use binocular disparity to perceive depth
      ▫ Recent brain-imaging experiments have shown that many different areas are activated by stimuli that create binocular disparity
        ▪ Depth perception involves many stages of processing that extend from primary visual cortex onward
  • 19. Environmental Depth Cues
  • 20. Dynamic Information
    • Motion parallax
      ▫ The differential motion of pairs of points due to their different depths relative to the fixation point
        ▪ Nearby objects move quickly; far-off objects appear stationary
    • Optic flow caused by a moving observer
      ▫ Relative to the fixation point…
        ▪ Points closer to the observer flow in the direction opposite the observer’s motion
        ▪ Points farther than the fixation point flow in the same direction as the observer’s motion
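A minimal model of the flow pattern described above, assuming lateral observer translation with pursuit of the fixation point. The small-angle formula is a standard approximation added here, not the slide’s own math:

```python
def retinal_velocity(depth_m: float, fixation_m: float,
                     observer_speed_mps: float) -> float:
    """Angular velocity (rad/s, small-angle model) of a static point while
    the observer translates sideways and keeps the eyes locked on a point
    at fixation_m. Negative = flows opposite the observer's motion;
    positive = flows with it."""
    return observer_speed_mps * (1.0 / fixation_m - 1.0 / depth_m)

# Fixating at 2 m while walking sideways at 1.5 m/s:
for depth in (1.0, 2.0, 4.0, 100.0):
    print(f"{depth:6.1f} m: {retinal_velocity(depth, 2.0, 1.5):+.3f} rad/s")
# 1 m flows opposite the motion (fast), the 2 m fixation point is still,
# 4 m and 100 m flow with the motion.
```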
  • 21. Dynamic Information
    • Another pattern of optic flow is optic expansion, or looming
      ▫ The fixated point is stationary on the retina
      ▫ Other points flow outward, faster with more distance from the fixation point
  • 22. Dynamic Information
    • Optic flow caused by moving objects
      ▫ Kinetic depth effect (KDE; Wallach & O’Connell, 1953) – the ability to perceive depth from object motion
      ▫ The visual system uses a rigidity heuristic
        ▪ Biased toward perceiving rigid motions rather than plastic motions
    • Accretion/deletion of texture
      ▫ The appearance and disappearance of texture behind a moving edge
  • 23. Pictorial Information
    • Convergence of parallel lines
    • Position relative to the horizon
  • 24. Pictorial Information
    • Relative size
    • Familiar size
      ▫ In a VE, if not enough depth cues are present, the observer begins to depend on retinal size (Kenyon, Sandin, Smith, Pawlicki, & DeFanti, 2007)
    • Texture gradients
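How familiar size could yield a metric distance under a pinhole model: a textbook relation, not worked out on the slide, with illustrative example values:

```python
import math

def distance_from_familiar_size(known_size_m: float,
                                angular_size_deg: float) -> float:
    """Familiar size as a metric cue: if we know how big an object really
    is, its visual angle fixes its distance via
    d = S / (2 * tan(theta / 2))."""
    return known_size_m / (2.0 * math.tan(math.radians(angular_size_deg) / 2.0))

# A door we "know" is about 2 m tall, subtending 5 degrees of visual angle:
print(round(distance_from_familiar_size(2.0, 5.0), 1))  # ~22.9 m
```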
  • 25. Pictorial Information – Edge Interpretation
    • Edge and contour interpretations
      ▫ E.g., occlusion or interposition – the blocking of light from a farther object by a nearer opaque object
      ▫ Edges provide relative rather than absolute depth information
      ▫ Available from virtually unlimited distances within the visible range
    • Vertex (edge intersection) classification
      ▫ Guzman’s (1968, 1969) program SEE attempted to interpret line drawings of simple configurations of blocks
        ▪ He developed a classification scheme for edge intersections (vertices): Ts, Ys, Ks, Xs, Ls, etc.
      ▫ Huffman and Clowes (1971) developed a complete catalog of the vertex types that arise in viewing simple trihedral angles from all possible viewpoints
  • 26. Pictorial Information – Edge Interpretation
    • Four types of edges:
      1. Orientation edges – places where there are discontinuities in surface orientation; where two different orientations meet along an edge
      2. Depth edges – places where there is a spatial discontinuity in depth between surfaces; places where one surface occludes another that extends behind it, with space between
      3. Illumination edges – places where there is a difference in the amount of light falling on a homogeneous surface; the edge of a shadow, highlight, or spotlight
      4. Reflectance edges – places where there is a change in the light-reflecting properties of the surface material; e.g., designs painted on a surface
  • 27. Pictorial Information – Edge Interpretation
    • Edge labels
      ▫ Two kinds of orientation edges
        ▪ Convex orientation edges are labeled with a +
        ▪ Concave orientation edges are labeled with a -
      ▫ Arrows indicate that the closer surface is on the right
  • 28. Pictorial Information – Edge Interpretation
  • 29. Pictorial Information – Edge Interpretation
    • Physical constraints
      ▫ Not all logically possible labelings are physically possible
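The constraint-filtering idea can be sketched as a catalog lookup: enumerate every logically possible labeling of a junction, then keep only those listed as physically realizable. The miniature catalog below is illustrative only; the real Huffman-Clowes tables specify exactly which labelings survive:

```python
from itertools import product

# Edge labels: '+' convex, '-' concave, '>' and '<' for the two
# directions of an occluding (depth) edge.
LABELS = ('+', '-', '>', '<')

# Stand-in catalog of physically realizable label pairs for an L junction.
# These entries are illustrative, not the published Huffman-Clowes catalog.
LEGAL_L_JUNCTIONS = {('>', '<'), ('<', '>'), ('+', '>'), ('<', '+'),
                     ('-', '<'), ('>', '-')}

logically_possible = list(product(LABELS, repeat=2))
physically_possible = [p for p in logically_possible if p in LEGAL_L_JUNCTIONS]
print(len(logically_possible), "logically possible labelings")
print(len(physically_possible), "survive the junction catalog")
```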
  • 30. Pictorial Information – Edge Interpretation
    • Extensions and generalizations
      ▫ Waltz (1975) extended the Huffman-Clowes analysis to include 11 types of edges, including shadows and “cracks” (orientation edges at 180-degree angles)
        ▪ Adding shadows makes interpretation more accurate because they provide further constraints
      ▫ Malik (1987) extended the analysis of edge labeling to curved objects
        ▪ A new depth-edge type, the extremal edge or limb (double arrow), occurs when a surface curves smoothly around to partly occlude itself
  • 31. Pictorial Information – Edge Interpretation
    • Extensions and generalizations
      ▫ Barrow and Tenenbaum’s (1978) analysis contained additional constraints:
        ▪ The smoothness assumption – if an occluding edge in the image is smooth, then so is the contour of the surface that produced it
        ▪ The general viewpoint assumption – small changes in viewpoint will not cause qualitative differences in the image
  • 32. Pictorial Information
    • Shading information
      ▫ Shading – variations in the amount of light reflected from a surface as a result of variations in the orientation of the surface relative to a light source
      ▫ Horn’s (1975, 1977) computational analysis
        ▪ Showed that percentage changes in image luminance are directly proportional to percentage changes in the orientation of the surface
      ▫ Humans are able to interpret surfaces with significantly specular characteristics, like glossy surfaces that reflect light more coherently than matte surfaces do
        ▪ How?
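Horn’s analysis is set in the matte (Lambertian) case, where luminance depends only on the angle between the surface normal and the light direction. A minimal sketch of that dependence; the albedo and light direction are arbitrary example values:

```python
import numpy as np

def lambertian_luminance(normal, light_dir, albedo=0.8):
    """Luminance of a matte (Lambertian) surface patch: proportional to
    the cosine between the surface normal and the light direction."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n, l = n / np.linalg.norm(n), l / np.linalg.norm(l)
    return albedo * max(0.0, float(np.dot(n, l)))

# Luminance falls smoothly as the patch tilts away from an overhead light:
light = (0.0, 0.0, 1.0)
for slant_deg in (0, 30, 60, 85):
    s = np.radians(slant_deg)
    normal = (np.sin(s), 0.0, np.cos(s))
    print(f"slant {slant_deg:2d} deg -> {lambertian_luminance(normal, light):.3f}")
```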
  • 33. Pictorial Information
    • Shading information
      ▫ Cast shadows
        ▪ Shadows of one object that fall on the surface of another object provide more depth information
        ▪ The distance between an object and the shadow it casts on a surface gives the height of its bottom above the surface
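A sketch of the shadow-offset geometry, assuming a single distant light source at a known elevation angle (an assumption the slide does not state):

```python
import math

def height_above_surface(shadow_offset_m: float,
                         light_elevation_deg: float) -> float:
    """Height of an object's bottom above a ground plane, from the
    horizontal gap between the object and its cast shadow, for a
    directional light at the given elevation."""
    return shadow_offset_m * math.tan(math.radians(light_elevation_deg))

# A ball whose shadow sits 0.4 m away under a 45-degree light floats
# 0.4 m above the table; under a lower light (30 degrees), less:
print(round(height_above_surface(0.4, 45.0), 2))  # 0.4
print(round(height_above_surface(0.4, 30.0), 2))  # 0.23
```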
  • 34. Pictorial Information
    • Aerial perspective
      ▫ Refers to certain systematic differences in the contrast and color of objects that occur when they are viewed from great distances
        ▪ Contrast is reduced by the additional atmosphere through which they are viewed, which contains particles of dust, water, or pollutants that scatter light
        ▪ Mountains that are far away appear bluer because the atmosphere scatters shorter wavelengths of light more than longer wavelengths
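One standard way to model the contrast loss is exponential attenuation with distance (a Koschmieder-style law); the model and the extinction coefficient are illustrative additions, not from the slides:

```python
import math

def apparent_contrast(intrinsic_contrast: float, distance_m: float,
                      extinction_per_m: float = 2e-5) -> float:
    """Contrast of an object seen through scattering atmosphere:
    C(d) = C0 * exp(-beta * d). The extinction coefficient is a
    made-up value for a moderately clear day."""
    return intrinsic_contrast * math.exp(-extinction_per_m * distance_m)

# The same mountain ridge at increasing distances fades toward the sky:
for km in (1, 10, 50, 100):
    print(f"{km:4d} km: contrast {apparent_contrast(1.0, km * 1000):.2f}")
```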
  • 35. Integrating Information Sources
    • Depth cues are often highly correlated, making them easy to integrate
    • What happens when cues are in conflict with one another? Three possibilities:
      1. One source dominates a conflicting source
        ▪ E.g., in the Ames room, perspective information dominates familiar size
      2. A compromise is achieved between the two conflicting sources
        ▪ The visual system makes independent estimates of depth from each source alone, then integrates them according to a mathematical rule
        ▪ Bruno and Cutting (1988) found that information integration was additive; the independent effects of the sources are summed
      3. The two sources interact to arrive at an optimal solution
        ▪ E.g., convergence specifies absolute depth and binocular disparity specifies ratios of distances; together they can provide a complete depth map
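A sketch of possibility 2, additive integration: each cue yields an independent depth estimate and the estimates are combined linearly. The equal weighting and normalization are assumptions of this sketch; Bruno and Cutting’s result concerns additivity, not specific weights:

```python
def integrate_depth_cues(estimates_m, weights=None):
    """Additive cue integration: a weighted linear combination of
    independent per-cue depth estimates. Defaults to equal weights."""
    if weights is None:
        weights = [1.0 / len(estimates_m)] * len(estimates_m)
    return sum(w * e for w, e in zip(weights, estimates_m))

# Three cues give slightly different answers; the combined estimate
# is their weighted sum:
cues = {"relative size": 3.1, "motion parallax": 2.8, "disparity": 3.0}
print(round(integrate_depth_cues(list(cues.values())), 2))  # 2.97
```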
  • 36. Depth Perception and VEs
    • Our visual system is really good at depth perception in real environments, but this is hard to replicate in virtual scenes
      ▫ Ocular depth information (accommodation, convergence) is less useful
      ▫ Stereoscopic depth information may not be available
      ▫ Motion cues may not be faithfully represented
      ▫ Depth cues may be conflicting
      ▫ Etc.!
    • But augmented reality can also improve real-world depth perception