Identify those parts of a scene that are visible from a chosen viewing position.
Visible-surface detection algorithms are broadly classified according to whether
they deal with object definitions directly or with their projected images.
These two approaches are called object-space methods and image-space methods, respectively.
An object-space method compares
objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane.
BACK-FACE DETECTION
A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if
Ax + By + Cz + D < 0
When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position).
• Consider the normal vector N to a polygon surface, which has Cartesian components (A, B, C).
• In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back face if
V · N > 0
If object descriptions have been converted to projection coordinates and our viewing direction is parallel to the viewing z_v axis, then
V = (0, 0, V_z) and
V · N = V_z C
so that we only need to consider the sign of C, the z component of the normal vector N.
We can label any polygon as a back face if its normal vector has a z component value:
C ≤ 0
By examining parameter C for the different planes defining an object, we can immediately identify all the back faces.
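The back-face test above can be sketched as follows. This is a minimal illustration, assuming counterclockwise vertex winding for front-facing polygons and a viewing direction along the negative z axis; the helper names `surface_normal` and `is_back_face` are illustrative, not from any library.

```python
# Sketch of back-face detection: a polygon is a back face when the
# z component C of its plane normal satisfies C <= 0.

def surface_normal(p0, p1, p2):
    """Normal (A, B, C) of the plane through three vertices (cross product)."""
    ux, uy, uz = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
    vx, vy, vz = (p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2])
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def is_back_face(polygon):
    """Label the polygon a back face if C <= 0 (normal points away from viewer)."""
    _, _, c = surface_normal(*polygon[:3])
    return c <= 0

# A triangle facing the viewer (counterclockwise when seen from +z):
front = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
back = list(reversed(front))  # same triangle wound the other way
print(is_back_face(front), is_back_face(back))  # False True
```

Reversing the winding flips the sign of C, which is why the same vertices wound the other way are classified as a back face.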
DEPTH-BUFFER METHOD
The depth-buffer method compares surface depths at each pixel position on the projection plane.
This procedure is also referred to as the z-buffer method.
Each surface of a scene is processed separately, one point at a time across the surface.
The method is usually applied to scenes containing only polygon surfaces.
• For each pixel position (x, y) on the view plane, object depths can be compared by comparing z values.
• Figure 13-4 shows three surfaces at varying distances along the orthographic projection line from position (x, y) in a view plane taken as the x_v y_v plane. Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.
• Two buffer areas are required. A depth buffer is used to store depth values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each position.
• Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity.
• Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position.
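The two buffers and the per-point comparison can be sketched as below. This follows the slides' convention (depth buffer initialized to 0, larger z meaning closer to the view plane); the buffer sizes, surface labels, and the helper name `process_point` are illustrative assumptions.

```python
# Minimal depth-buffer (z-buffer) sketch over a small view plane.
WIDTH, HEIGHT = 4, 3
BACKGROUND = 0  # background intensity

depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]        # all minimum depth
refresh_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def process_point(x, y, z, intensity):
    """Store the intensity only if this surface point is the closest seen so far."""
    if z > depth_buffer[y][x]:
        depth_buffer[y][x] = z
        refresh_buffer[y][x] = intensity

# Surfaces S1 (closer, z = 0.8) and S2 (farther, z = 0.5) overlap at (1, 1):
process_point(1, 1, 0.5, "S2")
process_point(1, 1, 0.8, "S1")   # overwrites S2 because it is closer
process_point(1, 1, 0.5, "S2")   # rejected: farther than the stored depth
print(refresh_buffer[1][1])      # S1
```

Because each surface is processed independently, the order in which surfaces arrive does not affect the final buffer contents.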
For any scan line (Fig. 13-5), adjacent horizontal positions across the line differ by 1, and a vertical y value on an adjacent scan line differs by 1.
If the depth of position (x, y) has been determined to be z, then the depth z' of the next position (x + 1, y) along the scan line is obtained from Eq. 13-4 as
z' = z - A/C
since z = -(Ax + By + D)/C from the plane equation, and the ratio -A/C is constant for each surface.
• On each scan line, we start by calculating the depth on a left edge of the polygon that intersects that scan line (Fig. 13-6).
• Depth values at each successive position across the scan line are then calculated by Eq. 13-6.
• We first determine the y-coordinate extents of each polygon, and process the surface from the topmost scan line to the bottom scan line.
• Starting at a top vertex, we can recursively calculate x positions down a left edge of the polygon as
x' = x - 1/m
where m is the slope of the edge.
Depth values down the edge are then obtained recursively as
z' = z + (A/m + B)/C
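Both incremental relations can be checked against direct evaluation of the plane equation. The plane coefficients and edge slope below are arbitrary illustrative values:

```python
# Check the incremental depth relations against direct evaluation of the
# plane equation A*x + B*y + C*z + D = 0, i.e. z = -(A*x + B*y + D) / C.
A, B, C, D = 2.0, -1.0, 4.0, 3.0
m = 0.5  # slope of the polygon's left edge (illustrative value)

def depth(x, y):
    return -(A * x + B * y + D) / C

z = depth(3.0, 7.0)

# Next pixel along the scan line, (x + 1, y):  z' = z - A/C
z_across = z - A / C
assert abs(z_across - depth(4.0, 7.0)) < 1e-9

# Next scan line down the left edge, (x - 1/m, y - 1):  z' = z + (A/m + B)/C
z_down = z + (A / m + B) / C
assert abs(z_down - depth(3.0 - 1.0 / m, 6.0)) < 1e-9
```

Since A, B, C, and m are constant for a given surface and edge, each new depth costs a single addition once the increments are precomputed.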
A-BUFFER METHOD
A drawback of the depth-buffer method is that it can only find one visible surface at each pixel position.
The depth-buffer method deals only with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent surfaces are to be displayed.
• Each position in the A-buffer has two fields:
• depth field - stores a positive or negative real number
• intensity field - stores surface-intensity information or a pointer value.
• If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area.
• The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage, as illustrated in Fig.
Organization of an A-buffer pixel position: (a) single-surface overlap of the corresponding pixel area
If the depth field is negative, this indicates multiple-surface contributions to the pixel intensity.
The intensity field then stores a pointer to a linked list of surface data, as in Fig.
Organization of an A-buffer pixel position for multiple-surface overlap
Data for each surface in the linked list includes:
• RGB intensity components
• opacity parameter (percent of transparency)
• depth
• percent of area coverage
• surface identifier
• other surface-rendering parameters
• pointer to next surface
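The two-field A-buffer pixel and its linked surface list can be sketched as below. The class and field names are illustrative; the method only specifies a depth field plus an intensity field that holds either surface data or a pointer to a linked list.

```python
# Sketch of one A-buffer pixel position with its linked surface list.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurfaceData:
    rgb: tuple            # RGB intensity components
    opacity: float        # opacity parameter (percent of transparency)
    depth: float
    coverage: float       # percent of area coverage
    surface_id: int       # surface identifier
    next: Optional["SurfaceData"] = None  # pointer to next surface

@dataclass
class APixel:
    depth: float          # positive: single surface; negative: multiple surfaces
    intensity: object     # surface color data, or head of the linked surface list

# Single-surface overlap: positive depth, intensity holds the color directly.
single = APixel(depth=0.7, intensity=(255, 0, 0))

# Multiple-surface overlap: negative depth, intensity points to a linked list.
s2 = SurfaceData((0, 0, 255), 0.5, 0.9, 1.0, surface_id=2)
s1 = SurfaceData((255, 0, 0), 1.0, 0.4, 1.0, surface_id=1, next=s2)
multi = APixel(depth=-1.0, intensity=s1)
print(multi.depth < 0, multi.intensity.next.surface_id)  # True 2
```

The sign of the depth field is the discriminator: a renderer checks it first to decide whether the intensity field is color data or a list pointer.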
SCAN-LINE METHOD
• We now deal with multiple surfaces.
• As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible.
• Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane.
• When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.
The edge table contains coordinate endpoints for each line in the scene, the inverse slope of each line, and pointers into the polygon table to identify the surfaces bounded by each line.
The polygon table contains coefficients of the plane equation for each surface, intensity information for the surfaces, and possibly pointers into the edge table.
• We can set up an active list of edges from information in the edge table. This active list will contain only edges that cross the current scan line, sorted in order of increasing x.
• In addition, we define a flag for each surface that is set on or off to indicate whether a position along a scan line is inside or outside of the surface.
• Scan lines are processed from left to right. At the leftmost boundary of a surface, the surface flag is turned on; and at the rightmost boundary, it is turned off.
Scan lines crossing the projection of two surfaces, S1 and S2, in the view plane. Dashed lines indicate the boundaries of hidden surfaces.
• For scan lines 2 and 3 in Fig. 13-10, the active edge list contains edges AD, EH, BC, and FG.
• Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on. But between edges EH and BC, the flags for both surfaces are on.
• In this interval, depth calculations must be made using the plane coefficients for the two surfaces.
• For this example, the depth of surface S1 is assumed to be less than that of S2, so intensities for surface S1 are loaded into the refresh buffer until boundary BC is encountered. Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.
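The flag bookkeeping in this worked example can be sketched as follows. The edge x positions and the surface depths are illustrative values modeled on the scan line 2 scenario, not taken from the figure:

```python
# Scan-line flag sketch for one scan line crossing surfaces S1 and S2.
# Active edges sorted by increasing x (x positions are illustrative):
active_edges = [(10, "AD", "S1"), (20, "EH", "S2"),
                (30, "BC", "S1"), (40, "FG", "S2")]
depth = {"S1": 0.3, "S2": 0.6}  # smaller = closer to the view plane (assumed)

flags = {"S1": False, "S2": False}
spans = []  # (x_start, x_end, visible surface) intervals

prev_x = None
for x, _edge, surface in active_edges:
    if prev_x is not None:
        inside = [s for s, on in flags.items() if on]
        if inside:
            # Only where more than one flag is on is a depth comparison needed.
            visible = min(inside, key=lambda s: depth[s])
            spans.append((prev_x, x, visible))
    flags[surface] = not flags[surface]  # toggle the flag at each boundary
    prev_x = x

print(spans)  # [(10, 20, 'S1'), (20, 30, 'S1'), (30, 40, 'S2')]
```

The output matches the narrative above: S1 is visible from AD through BC (including the EH-to-BC interval where both flags are on and its depth wins), and S2 from BC to FG.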