Bresenham’s line drawing algorithm
Ans: Bresenham’s Line Generation
The Bresenham algorithm is another incremental scan-conversion algorithm. Its big
advantage is that it uses only integer calculations. It moves across the x axis in
unit intervals and at each step chooses between two different y coordinates.
For example, as shown in the following illustration, from position (2, 3) you
need to choose between (3, 3) and (3, 4). You would like the point that is
closer to the original line.
At sample position x_{k+1}, the vertical separations from the
mathematical line are labelled d_upper and d_lower.
From the above illustration, the y coordinate on the mathematical line
at x_{k+1} is:
y = m(x_k + 1) + b
So, d_lower and d_upper are given as follows:
d_lower = y − y_k
        = m(x_k + 1) + b − y_k
and
d_upper = (y_k + 1) − y
        = y_k + 1 − m(x_k + 1) − b
You can use these to make a simple decision about which pixel is closer to
the mathematical line. This decision is based on the difference between the
two pixel separations:
d_lower − d_upper = 2m(x_k + 1) − 2y_k + 2b − 1
Let us substitute m with dy/dx, where dx and dy are the differences between
the end-points:
dx(d_lower − d_upper) = dx(2(dy/dx)(x_k + 1) − 2y_k + 2b − 1)
                      = 2dy.x_k − 2dx.y_k + 2dy + dx(2b − 1)
                      = 2dy.x_k − 2dx.y_k + C
So, a decision parameter p_k for the kth step along the line is given by:
p_k = dx(d_lower − d_upper)
    = 2dy.x_k − 2dx.y_k + C
The sign of the decision parameter p_k is the same as that of d_lower − d_upper.
If p_k is negative, then choose the lower pixel; otherwise, choose the upper
pixel.
Remember, the coordinate changes occur along the x axis in unit steps, so
you can do everything with integer calculations. At step k + 1, the decision
parameter is given as:
p_{k+1} = 2dy.x_{k+1} − 2dx.y_{k+1} + C
Subtracting p_k from this we get:
p_{k+1} − p_k = 2dy(x_{k+1} − x_k) − 2dx(y_{k+1} − y_k)
But x_{k+1} is the same as x_k + 1, so:
p_{k+1} = p_k + 2dy − 2dx(y_{k+1} − y_k)
where y_{k+1} − y_k is either 0 or 1, depending on the sign of p_k.
The first decision parameter p_0 is evaluated at (x_0, y_0) as:
p_0 = 2dy − dx
Now, keeping in mind all the above points and calculations, here is the
Bresenham algorithm for slope m < 1:
Step 1 − Input the two end-points of the line, storing the left end-point
in (x_0, y_0).
Step 2 − Plot the point (x_0, y_0).
Step 3 − Calculate the constants dx, dy, 2dy, and (2dy − 2dx), and obtain the
first value of the decision parameter as:
p_0 = 2dy − dx
Step 4 − At each x_k along the line, starting at k = 0, perform the
following test:
If p_k < 0, the next point to plot is (x_k + 1, y_k) and
p_{k+1} = p_k + 2dy
Otherwise, the next point to plot is (x_k + 1, y_k + 1) and
p_{k+1} = p_k + 2dy − 2dx
Step 5 − Repeat step 4 (dx − 1) times.
For m > 1, find out whether you need to increment x while incrementing y
each time. After solving, the equation for the decision parameter p_k will be
very similar; just the x and y in the equation are interchanged.
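As a minimal sketch (assuming integer end-points, slope between 0 and 1, and the left end-point given first), the steps above translate directly into code:

```python
def bresenham(x0, y0, x1, y1):
    """Bresenham line for 0 <= slope <= 1, left end-point first (a sketch)."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                 # Step 3: p0 = 2dy - dx
    points = [(x0, y0)]             # Step 2: plot the first point
    x, y = x0, y0
    for _ in range(dx):             # Step 4: one unit step in x per iteration
        x += 1
        if p < 0:                   # lower pixel: y unchanged
            p += 2 * dy
        else:                       # upper pixel: y incremented
            y += 1
            p += 2 * dy - 2 * dx
        points.append((x, y))
    return points
```

For the line from (2, 3) to (6, 6) this yields (2, 3), (3, 4), (4, 5), (5, 5), (6, 6); the loop runs dx times here so that the right end-point is also plotted.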
Mid-point line drawing algorithm
Mid-Point Algorithm
The mid-point algorithm is due to Bresenham and was modified by Pitteway
and Van Aken. Assume that you have already put the point P at coordinate
(x, y) and that the slope of the line satisfies 0 ≤ k ≤ 1, as shown in the
following illustration.
Now you need to decide whether to put the next point at E or N. This is
chosen by identifying whether the intersection point Q is closer to the point
N or to E. If the intersection point Q is closer to N, then N is taken as the
next point; otherwise E.
To determine that, first calculate the mid-point M(x + 1, y + ½). If the
intersection point Q of the line with the vertical line connecting E and N is
below M, then take E as the next point; otherwise take N as the next point.
In order to check this, we need to consider the implicit equation:
F(x, y) = mx + b − y
For positive m at any given x:
If y is on the line, then F(x, y) = 0
If y is above the line, then F(x, y) < 0
If y is below the line, then F(x, y) > 0
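A small sketch of this decision rule (assuming 0 ≤ slope ≤ 1, and ties at the midpoint resolved toward N, matching Bresenham's choice for p_k ≥ 0):

```python
def midpoint_line(x0, y0, x1, y1):
    """Mid-point line sketch using F(x, y) = m*x + b - y for 0 <= slope <= 1."""
    m = (y1 - y0) / (x1 - x0)
    b = y0 - m * x0
    points = [(x0, y0)]
    x, y = x0, y0
    while x < x1:
        x += 1
        # Evaluate F at the mid-point M(x, y + 1/2) between E and N.
        # F > 0 means M is below the line, so Q lies above M: take N.
        if m * x + b - (y + 0.5) >= 0:
            y += 1                  # Q at or above M: take N
        points.append((x, y))       # if Q was below M, y is unchanged: take E
    return points
```

On the line from (2, 3) to (6, 6) this produces the same pixels as the Bresenham derivation above.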
Difference between Flood Fill and Boundary Fill Algorithm
Flood Filling:
In this method a particular seed point is picked, and we start filling the
neighbouring pixels upwards and downwards until the boundary is reached. The
seed-fill method is of two types: boundary fill and flood fill.
Boundary fill algorithm:
In boundary filling a seed point is fixed, and then neighbouring pixels are
checked for a match with the boundary color. Color filling is done until the
boundary is reached. A region may be 4-connected or 8-connected.
Procedure for filling a 4-connected region:
The fill color is specified by parameter f_color and the boundary color by
b_color. The getpixel() function gives the color of the specified pixel and
putpixel() fills the pixel with a particular color.
boundary_fill (x, y, f_color, b_color)
{
if (getpixel(x, y) != b_color && getpixel(x, y) != f_color)
{
putpixel(x, y, f_color);
boundary_fill(x+1, y, f_color, b_color);
boundary_fill(x, y+1, f_color, b_color);
boundary_fill(x-1, y, f_color, b_color);
boundary_fill(x, y-1, f_color, b_color);
}
}
Flood fill algorithm:
There are some cases where the boundary color is different from the fill color.
For situations like these the flood fill algorithm is used. Here the process
starts in a similar way by examining the colors of neighbouring pixels, but
instead of matching them against a boundary color, they are matched against a
specified interior color.
Procedure for filling an 8-connected region:
flood_fill (x, y, old_color, new_color)
{
if (getpixel(x, y) == old_color)
{
putpixel(x, y, new_color);
flood_fill(x+1, y, old_color, new_color);
flood_fill(x-1, y, old_color, new_color);
flood_fill(x, y+1, old_color, new_color);
flood_fill(x, y-1, old_color, new_color);
flood_fill(x+1, y+1, old_color, new_color);
flood_fill(x-1, y-1, old_color, new_color);
flood_fill(x+1, y-1, old_color, new_color);
flood_fill(x-1, y+1, old_color, new_color);
}
}
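The same 8-connected procedure can be sketched on a grid of color values; an explicit stack replaces the recursion (an implementation choice, not part of the procedure above), and bounds checks stand in for getpixel()/putpixel():

```python
def flood_fill(grid, x, y, old_color, new_color):
    """8-connected flood fill on a grid (list of lists) of color values."""
    if old_color == new_color:
        return grid
    rows, cols = len(grid), len(grid[0])
    stack = [(x, y)]                # explicit stack instead of recursion
    while stack:
        i, j = stack.pop()
        if 0 <= i < rows and 0 <= j < cols and grid[i][j] == old_color:
            grid[i][j] = new_color
            # push all 8 neighbours, as in the procedure above
            stack.extend((i + di, j + dj)
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0))
    return grid
```

Starting from the seed, every connected pixel of old_color is replaced by new_color; pixels of any other color act as the region's limit.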
Flood Fill Algorithm vs Boundary Fill Algorithm:
1. Flood fill colors an entire area in an enclosed figure through
interconnected pixels using a single color, whereas in boundary fill the area
is colored with pixels of a chosen color up to the boundary, giving the
technique its name.
2. Flood fill is one in which all connected pixels of a selected color get
replaced by a fill color. Boundary fill is very similar, the difference being
that the program stops when a boundary of a given color is found.
3. A flood fill may use an unpredictable amount of memory to finish because it
isn't known how many sub-fills will be spawned. Boundary fill is usually more
complicated, but it is a linear algorithm and doesn't require recursion.
4. Flood fill is more time consuming; boundary fill is less time consuming.
Sutherland-Hodgman algorithm for polygon clipping
Sutherland-Hodgman Polygon Clipping
The Sutherland-Hodgman algorithm performs clipping of a polygon against each
window edge in turn. It accepts an ordered sequence of vertices v1, v2, v3, ..., vn and
puts out a set of vertices defining the clipped polygon.
This figure represents a polygon (the large, solid, upward
pointing arrow) before clipping has occurred.
The following figures show how this algorithm works at each edge, clipping the
polygon.
a. Clipping against the left side of the clip window.
b. Clipping against the top side of the clip window.
c. Clipping against the right side of the clip window.
d. Clipping against the bottom side of the clip window.
Four Types of Edges
As the algorithm goes around the edges of the window, clipping the polygon, it
encounters four types of edges. All four edge types are illustrated by the polygon in
the following figure. For each edge type, zero, one, or two vertices are added to the
output list of vertices that define the clipped polygon.
The four types of edges are:
1. Edges that are totally inside the clip window. - add the second inside vertex
point
2. Edges that are leaving the clip window. - add the intersection point as a vertex
3. Edges that are entirely outside the clip window. - add nothing to the vertex
output list
4. Edges that are entering the clip window. - save the intersection and inside
points as vertices
How To Calculate Intersections
Assume that we're clipping a polygon's edge with vertices at (x1, y1) and (x2, y2)
against a clip window with corners at (xmin, ymin) and (xmax, ymax).
The location (IX, IY) of the intersection of the edge with the left side of the window
is:
i. IX = xmin
ii. IY = slope*(xmin-x1) + y1, where the slope = (y2-y1)/(x2-x1)
The location of the intersection of the edge with the right side of the window is:
i. IX = xmax
ii. IY = slope*(xmax-x1) + y1, where the slope = (y2-y1)/(x2-x1)
The intersection of the polygon's edge with the top side of the window is:
i. IX = x1 + (ymax - y1) / slope
ii. IY = ymax
Finally, the intersection of the edge with the bottom side of the window is:
i. IX = x1 + (ymin - y1) / slope
ii. IY = ymin
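One pass of the algorithm, clipping against the left edge x = xmin with the intersection formula above, might be sketched like this (the other three edges follow the same pattern with their own inside tests and intersection formulas):

```python
def clip_left(polygon, xmin):
    """One Sutherland-Hodgman pass: clip a vertex list against x = xmin."""
    output = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        inside1, inside2 = x1 >= xmin, x2 >= xmin
        if inside1 and inside2:            # edge totally inside: add 2nd vertex
            output.append((x2, y2))
        elif inside1 and not inside2:      # leaving: add intersection only
            slope = (y2 - y1) / (x2 - x1)
            output.append((xmin, slope * (xmin - x1) + y1))
        elif inside2 and not inside1:      # entering: intersection, then vertex
            slope = (y2 - y1) / (x2 - x1)
            output.append((xmin, slope * (xmin - x1) + y1))
            output.append((x2, y2))
        # both outside: add nothing
    return output
```

Each of the four edge types from the list above maps to one branch of the conditional.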
What do you mean by shearing? Two classifications of shear transformation
Shearing:
A transformation that slants the shape of an object is called a shear
transformation. There are two shear transformations: X-shear and Y-shear. One
shifts X coordinate values and the other shifts Y coordinate values. However,
in both cases only one coordinate changes its values while the other preserves
them. Shearing is also termed skewing.
X-Shear:
The X-shear preserves the Y coordinate and changes the X coordinates,
which causes vertical lines to tilt right or left, as shown in the figure
below. It maps (x, y) to (x + shx·y, y).
The transformation matrix for X-shear can be represented as:
Y-Shear:
The Y-shear preserves the X coordinates and changes the Y coordinates, which
causes horizontal lines to transform into lines that slope up or down, as
shown in the following figure. It maps (x, y) to (x, y + shy·x).
The Y-shear can be represented in matrix form as:
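The shear matrices appear as figures in the original notes; as a sketch, the standard mappings x' = x + shx·y (X-shear) and y' = y + shy·x (Y-shear) can be applied directly to a list of points:

```python
def x_shear(points, shx):
    """X-shear: x' = x + shx*y, y' = y (vertical lines tilt, y is preserved)."""
    return [(x + shx * y, y) for x, y in points]

def y_shear(points, shy):
    """Y-shear: x' = x, y' = y + shy*x (horizontal lines slope, x is preserved)."""
    return [(x, y + shy * x) for x, y in points]
```

For the unit-square corners (0,0), (1,0), (1,1), (0,1), an X-shear with shx = 2 slides the top edge sideways to (3,1) and (2,1) while the base stays fixed, exactly the tilting described above.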
Difference between window and viewport
Window:
- Defines a rectangular area in world coordinates.
- A window can be defined with a GWINDOW statement.
- The window can be defined to be larger than, the same size as, or smaller
than the actual range of data values, depending on whether we want to show all
of the data or only part of the data.
Viewport:
- Defines, in normalized coordinates, a rectangular area on the display device
where the image of the data appears.
- A viewport is defined with the GPORT command.
- We can have our graph take up the entire display device or show it in only a
portion, say the upper-right part.
Describe how a 3D object is presented on the screen using perspective projection
Prove successive scaling is multiplicative. Show under what conditions two scalings are commutative.
Any sequence of transformations can be represented as a composite
transformation matrix by calculating the product of the individual transformation
matrices. Forming products by transformation matrices is usually referred to as a
concatenation, or composition, of matrices.
Translations
Two successive translations of an object can be carried out by first concatenating
the translation matrices, then applying the composite matrix to the coordinate
points. Specifying the two successive translation distances as (Tx1, Ty1) and
(Tx2, Ty2), we calculate the composite matrix as
T(Tx1, Ty1) . T(Tx2, Ty2) = T(Tx1 + Tx2, Ty1 + Ty2)
which demonstrates that two successive translations are additive.
Scalings
Concatenating transformation matrices for two successive scaling operations produces
the following composite scaling matrix:
S(Sx1, Sy1) . S(Sx2, Sy2) = S(Sx1.Sx2, Sy1.Sy2)
The resulting matrix in this case indicates that successive scaling operations are
multiplicative. That is, if we were to triple the size of an object twice in succession, the
final size would be nine times that of the original.
Rotations
The composite matrix for two successive rotations is calculated as
R(θ1) . R(θ2) = R(θ1 + θ2)
As is the case with translations, successive rotations are additive.
Any two scaling transformations are commutative, that is, S1 . S2 = S2 . S1, since
S(Sx1, Sy1) . S(Sx2, Sy2) = S(Sx1.Sx2, Sy1.Sy2) = S(Sx2, Sy2) . S(Sx1, Sy1):
the product of the scale factors does not depend on the order in which the
scalings are applied.
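Both claims can be checked numerically. The sketch below builds 3 by 3 homogeneous scaling matrices as plain nested lists (no library assumed) and compares the products:

```python
def scale(sx, sy):
    """Homogeneous 2D scaling matrix S(sx, sy)."""
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Tripling twice gives matmul(scale(3, 3), scale(3, 3)) == scale(9, 9), i.e. nine times the original size, and matmul(scale(2, 3), scale(4, 5)) equals matmul(scale(4, 5), scale(2, 3)), so successive scalings are multiplicative and any two scalings commute.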
How is the homogeneous coordinate system related to the transformation matrix? What are its advantages?
HOMOGENEOUS COORDINATES
We have seen that basic transformations can be expressed in matrix form. But
many graphics applications involve sequences of geometric transformations.
Hence we need a general form of matrix to represent such transformations. This
can be expressed as:
P' = P . T1 + T2
where P and P' represent the row vectors,
T1 is a 2 by 2 array containing multiplicative factors, and
T2 is a 2-element row matrix containing translation terms.
We can combine multiplicative and translational terms for 2D geometric
transformations into a single matrix representation by expanding the 2 by 2
matrix representations to 3 by 3 matrices. This allows us to express all
transformation equations as matrix multiplications, provided that we also
expand the matrix representations for coordinate positions. To express any 2D
transformation as a matrix multiplication, we represent each Cartesian
coordinate position (x, y) with the homogeneous coordinate triple (xh, yh, h),
such that
x = xh / h, y = yh / h
Thus, a general homogeneous coordinate representation can also be written as
(h.x, h.y, h). For 2D geometric transformations, we can choose the homogeneous
parameter h to be any non-zero value. Thus, there is an infinite number of
equivalent homogeneous representations for each coordinate point (x, y). A
convenient choice is simply h = 1. Each 2D position is then represented with
homogeneous coordinates (x, y, 1). Other values for parameter h are needed,
for example, in matrix formulations of 3D viewing transformations.
Expressing positions in homogeneous coordinates allows us to represent all
geometric transformation equations as matrix multiplications. Coordinates are
represented with three-element row vectors, and transformation operations are
written as 3 by 3 matrices.
For translation, we have
[x' y' 1] = [x y 1] . | 1  0  0 |
                      | 0  1  0 |
                      | tx ty 1 |
which can be written in abbreviated form as
P' = P . T(tx, ty)
Capturing composite transformations conveniently
On the basis of the matrix product of the individual transformations we can
set up a matrix for any sequence of transformations, known as a composite
transformation matrix. For row-matrix representation we form composite
transformations by multiplying matrices in order from left to right, whereas
in column-matrix representation we form composite transformations by
multiplying matrices in order from right to left.
Nonlinear transformations (3D perspective transformations)
Representing points at infinity
Homogeneous coordinates can be used to represent a point at infinity. For
example, the triple (x, y, 0) represents the point at infinity in the
direction of the vector (x, y).
This is often needed when we want to represent a point at infinity in a
certain direction. For instance, for finding the vanishing point in
perspective projections we can transform the point at infinity in the given
direction.
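A sketch in the row-vector convention used above: a point [x, y, 1] is multiplied on the right by a 3 by 3 translation matrix, while a point at infinity [x, y, 0] passes through the same translation unchanged:

```python
def translation(tx, ty):
    """Row-vector convention: P' = P . T, translation terms in the last row."""
    return [[1, 0, 0], [0, 1, 0], [tx, ty, 1]]

def transform(p, t):
    """Multiply homogeneous row vector p = [xh, yh, h] by 3x3 matrix t."""
    return [sum(p[k] * t[k][j] for k in range(3)) for j in range(3)]
```

transform([2, 3, 1], translation(4, 5)) gives [6, 8, 1], an ordinary translated point, while transform([1, 2, 0], translation(4, 5)) stays [1, 2, 0]: translating a point at infinity in direction (1, 2) leaves it fixed, which is what makes it useful for vanishing-point calculations.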
Projection/3D Projection
In the 2D system, we use only two coordinates X and Y but in 3D, an extra
coordinate Z is added. 3D graphics techniques and their application are
fundamental to the entertainment, games, and computer-aided design
industries. It is a continuing area of research in scientific visualization.
Furthermore, 3D graphics components are now a part of almost every
personal computer and, although traditionally intended for graphics-
intensive software such as games, they are increasingly being used by other
applications.
Parallel Projection
In parallel projection, the z-coordinate is discarded and parallel lines from
each vertex on the object are extended until they intersect the view plane. We
specify a direction of projection instead of a center of projection.
In parallel projection, the distance from the center of projection to the
projection plane is infinite. In this type of projection, we connect the
projected vertices by line segments which correspond to connections on the
original object.
Parallel projections are less realistic, but they are good for exact
measurements. In this type of projection, parallel lines remain parallel but
angles are not preserved. Various types of parallel projections are shown in
the following hierarchy.
Orthographic Projection
In orthographic projection the direction of projection is normal to the
projection plane. There are three types of orthographic projections −
Front Projection
Top Projection
Side Projection
Oblique Projection
In oblique projection, the direction of projection is not normal to the
projection plane. In oblique projection, we can view the object better than in
orthographic projection.
There are two types of oblique projections − Cavalier and Cabinet. The
Cavalier projection makes 45° angle with the projection plane. The
projection of a line perpendicular to the view plane has the same length as
the line itself in Cavalier projection. In a cavalier projection, the
foreshortening factors for all three principal directions are equal.
The Cabinet projection makes 63.4° angle with the projection plane. In
Cabinet projection, lines perpendicular to the viewing surface are projected
at ½ their actual length. Both the projections are shown in the following
figure −
Isometric Projections
Orthographic projections that show more than one side of an object are
called axonometric orthographic projections. The most common
axonometric projection is an isometric projection, where the projection
plane intersects each coordinate axis in the model coordinate system at an
equal distance. In this projection parallelism of lines is preserved but
angles are not. The following figure shows an isometric projection −
Perspective Projection
In perspective projection, the distance from the center of projection to the
projection plane is finite, and the size of the object varies inversely with
distance, which looks more realistic.
The distance and angles are not preserved and parallel lines do not remain
parallel. Instead, they all converge at a single point called center of
projection or projection reference point. There are 3 types of
perspective projections which are shown in the following chart.
One point perspective projection is simple to draw.
Two point perspective projection gives better impression of depth.
Three point perspective projection is most difficult to draw.
The following figure shows all the three types of perspective projection −
Translation
In 3D translation, we transfer the Z coordinate along with the X and Y
coordinates. The process for translation in 3D is similar to 2D translation. A
translation moves an object into a different position on the screen.
The following figure shows the effect of translation −
A point can be translated in 3D by adding the translation
coordinates (tx, ty, tz) to the original coordinates (X, Y, Z) to get the
new coordinates (X’, Y’, Z’).
z-buffer algorithm
The z-Buffer Algorithm
The z-buffer algorithm is one of the most commonly used routines. It is simple, easy
to implement, and is often found in hardware.
The idea behind it is uncomplicated: Assign a z-value to each polygon and then
display the one (pixel by pixel) that has the smallest value.
There are some advantages and disadvantages to this:
Advantages:
Simple to use
Can be implemented easily in object or image space
Can be executed quickly, even with many polygons
Disadvantages:
Takes up a lot of memory
Can't do transparent surfaces without additional code
For example:
Consider these two polygons (right: edge-on, left: head-on).
The computer would start (arbitrarily) with Polygon 1 and put its depth value
into the buffer. It would do the same for the next polygon, P2.
It would then check each overlapping pixel to see which one is closer to the
viewer, and display the appropriate color.
This is a simplistic example, but the basic ideas are valid for polygons in
any orientation and permutation (this algorithm will properly display polygons
piercing one another, and polygons with conflicting depths).
Z-Buffer or Depth-Buffer method
When viewing a picture containing non-transparent objects and surfaces, it is not
possible to see objects that lie behind the objects closer to the eye. To get a
realistic screen image, removal of these hidden surfaces is a must. The identification
and removal of these surfaces is called the hidden-surface problem.
The Z-buffer, also known as the depth-buffer method, is one of the most commonly
used methods for hidden-surface detection. It is an image-space method. Image-space
methods are based on the pixels to be drawn on the 2D screen. For these methods, the
running time complexity is the number of pixels times the number of objects, and the
space complexity is two times the number of pixels, because two arrays of pixels are
required: one for the frame buffer and the other for the depth buffer.
The Z-buffer method compares surface depths at each pixel position on the projection
plane. Normally the z-axis is represented as the depth. The algorithm for the Z-buffer
method is given below:
Algorithm:
First of all, initialize the depth of each pixel:
d(i, j) = infinite (maximum depth)
Initialize the color value of each pixel:
c(i, j) = background color
For each polygon, do the following steps:
for (each pixel in polygon's projection)
{
find the depth z of the polygon
at (x, y) corresponding to pixel (i, j)
if (z < d(i, j))
{
d(i, j) = z;
c(i, j) = color;
}
}
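The loop above can be sketched directly. Here each polygon is assumed to already be rasterized into (i, j, z, color) fragments (a hypothetical flattened form, so the plane-depth computation is skipped):

```python
def z_buffer(width, height, polygons, background="bg"):
    """Depth-buffer sketch: keep, per pixel, the color of the smallest z."""
    d = [[float("inf")] * width for _ in range(height)]   # d(i, j) = infinite
    c = [[background] * width for _ in range(height)]     # c(i, j) = background
    for fragments in polygons:
        for i, j, z, color in fragments:
            if z < d[i][j]:          # closer surface: overwrite depth and color
                d[i][j] = z
                c[i][j] = color
    return c
```

Where two fragments land on the same pixel, the one with the smaller z survives, regardless of the order in which the polygons are processed.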
Let’s consider an example to understand the algorithm in a better way. Assume the
given polygon is as below:
To start, assume that the depth of each pixel is infinite.
As the z value, i.e., the depth value, at every place in the given polygon is 3, on
applying the algorithm the result is:
Now, let’s change the z values. In the figure given below, the z values go from 0 to 3.
To start, the depth of each pixel will be infinite:
Now, the z values generated at the pixels will be different, as shown below:
Therefore, in the Z-buffer method, each surface is processed separately, one position
at a time across the surface. After that, the depth values, i.e., the z values for a
pixel, are compared, and the closest (smallest z) surface determines the color to be
displayed in the frame buffer. The z values, i.e., the depth values, are usually
normalized to the range [0, 1]. When z = 0, it is known as the back clipping plane,
and when z = 1, it is called the front clipping plane.
In this method, 2 buffers are used :
1. Frame buffer
2. Depth buffer
Calculation of depth :
As we know that the equation of the plane is :
ax + by + cz + d = 0, this implies
z = -(ax + by + d)/c, c!=0
Calculation of each depth could be very expensive, but the computation can be reduced to a single add
per pixel by using an increment method as shown in figure below :
Let’s denote the depth at point A as Z and at point B as Z’. Therefore :
AX + BY + CZ + D = 0 implies
Z = (-AX - BY - D)/C ------------(1)
Similarly, Z' = (-A(X + 1) - BY -D)/C ----------(2)
Hence from (1) and (2), we conclude :
Z' = Z - A/C ------------(3)
Hence, calculation of depth can be done by recording the plane equation of each polygon in the
(normalized) viewing coordinate system and then using the incremental method to find the depth Z.
So, to summarize, it can be said that this approach compares surface depths at each pixel position on the
projection plane. Object depth is usually measured from the view plane along the z-axis of a viewing
system.
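The incremental relation Z' = Z − A/C from equations (1)-(3) can be checked numerically; the plane coefficients below are made up purely for the check:

```python
def depth(a, b, c, d, x, y):
    """Depth from the plane equation ax + by + cz + d = 0, c != 0."""
    return (-a * x - b * y - d) / c

# Made-up example plane: 2x - 3y + 4z + 5 = 0.
A, B, C, D = 2.0, -3.0, 4.0, 5.0
z = depth(A, B, C, D, 1.0, 2.0)   # full evaluation at pixel (x, y)
z_incremental = z - A / C         # one add per pixel: Z' = Z - A/C
```

z_incremental agrees with depth(A, B, C, D, 2.0, 2.0) evaluated from scratch at the neighbouring pixel, which is why the full plane evaluation is needed only once per scan line.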
Example:
Let S1, S2, S3 be the surfaces. The surface closest to the projection plane is called
the visible surface. The computer would start (arbitrarily) with surface S1 and put
its depth value into the buffer. It would do the same for the next surface. It would
then check each overlapping pixel to see which one is closer to the viewer, and
display the appropriate color. As at view-plane position (x, y) surface S1 has the
smallest depth from the view plane, it is visible at that position.
Points to remember :
1) Z buffer method does not require pre-sorting of polygons.
2) This method can be executed quickly even with many polygons.
3) This can be implemented in hardware to overcome the speed problem.
4) No object to object comparison is required.
5) This method can be applied to non-polygonal objects.
6) Hardware implementations of this algorithm are available in some graphics workstations.
7) The method is simple to use and does not require additional data structure.
8) The z-value of a polygon can be calculated incrementally.
9) Cannot be applied to transparent surfaces, i.e., it only deals with opaque surfaces.
10) If only a few objects in the scene are to be rendered, then this method is less
attractive because of the additional buffer and the overhead involved in updating it.
11) Wastage of time may occur because of the drawing of hidden objects.
Painter’s Algorithm
Visible Surface Determination: Painter's Algorithm
The painter's algorithm is based on depth sorting and is a combined object-
and image-space algorithm. It is as follows:
1. Sort all polygons according to z value (object space); Simplest to use
maximum z value
2. Draw polygons from back (maximum z) to front (minimum z)
This can be used for wireframe drawings as well by the following:
1. Draw solid polygons using Polyscan (in the background color) followed
by Polyline (polygon color).
2. Polyscan erases Polygons behind it then Polyline draws new Polygon.
Problems with the simple Painter's algorithm
Look at cases where it doesn't work correctly. S
has a greater depth than S' and so will be drawn
first. But S' should be drawn first, since it is
obscured by S. We must somehow reorder S and S'.
We will perform a series of tests to determine, if two polygons need to be
reordered. If the polygons fail a test, then the next test must be performed. If
the polygons fail all tests, then they are reordered. The initial tests are
computationally cheap, but the later tests are more expensive.
So look at the revised algorithm to test for possible reordering.
We could store Zmax and Zmin for each polygon:
- sort on Zmax
- start with the polygon with the greatest depth (S)
- compare S with all other polygons (P) to see if there is any depth
overlap (Test 0):
If S.Zmin <= P.Zmax, then we have depth overlap (as in the figures above and below).
If we have depth overlap (failed Test 0), we may need to reorder the polygons.
Next (Test 1), check to see if the polygons overlap in the xy plane (use bounding
rectangles).
Do the above tests for x and y.
If we have case 1 or 2 then we are done (passed Test 1), but for case 3 we need
further testing (failed Test 1).
Next, test (Test 2) to see if polygon S is "outside" of polygon P (relative to the
view plane).
Remember: a point (x, y, z) is "outside" of a plane if we put that point into the
plane equation and get:
Ax + By + Cz + D > 0
So to test for S outside of P, put all vertices of S into the plane equation for P
and check that all vertices give a result that is > 0.
i.e. Ax' + By' + Cz' + D > 0, where x', y', z' are S's vertices and
A, B, C, D are from the plane equation of P (choose the normal pointing away from
the view plane, since we define "outside" with respect to the view plane).
If the test of S "outside" of P fails, then test to see if P is "inside" of S (again
with respect to the view plane) (Test 3).
Compute the plane equation of S and put in all vertices of P; if all vertices of P
are inside of S, then P is inside.
Inside test: Ax' + By' + Cz' + D < 0, where x', y', z' are the coordinates of P's
vertices.
So for the above case:
Then we do the 4th test and check for overlap of the actual projections in the xy
plane, since the bounding rectangles may overlap while the actual polygons do not.
For example: look at the projection of two polygons in the xy plane.
There are then two possible cases.
All 4 tests have failed, therefore interchange P and S and scan convert P
before S. But before we scan convert P we must test P against all other
polygons. Look at an example of multiple interchanges:
Test S1 against S2; it fails all tests, so reorder: S2, S1, S3
Test S2 against S3; it fails all tests, so reorder: S3, S2, S1
Possible problem: polygons that alternately obscure one another. These three
polygons will continuously reorder.
One solution is to flag a reordered polygon and subdivide it into several
smaller polygons.
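The simple depth sort at the heart of the algorithm (step 1 above) can be sketched as follows; the reordering tests 0-4 are deliberately omitted, so this handles only the easy cases:

```python
def painters_order(polygons):
    """Sort polygons (lists of (x, y, z) vertices) back to front.

    Back = greatest maximum z, drawn first; front = smallest, drawn last.
    """
    return sorted(polygons,
                  key=lambda poly: max(z for (x, y, z) in poly),
                  reverse=True)
```

Drawing the returned list in order paints far polygons first, so nearer polygons overwrite them where they overlap; the reordering tests are needed precisely where this simple z-max sort gives the wrong order.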
Cubic B-Spline
3D rotation/transformation
What do you mean by a B-Spline curve and its properties?