UNIT III - VISUAL REALISM
❖ Hidden Line removal algorithms
❖ Hidden Surface removal algorithms
❖ Hidden Solid removal algorithms
❖ Shading
❖ Colouring
❖ Computer animation.
EASE OF VISUALIZATION
ORTHOGRAPHIC OBLIQUE ISOMETRIC PERSPECTIVE
VISUAL REALISM HAS TWO COMPONENTS:
Geometric realism - The virtual object looks like the real object.
Illumination realism - Refers to the fidelity of the lighting model.
(more realistic and visually appealing image can be produced.)
VISUALISATION IS OF TWO TYPES:
❑ Visualisation in geometric modelling –
i.e., geometric models of objects are displayed.
❑ Visualisation in scientific computing –
i.e., results related to science and engineering are displayed.
INTRODUCTION – VISUAL REALISM
IN GEOMETRIC MODELING:
An effective and less expensive way of reviewing various
design alternatives.
For the design of complex surfaces, such as those in automobile
bodies and aircraft frames.
IN SCIENTIFIC COMPUTING:
For displaying results of finite element analysis, heat-transfer
analysis, computational fluid dynamics and structural dynamics and
vibration.
In medical field for joint ball replacement operations.
❖ The performance of any CAD/CAM system is evaluated on the
basis of its ability to display realistic visual images.
❖ Visualization can be defined as a technique for creating images,
diagrams or animations to communicate ideas.
❖ The visual realism concentrates basically on the visual
appearance of objects.
❖ Various techniques of C.G are applied on the model to make it
appear as realistic as possible.
INTRODUCTION – VISUAL REALISM
Projection and shading are two most common methods for
visualizing geometric models.
Two other important and popular visualization methods are
animation and simulation.
INTRODUCTION – VISUAL REALISM
PARALLEL PROJECTION – PERSPECTIVE PROJECTION
MAJOR PROBLEM IN VISUALIZATION
An object consists of a number of vertices, edges and surfaces which are
represented realistically in 3D modelling.
The major problem in visualization of object is representing the depth
of 3D object into 2D screens.
Projecting a 3D object onto a 2D screen displays the complex lines
and curves, which may not give a clear picture.
The first step towards visual realism is to eliminate these
ambiguities, which can be achieved using hidden line removal (HLR),
hidden surface removal (HSR) and hidden solid removal approaches.
MODEL CLEAN-UP
Model clean-up consists of three processes in sequence:
(1) Generating orthographic views of the model,
(2) Eliminating hidden lines in each view by applying visual realism principle,
(3) Changing the necessary hidden lines as dashed line.
Advantage: User has control over which entities should be removed and
which should be dashed.
Disadvantage: It is a tedious, time-consuming and error-prone process.
Depth information may be lost when hidden lines are eliminated completely.
Manual model clean-up is commonly applicable to wireframe models.
A wireframe display presents all parts of the object to the viewer simply
as a collection of lines.
For real objects,
❑ Internal details,
❑ Back faces,
❑ Shadow will be cast,
❑ Surfaces will take on different intensities,
❑ According to local lighting conditions.
OBJECT SPACE (OBJECT-PRECISION) AND IMAGE-SPACE METHODS
There are three approaches for removing hidden line & surface
problems −
1. Object-space method.
2. Image-space method.
3. Hybrid.
THREE APPROACHES FOR VISIBLE LINE AND
SURFACE DETERMINATION:
1. OBJECT SPACE METHOD: Determines which parts of an object are
visible by using spatial and geometrical relationships. It operates with
object-database precision. The object-space method is implemented
in the physical coordinate system. Hidden Line Removal Algorithms
2. IMAGE SPACE METHOD: Determines what is visible at each image pixel.
It operates with image-resolution precision (well suited to raster
displays). The image-space method is implemented in the screen coordinate
system. Hidden Surface Removal Algorithms
3. HYBRID: combines both types of object space and image space.
OBJECT SPACE METHOD vs IMAGE SPACE METHOD
❑ In object-space method, the object is described in the physical coordinate system. It
compares the objects and parts to each other within the scene definition to determine
which surfaces are visible.
❑ Object-space methods are generally used in hidden line removal algorithms.
❑ Image-space method is implemented in the screen coordinate system in which
the objects are viewed.
❑ In an image-space algorithm, the visibility is decided point by point at each pixel position
on the view plane. Hence, zooming of the object does not degrade its quality of display.
❑ Most of the hidden line and hidden surface algorithms use the image-space method.
HIDDEN LINE REMOVAL ALGORITHM
HIDDEN LINE ELIMINATION PROCESS
Sorting is an operation which arranges a given set of records
according to the selected criterion.
VISIBILITY TECHNIQUE:
❑ Normally checks for overlapping of pairs of polygons. If overlapping
occurs, depth comparisons are used to determine visibility.
HIDDEN LINE ELIMINATION
HIDDEN LINE REMOVAL:
Removing hidden line and surfaces greatly improve the visualization of objects by
displaying clear and more realistic images.
Hidden line elimination is stated as: "For a given three-dimensional scene,
a given viewing point and a given direction, eliminate from an appropriate
two-dimensional projection the edges and faces which the observer cannot see."
Various hidden line and hidden surface removal algorithms may be classified into:
1. object-space (object-precision) method(the object is described in the physical
coordinate system)
2. Image-space method (the visibility is decided point by point at each pixel
position on the view plane) – raster algorithms & vector algorithms.
3. Hybrid methods (combination of both object-space and image-space methods).
VISIBILITY
TECHNIQUES
Therefore, the following visibility techniques are developed for improving
the efficiency of algorithm:
1. Minimax test
2. Surface test
3. Edge intersection
4. Segment comparisons
MINIMAX (BOUNDING BOX) TEST
The minimax test checks whether two polygons overlap or not.
Each polygon is enclosed in a box by finding its maximum and
minimum x and y coordinates; hence the name minimax test.
Then these boxes are compared with each other to identify the
intersection for any two boxes.
If there is no intersection of two boxes as shown in Figure, their
surrounding polygons do not overlap and hence, no elements are
removed.
If two boxes intersect, the polygons may or may not overlap as
shown in Figure.
In a solid object, there are surfaces which
are facing the viewer (front faces) and there are
surfaces which are opposite to the viewer (back
faces).
These back faces contribute to
approximately half of the total number of surfaces.
A back-face test is used to determine the
location of a surface with respect to other surfaces.
This test can provide an efficient way of
implementing the depth comparison to remove the
faces which are not visible in a specific view port.
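The usual back-face criterion can be sketched with a dot product: a face is a back face when its outward normal has a non-negative component along the viewing direction. This is a minimal sketch, assuming outward-pointing normals and a fixed viewing direction vector (both are my assumptions, not stated on the slide).

```python
def is_back_face(normal, view_dir):
    """A face is a back face when its outward normal points away from
    the viewer, i.e. dot(normal, view_dir) >= 0 where view_dir is the
    direction the camera looks along."""
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    return nx * vx + ny * vy + nz * vz >= 0
```

For a camera looking along (0, 0, −1), a face with normal (0, 0, 1) faces the viewer and is kept, while one with normal (0, 0, −1) is culled.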
EDGE INTERSECTION
❑ In this technique, hidden line algorithms initially calculate
the edge intersections in two-dimensions.
❑ These intersections are used to determine the edge
visibility. Figure shows the concept of edge interaction
technique.
❑ The two edges intersect at a point where y2 − y1 = 0. This produces
the point of intersection, which is further used for segmentation and
dealt with using the visibility concepts discussed earlier.
SEGMENT COMPARISON:
❖ This visibility technique is used to solve hidden surface
problems in image space. Hence, the display screen is
divided into number of small segments.
❖ In image display, scan lines are arranged on the display screen
from top to bottom and left to right.
❖ This technique tends to solve the problem piecewise
and not as a total image. The scan line is divided into
spans.
❖ To compute the depth, plane equations are used.
HIDDEN LINE REMOVAL
The appearance of the object is greatly complicated by the visibility
of hidden details.
Therefore, it is necessary to remove hidden details such as edges and
surfaces.
One of the most challenging problems considered in
computer graphics is the determination of hidden edges and surfaces.
HIDDEN LINE REMOVAL ALGORITHMS
(i) Area-oriented algorithms.
(ii) Overlay algorithm
(iii) Robert's algorithm
HIDDEN LINE ELIMINATION ALGORITHMS
AREA-ORIENTED ALGORITHM
❖ This algorithm is based on the subdivision of given
data set in a stepwise fashion until all visible areas
in the scene are determined and displayed.
❖ In this data structure, all the adjacency relations of
each edge are described by explicit relation.
❖ Since the edge is formed by two faces, it is a
component in two loops, one for each face.
❖ No penetration of faces is allowed in both area
oriented as well as depth algorithms.
1. The first step of this algorithm is to identify silhouette polygons.
2. Then, quantitative hiding values are assigned to each edge of the
silhouette polygons: an edge is visible if this value is 0 and
invisible if this value is 1.
3. The next step is to find the visible silhouette segments, which can
be determined from the quantitative hiding values.
4. Finally, each visible silhouette segment is intersected with the
partially visible faces to determine whether it partially or fully
hides the non-silhouette edges in those faces.
AREA-ORIENTED ALGORITHM – THE ABOVE STEPS/PROCEDURES ARE CARRIED OUT.
OVERLAY ALGORITHM
❖ In the overlay method, the u-v grid is used to create a grid
surface which consists of regions having straight edges.
❖ The curves in each region of the u-v grid are approximated as line
segments. This algorithm is called the overlay algorithm.
❖ In this algorithm, the first step is to calculate the u-v grid using the
surface equation.
❖ Then the grid surface with linear edges is created.
❖ The visibility of the grid surface is determined using various
criteria discussed earlier.
ROBERT'S ALGORITHM
❖ The hidden line algorithms described earlier are only suitable for
polyhedral objects which contain flat faces.
❖ The earliest visible-line algorithm was developed by Roberts. The
primary requirement of this algorithm is that each edge is a part of
the face of a convex polyhedron.
❖ In the first phase of this algorithm, all edges shared by a pair of
the polyhedron's back-facing polygons are removed using a back-face
culling technique.
STEPS FOR THE ALGORITHM:
1. Treat each volume separately and eliminate self-hidden (back-faces) planes and
self hidden lines.
2. Treat each edge (or line segment) separately and eliminate those which
are entirely hidden by one or more other volumes.
3. It identifies those lines which are entirely visible.
4. For each of the remaining edge, junction lines are constructed.
5. New edges are constructed if there is inter-penetration of two volumes.
(Figure: no lines removed; hidden lines removed; hidden surfaces removed)
HIDDEN SURFACE REMOVAL ALGORITHMS
❖ Hidden line removal is the process of eliminating lines of parts of
objects which are covered by others. It is extensively used for
objects represented as wireframe skeletons and it is a bit trickier.
❖ Hidden surface removal does the same job for the objects
represented as solid models. The elimination of parts of solid
objects that are covered by others is called hidden surface removal.
Hidden Line Removal – object-space algorithms
Hidden Surface Removal – image-space algorithms
The following are the image-space algorithms widely used,
(i) Depth-buffer algorithm or z-buffer algorithm
(ii) Area-coherence algorithm or Warnock's algorithm
(iii) Scan-line algorithm or Watkin's algorithm
(iv) Depth or Priority algorithm
VISIBLE SURFACE DETERMINATION
• Area-Subdivision Algorithms
• z-buffer Algorithm
• List Priority Algorithms
• BSP (Binary Space Partitioning Tree)
• Scan-line Algorithms
1. DEPTH-BUFFER ALGORITHM OR Z-BUFFER ALGORITHM
❑ The easiest way to achieve the hidden surface removal.
❑ This algorithm compares surface depths at each pixel position on the
projection plane. Since the object depth is usually measured from the
view plane along the z-axis of a viewing system, this algorithm is also
called z-buffer algorithm.
❑ Hence, two buffers are required for each pixel.
a) Depth buffer or z buffer which stores the smallest z value for
each pixel
b) Refresh buffer or frame buffer which stores the intensity value
for each position.
Let us consider two surfaces P and Q with varying distances along the position
(x, y) in a view plane as shown in Figure
The steps of a depth-buffer algorithm
(1) Initially, each pixel of the z-buffer is set to the maximum
depth value (the depth of the back clipping plane).
(2) The image buffer is set to the background colour.
(3) Surfaces are rendered one at a time.
(4) For the first surface, the depth value of each pixel is
calculated.
(5) If this depth value is smaller than the corresponding
depth value in the z-buffer (i.e. the surface is closer to
the view point), the depth value in the z-buffer and the
colour value in the image buffer are replaced by the depth
value and the colour value of this surface calculated at
the pixel position.
(6) Step 4 and step 5 are repeated for the remaining
surfaces.
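The six steps above can be sketched as follows. This is a minimal illustration, not a full renderer: surfaces are represented as (colour, depth-function) pairs, an assumption made here for brevity, whereas a real implementation would rasterize polygons and interpolate depth.

```python
WIDTH, HEIGHT = 4, 3
MAX_DEPTH = 1.0  # depth of the back clipping plane

z_buffer = [[MAX_DEPTH] * WIDTH for _ in range(HEIGHT)]        # step 1
frame_buffer = [["background"] * WIDTH for _ in range(HEIGHT)]  # step 2

def render(surfaces):
    """surfaces: list of (colour, depth_fn) where depth_fn(x, y) -> z."""
    for colour, depth_fn in surfaces:                           # step 3 (one at a time)
        for y in range(HEIGHT):
            for x in range(WIDTH):
                z = depth_fn(x, y)                              # step 4
                if z < z_buffer[y][x]:                          # step 5: closer wins
                    z_buffer[y][x] = z
                    frame_buffer[y][x] = colour
```

With two surfaces P (depth 0.6) and Q (depth 0.3) covering every pixel, Q ends up in the frame buffer everywhere, since the smaller depth value is nearer the viewer.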
AREA-COHERENCE ALGORITHM OR WARNOCK'S ALGORITHM
❖ John Warnock proposed an elegant divide-and-conquer hidden
surface algorithm.
❖ This algorithm relies on the area coherence of polygons to resolve
the visibility of many polygons in image space.
❖ Depth sorting is simplified and performed only in those cases
involving the image-space overlap. This method is also called area-
subdivision method as the process involves the division of viewing
window into four equal sub-windows or sub-divisions.
HIDDEN LINE
LINE PENETRATING A SURFACE
PRIORITY ALGORITHM:
❖ The faces of objects can sometimes be given a priority ordering from
which their visibility can be computed.
❖ Once an actual viewpoint is specified, the back faces are eliminated
and priority numbers are assigned to the remaining faces to tell which
face is in front of another.
❖ Since the assignment of priorities is done according to the largest z
coordinate value of each face, the algorithm is also known as the depth
or z-algorithm. The algorithm is based on sorting all the faces in the
scene according to the largest z coordinate value of each.
The surface test as discussed earlier is used to remove the back faces.
This improves the efficiency of the priority algorithm.
In some scenes, ambiguities may result after applying the priority test.
To rectify this ambiguity, additional criteria to determine coverage
must be added to the priority algorithm.
Area oriented algorithm described here
subdivides the data set of a given scene in a
stepwise fashion until all the visible areas in the
scene are determined and displayed.
HIDDEN SURFACE ELIMINATION
• Object space algorithms: determine which objects are in front
of others
• Works for static scenes
• May be difficult to determine
• Image space algorithms: determine which object is visible at
each pixel
• Works for dynamic scenes
DEPTH-BUFFER ALGORITHM OR Z-BUFFER ALGORITHM
• Z values range from 0 (nearer to the view) to 1 (away from the view).
The z-buffer algorithm requires a z-buffer in which z values can be stored
for each pixel.
• The z-buffer is initialized to the smallest z-value, while the frame buffer is
initialized to the background pixel value.
• Both the frame and z-buffers are indexed by pixel coordinates(x,y). These
coordinates are actually screen coordinates.
HOW IT WORKS?
• For each polygon in the scene, find all the pixels (x, y) that lie inside or on
the boundaries of the polygon when projected onto the screen.
• For each of these pixels, calculate the depth z of the polygon at (x,y).
• If z > depth(x, y), the polygon is closer to the viewing eye than others
already stored in the pixel.
Initially, all positions in the depth buffer are set at 0 (minimum depth), and
the refresh buffer is initialized to the background intensity. Z=0: Zmax=1
…DEPTH-BUFFER ALGORITHM OR Z-BUFFER ALGORITHM
In this case, the z buffer is updated by setting the depth at(x,y) to
z. Similarly, the intensity of the frame buffer location corresponding to
the pixel is updated to the intensity of the polygon at (x,y).
After all the polygons have been processed, the frame
buffer contains the solution.
Z-BUFFERING
IMAGE PRECISION ALGORITHM:
• Determine which object is visible at each pixel
• Order of polygons not critical
• Works for dynamic scenes
• Takes more memory
BASIC IDEA:
• Rasterize (scan-convert) each polygon
• Keep track of a z value at each pixel
• Interpolate z value of polygon vertices during rasterization
• Replace pixel with new color if z value is smaller (i.e., if
object is closer to eye)
Z-Buffer Advantages
Simple and easy to implement
Amenable to scan-line algorithms
Can easily resolve visibility cycles
Z-Buffer Disadvantages
❑ Does not do transparency easily
❑ Aliasing occurs, since not all depth questions can be resolved
❑ Anti-aliasing solutions non-trivial
❑ Shadows are not easy
❑ Higher order illumination is hard in general
WARNOCK’S ALGORITHM
❑ This is one of the first area-coherence algorithms.
❑ Warnock’s algorithm solves the hidden-surface problem by recursively
subdividing the image into sub-images.
❑ It first attempts to solve the problem for a window that covers the entire image.
❑ If the polygons overlap, the algorithm tries to analyze the relationship between
the polygons and generates the display for the window.
❑ If the algorithm cannot decide easily, it subdivides the window into four smaller
windows.
❑ The recursion terminates if the hidden-surface
problem can be solved for all the windows or if
the window becomes as small as a single pixel on
the screen.
❑ In this case, the intensity of the pixel is chosen
equal to the polygon visible in the pixel.
❑ The subdivision process results in a window tree.
…WARNOCK’S ALGORITHM
The hidden surface algorithms can be adapted to hidden line removal
also, by displaying only the boundaries of visible surfaces.
• Warnock’s Algorithm – an area-subdivision technique
Warnock’s Algorithm: initial scene
Warnock’s Algorithm: first subdivision
Warnock’s Algorithm: second subdivision
Warnock’s Algorithm: third subdivision
Warnock’s Algorithm: fourth subdivision
Surrounding surface: A surface completely encloses the area.
Intersecting or overlapping surface: A surface that is partly inside
and partly outside the area.
Inside surface: A surface that is completely inside the area.
Outside surface: A surface that is completely outside the area.
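The four categories above can be illustrated with a small classifier. As a simplification (my assumption, not the slides'), both the surface and the window are represented by axis-aligned boxes; Warnock's algorithm proper tests the polygons themselves.

```python
def classify(surface_box, window):
    """Classify a surface's box against a window area.
    Both arguments are (xmin, ymin, xmax, ymax) tuples."""
    px0, py0, px1, py1 = surface_box
    wx0, wy0, wx1, wy1 = window
    if px0 <= wx0 and py0 <= wy0 and px1 >= wx1 and py1 >= wy1:
        return "surrounding"   # completely encloses the area
    if px0 >= wx0 and py0 >= wy0 and px1 <= wx1 and py1 <= wy1:
        return "inside"        # completely inside the area
    if px1 < wx0 or px0 > wx1 or py1 < wy0 or py0 > wy1:
        return "outside"       # completely outside the area
    return "intersecting"      # partly inside, partly outside
```

Windows whose surfaces are all "outside", or covered by a single "surrounding" surface, are resolved immediately; otherwise the window is subdivided into four and the test repeats.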
SCAN-LINE ALGORITHM OR WATKIN'S ALGORITHM
The scan-line algorithm is identical to the z-buffer algorithm except
that one scan line at a time is processed; hence, a much smaller buffer
is needed.
QUESTIONS:
1. Different types of hidden line algorithms
2. User driven, procedural and data driven animation?
3. RGB, CMY colour models
4. Back face removal algorithm, Z-Buffer algorithm,
5. Colouring importance,
6. How Gouraud shading differs from other shading techniques.
7. Interpolative shading & its methods
8. How to find visible surface determination
HIDDEN SOLID REMOVAL ALGORITHMS
The hidden line removal and hidden surface removal algorithms
described in the previous sections are applicable to hidden solid
removal of B-rep models.
Certain algorithms such as the z-buffer can be extended to CSG
models.
Ray-Tracing or Ray-Casting Algorithm
Ray-tracing is the process of tracking and plotting the path taken
by the rays of light starting at a light source to the centre of projection
(viewing position).
It is one of the most popular and powerful techniques for hidden
solid removal because of its simple, elegant and easily implemented
nature.
DISADVANTAGE:
Ray tracing's main drawback is its performance: it is computationally intensive.
❑ Edge-oriented approach
❑ Silhouette (contour-oriented) approach
❑ Area-oriented approach
HIDDEN LINE ELIMINATION ALGORITHM:
1. Depth or z algorithm,
2. Area oriented algorithm,
3. Overlay algorithm,
4. Roberts algorithm,
HIDDEN SURFACE ELIMINATION ALGORITHM:
❑ Depth-buffer or z-buffer algorithm.
❑ Area coherence algorithm,
❑ Scan-line algorithm,
❑ Depth or priority algorithm
SHADING
They determine the shade of a point of an object in terms of light sources,
surface characteristics, and the positions and orientations of the surfaces and
sources.
Two types of light can be identified:
❑ Point lighting (flashlight effect in a dark room)
❑ Ambient lighting (light of uniform brightness, caused by multiple
reflections)
….SHADING
A three-dimensional model can be displayed by assigning different
degrees of shading to its surfaces. A virtual light source is assumed,
and various shading techniques are available to determine how light
strikes each portion of the surfaces to provide a realistic image of the
object. The shading techniques are based on the recognition of distance
(depth) and shape as a function of illumination.
….SHADING
As the shading concept involves lighting and illumination as its basis,
it is essential to have a better understanding of light sources. It is
well known that all objects emit or reflect light whose origin could be
many and varied; the object itself may be emitting light.
….SHADING
ILLUMINATION OR SHADING MODELS
Illumination models simulate the way visible surfaces of object reflects light.
The shade of a point of an object in terms of light sources, surface properties and
the position and orientation of the surfaces and sources are determined by these
models.
There are two types of light sources:
LIGHT-EMITTING SOURCES & LIGHT-REFLECTING SOURCES
LIGHT-EMITTING SOURCES:
I. Ambient light
II. Point light source
LIGHT-REFLECTING SOURCES
(i) Diffuse reflection
(ii) Specular reflection.
LIGHT-EMITTING SOURCES
AMBIENT LIGHT:
It is a light of uniform
brightness and it is caused by the
multiple reflections of light from many
sources present in the environment.
The amount of ambient light
incident on each object is a constant
for all surfaces and over all directions.
POINT LIGHT SOURCE:
A light source is considered as a point source if it is specified with a
coordinate position and an intensity value. Object is illuminated in one
direction only. The light reflected from an object can be divided into
two components.
LIGHT-REFLECTING SOURCES
1. Specular reflection 2. Diffuse reflection
SHADING ALGORITHMS
Shading can be expensive, as it requires a large number of
calculations. This section deals with more efficient shading methods
for surfaces defined by polygons. Each polygon can be drawn with a
single intensity, or with a different intensity obtained at each point
on the surface. A number of shading algorithms exist, which are as
follows.
(i) Constant-intensity shading or Lambert shading
(ii) Gouraud or first-derivative shading
(iii) Phong or second-derivative shading
(iv) Half-tone shading
(i) Constant-intensity shading or Lambert
shading:
The fast and simple method for shading polygon is constant
intensity shading which is also known as Lambert shading or
faceted shading or flat shading.
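Flat shading computes one Lambert intensity per polygon. The sketch below assumes a standard textbook ambient-plus-diffuse model, I = ka·Ia + kd·Il·max(N·L, 0); the coefficient names (ka, kd) are conventional, not the slides' exact notation.

```python
import math

def flat_shade(normal, light_dir, ka=0.2, kd=0.8, ia=1.0, il=1.0):
    """One intensity for a whole polygon: ambient + Lambert diffuse term."""
    # Normalize the face normal and the direction toward the light.
    nl = math.sqrt(sum(c * c for c in normal))
    ll = math.sqrt(sum(c * c for c in light_dir))
    n = [c / nl for c in normal]
    l = [c / ll for c in light_dir]
    # Faces turned away from the light get only the ambient term.
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))
    return ka * ia + kd * il * n_dot_l
```

A polygon facing the light directly gets the full intensity (0.2 + 0.8 = 1.0 with these coefficients); one facing away receives only the ambient 0.2.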
EXISTING SHADING ALGORITHMS ARE
1. CONSTANT SHADING
2. GOURAUD OR FIRST-DERIVATIVE SHADING
3. PHONG OR SECOND-DERIVATIVE SHADING
Phong shading is much superior to flat and Gouraud shading, but it
requires a lot of processing time; the result is better output.
THIS VIRTUAL PLANT ILLUSTRATES THE ACTION OF LIGHTING CONDITIONS
ON THE SHAPE AND SIZE OF THE SIMULATED MODELS
TEXTURING:
Texturing shows the roughness or softness of the surface of the respective object.
COLOUR:
❖ Colours can be used in geometric construction.
❖ Give the realistic look of the objects.
❖ Shows the difference b/w components.
There are two types of colours: Chromatic Colour & Achromatic
Colour.
Three Characteristics of Color:
hue: the position in the colour spectrum
brightness: the luminance of the object
saturation: the purity of the colour
❑ Chromatic colours provide a multi-colour image.
❑ Achromatic colours provide only black-and-white displays.
An achromatic colour can vary over three different patterns:
white, black, and various levels of gray, which is a combination
of white and black. These variations are achieved by assigning
different intensity values.
An intensity value of 1 provides white; 0 displays black.
…COLOUR
PREPARE THE LIST OF VARIOUS COLOURS
IN COMPUTER GRAPHICS:
Colour      Wavelength (nm)
1. Violet   400
2. Blue     450
3. Cyan     500
4. Green    550
5. Yellow   600
6. Orange   650
7. Red      700
COLOR MODELS:
The description of color generally includes three properties:
❑ Hue,
❑ Saturation
❑ Brightness,
defining a position in the color spectrum, purity and the intensity value of a color.
COLOR MODELS:
A colour model is an orderly system for creating a whole range of colours from a
small set of primary colours.
There are two types of colour models:
❑ Subtractive
❑ Additive
Additive colour models use light to display colour, while subtractive models use
printing inks. Examples of additive colour: the electro-luminance produced by CRT
or TV monitors and LCD projectors (transmitted light).
Colours perceived in subtractive models are the result of reflected light.
There are a number of colour models available. Some of the important
colour models are as follows.
1. RGB (Red, Green, Blue) color model
2. CMY (Cyan, Magenta, Yellow) color model
3. YIQ color model
4. HSV (hue, saturation, value) color model, also called the HSB (brightness)
model.
Three hardware-oriented color models are RGB (color CRT monitors),
YIQ (TV color system) and CMY (certain color-printing devices).
DISADVANTAGE:
They do not relate directly to intuitive color notions of hue,
saturation, and brightness.
Due to the different absorption curves of the cones, colors are seen as
variable combinations of the so-called primary colors: red, green, and
blue
Their wavelengths were standardized by the CIE in 1931:
red=700 nm, green=546.1 nm, and blue=435.8 nm
The primary colors can be added to produce the secondary colors of
light, magenta
(R+B),
cyan (G+B), and
yellow (R+G)
PRIMARY AND SECONDARY COLORS
ADDITIVE COLOR MODEL SUBTRACTIVE COLOR MODEL
RGB COLOR MODEL
R=G=B=1 --------- WHITE COLOR
R=G=B=0 --------- BLACK COLOR
If R = G = B = 0.5, the colour has the same hue as white but only half
the intensity, so it appears gray.
If R = G = 1 and B = 0 (full red and green with no blue), the result is yellow.
RGB model is more suitable for quantifying direct light such as the one
generated in a CRT monitor, TV screens.
CMY COLOR MODEL
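The slide gives no formula here; the commonly used relation between the two models (a standard fact, not taken from the slides) is that CMY is the complement of RGB: CMY = (1, 1, 1) − RGB, with all components in [0, 1].

```python
def rgb_to_cmy(r, g, b):
    """Subtractive complement of an additive colour: CMY = (1,1,1) - RGB."""
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    """Inverse conversion: RGB = (1,1,1) - CMY."""
    return 1.0 - c, 1.0 - m, 1.0 - y
```

For example, pure red (1, 0, 0) in RGB corresponds to (0, 1, 1) in CMY: full magenta and yellow ink, no cyan.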
YIQ COLOR MODEL
The YIQ model takes advantage of the human eye's response characteristics.
The human eye is more sensitive to luminance than to colour information,
so in the NTSC video signal about 4 MHz of bandwidth is allocated to Y.
The eye is also more sensitive in the orange-blue range (I) than in the
green-magenta range (Q), so a bandwidth of 1.5 MHz is allocated to the I
parameter and 0.6 MHz to the Q parameter.
The conversion from YIQ space to RGB space is achieved by the
following transformation.
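The transformation matrix itself did not survive the conversion of these slides; the sketch below uses the commonly quoted NTSC coefficients (standard published values, not necessarily the lecture's exact numbers).

```python
def rgb_to_yiq(r, g, b):
    """NTSC RGB -> YIQ with the commonly quoted coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # orange-blue axis
    q = 0.211 * r - 0.523 * g + 0.312 * b   # green-magenta axis
    return y, i, q
```

Note that white, R = G = B = 1, maps to Y = 1 with I = Q = 0, since a pure gray carries luminance but no chrominance.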
The YIQ model is used for raster colour systems.
HSV COLOR MODEL
HSL COLOR MODEL
Colors in computer graphics and vision
• How to specify a color?
– set of coordinates in a color space
• Several Color spaces
• Relation to the task/perception
– blue for hot water
COLOR MODELS
The purpose of a color model
(or color space or color system) is to facilitate the
specification of colors in some standard way
A color model provides a coordinate system and a
subspace in it where each color is represented by
a single point
Color spaces
• Device based color spaces:
– color spaces based on the internal of
the device: RGB, CMYK, YCbCr
• Perception based color spaces:
– color spaces made for interaction: HSV
• Conversion between them?
ANIMATION
❑ Animation is a process in which the illusion of movement is achieved by
creating and displaying a sequence of images with elements that appear to
have a motion.
❑ Animation is a valuable extension of modelling and simulation in the world
of science and engineering.
❑ A useful visualization aid for many modelling and simulation applications.
Each still image is called a frame.
Animation may also be defined as the process of dynamically
creating a series of frames of a set of objects in which each frame is an
alteration of the previous frame.
In order to animate something, the animator has to be able to
specify directly or indirectly how the 'thing' has to move through time
and space.
Animation can be achieved in the following ways:
(a) By changing the position of various elements in the scene at
different time frames in a particular sequence
(b) By transforming an object to another object at different time frames
in a particular sequence
(c) By changing the colour of the object at different time frames
in a particular sequence
(d) By changing the light intensities of the scene at different time
frames in a particular sequence.
Degrees of freedom for a stationary,
single-arm robot
MORPHING:
The transformation of object shapes from one form to another is
called morphing, which is a shortened form of metamorphosis.
Morphing methods can be applied to any motion or transition
involving a change in shape.
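A minimal morphing sketch, under the assumption (mine, for illustration) that both shapes are polygons with the same vertex count: each in-between frame is a linear interpolation of corresponding vertices, with t running from 0 (source shape) to 1 (target shape).

```python
def morph(src, dst, t):
    """Linearly interpolate two vertex lists of equal length.
    t = 0 gives the source shape, t = 1 the target shape."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(src, dst)]
```

Production morphing systems first establish a vertex correspondence (e.g. by splitting edges so both shapes have equal vertex counts) before interpolating.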
COMPUTER ANIMATION LANGUAGES:
The design and control of animation sequences are handled with a
set of animation routines, written in general-purpose languages such as:
❖ C,
❖ Lisp,
❖ Pascal, or FORTRAN.
TRANSFORMING A TRIANGLE INTO A QUADRILATERAL
MORPHING A CAR INTO A TIGER
AMONG THESE DIFFERENT METHODS
❑ Conventionally or traditionally using manual work or using computer
multimedia for producing movies, cartoons, logos and advertisements.
In conventional or traditional method, most of the animation was done
by hand.
❑ All frames in an animation had to be drawn by hand. Since each second
of animation requires 24 frames (film), the amount of manual work was enormous.
❑ No calculations or physical principles required in this method.
❑ To create cartoon characters.
These animations use modelling of muscles and human body
kinematics to create facial expressions, deformable body shapes,
unrealistic fight sequences, transformations, etc.
COMPUTER ANIMATION
Animation is the process of illusion of continuous movement of objects created
by a series of still images with elements that appear to have motion. Each still image is
called a frame.
Animation may also be defined as the process of dynamically creating a series
of frames of a set of objects in which each frame is an alteration of the previous frame.
Animation can be achieved by the following ways.
(a) By changing the position of various elements in the scene at different time frames in
a particular sequence
(b) By transforming an object to other object at different time frames in a particular
sequence
(c) By changing the colour of the object at different time frames in a particular range
(d) By changing the light intensities of the scene at different time frames in a
particular sequence.
COMPUTER ANIMATION
Applications of Animation:
There are several areas where the animation can be extensively used. These areas
can be arbitrarily divided into five categories.
1. Television:
TV has used it for titles, logos and inserts as a powerful motivator for the rapid
development of animation. But its main uses are in cartoons for children and
commercials for a general audience.
2. Cinema:
Animation as a cinematic technique has always held an important role in this
industry. Complete animation films are still produced by the cinema industry. But,
it is also a good way of making special effects and it is frequently used for titles
and generics.
3. Government:
Animation is an excellent method of mass communication and governments are of
….Applications of Animation:
4. Education and research:
Animation can be extensively used for educational purposes. Fundamental
concepts are easily explained to students using visual effects involving motion.
Finally, animation can be a great help to research teams because it can simulate
the situations, e.g., in medicine or science.
5. Business:
The role of animation in business is very similar to its role in government. Animation is useful for marketing, personnel education and public relations.
6. Engineering:
Engineers do not require the realistic images the entertainment field demands, but it must be possible to identify each separate part unambiguously, and the animation must be produced quickly.
CONVENTIONAL ANIMATION
Conventional animation is generally based on a frame-by-frame technique. It is very expensive in terms of manpower, time and money. This type of animation is oriented mainly towards the production of two-dimensional cartoons.
Every frame is a flat picture, and it is purely hand-drawn. These cartoons are complex to produce and may involve large teams, such as those of Walt Disney or Hanna-Barbera Productions. It is helpful to understand the various steps involved in conventional animation, which can be described with the example of making an animated film as illustrated in Figure.
A typical task in an animation specification is scene description.
COMPUTER ANIMATION
As conventional animation has a number of limitations, such as being time consuming and expensive, computer animation is widely used as a solution to these limitations. Computer animation generally refers to any time sequence of visual changes in a scene produced by using computers and related software.
CLASSIFICATION OF COMPUTER ANIMATION:
There are a number of different ways of classifying computer animation systems. First, we can define the various levels of systems.
Level 1:
These systems are used only to interactively create, paint, store, retrieve and modify drawings. They do not take much time and are basically just graphics editors used only by designers.
Level 2:
These systems can compute "in-betweens" and move an object along a trajectory. They generally take more time and are mainly intended to be used by, or even replace, in-betweeners.
Level 3:
These systems provide the animator with operations which can be applied to objects, for example, translation or rotation. They may also include virtual camera operations such as zoom, pan or tilt.
Level 4:
These systems provide a means of defining actors, i.e., objects which possess their own animation. The motion of these objects may also be constrained.
Level 5:
These systems are extensible and can learn as they work. With each use, such a system becomes more powerful and "intelligent".
Computer animation can be further classified into two types based on its application field: (i) entertainment animation and (ii) engineering animation.
(I) ENTERTAINMENT ANIMATION:
Entertainment animation is mainly used to make movies and advertisements. The procedure is similar to the conventional animation procedure described in Figure.
The drawings of "key frames" and "in-betweens" are created by using computer generation techniques. The drawings of key frames are created by using various interactive graphics software programs which utilize different transformation techniques such as rotation, reflection and translation.
Entertainment animation can be further classified into the following two types:
(a) COMPUTER-ASSISTED (two-dimensional) animation
(b) MODELED (three-dimensional) animation.
COMPUTER-ASSISTED ANIMATION, sometimes called key frame animation, consists mainly of assisting conventional animation by computer. Key frame animation systems are typically of level 2.
MODELED ANIMATION means the drawing and manipulation of more general representations which move about in three-dimensional space. This process is very complex without a computer. Modeled animation systems are generally of level 3 to level 4. Systems of level 5 are not yet available.
(II) ENGINEERING ANIMATION:
CAD/CAM applications use the animation technique extensively in a variety of tasks such as generating NC tool paths, simulating automated assembly and disassembly, animating finite element results, mechanism movements, rapid prototyping etc.
Engineering animation is mainly an extension of modeled animation, but it is more science oriented than art and image oriented. No real-time image simulation is required as in the case of entertainment animation; in many cases wireframe models alone can give satisfactory results. However, engineering animation systems should meet the following criteria:
(a) exact representation and display of data, (b) high-speed and automatic production of animation, and (c) low host dependency.
ANIMATION TECHNIQUES:
(i) Keyframe animation
(ii) Linear interpolation
(iii) Curved interpolation
(iv) Interpolation of position and orientation
(v) Interpolation of shape
(vi) Interpolation of attributes
ANIMATION TECHNIQUES: Keyframe animation:
A key frame is defined by its particular moment in the animation timeline as well as by all the parameters or attributes associated with it.
A sequence with three keyframes and two interpolations, one quicker than the other.
Keyframe techniques have not proven their applicability for …
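The keyframe idea above — key values at particular moments in the timeline, with in-betweens interpolated, and one interpolation quicker than the other — can be sketched as follows. This is a minimal illustration; the function names and keyframe data are my own, not from the source:

```python
# Sketch of keyframe animation with linear interpolation ("in-betweens").
# Keyframes are (frame_number, value) pairs. Between keys 2 and 3 the
# motion is quicker simply because fewer in-between frames cover the
# same distance.

def lerp(a, b, t):
    """Linear interpolation between values a and b for t in [0, 1]."""
    return a + (b - a) * t

def in_betweens(keyframes):
    """Expand (frame, value) keyframes into one value per frame."""
    frames = []
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        for f in range(f0, f1):
            t = (f - f0) / (f1 - f0)  # normalised time within the span
            frames.append(lerp(v0, v1, t))
    frames.append(keyframes[-1][1])   # include the final key value
    return frames

# Three keyframes: a slow span (10 frames), then a quick span (5 frames).
keys = [(0, 0.0), (10, 100.0), (15, 200.0)]
values = in_betweens(keys)
print(len(values))  # 16
print(values[5])    # 50.0 (halfway through the first, slower span)
```

Curved interpolation would replace `lerp` with a spline through the key values, giving smoother changes of speed at the keyframes.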
ANIMATION TYPES
In the preceding sections, animation systems were classified on the basis of their role in the animation process. Another consideration is the mode of production.
Computer animation is just a special case of animation, defined as a succession of images, each differing from the one preceding it. The computer is used to produce each frame individually to be photographed; in other words, the "film" is produced directly on a terminal.
Animations are classified into the following three types:
(i) Frame-buffer animation,
(ii) Frame-by-frame animation, &
(iii) Real-time playback animation.
SIMULATION APPROACH:
This approach is based on the physical laws which control the motion or the dynamic behaviour of the object to be animated.
HYBRID APPROACH:
Entertainment animation is restricted in the pure simulation approach, even though that approach is very attractive for describing the dynamic behaviour of the object; a hybrid approach therefore combines both.
CAMERA ANIMATION:
The camera plays an important role in computer animation because its motion and changes in some of its attributes can have a powerful storytelling effect.
The point of view of a camera and the type of camera shot are both defined by the position and orientation of the camera.
All camera motions require a change in the position and orientation of the camera.
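As a sketch of how position and orientation together define the camera's point of view, a common "look-at" construction derives the camera's orientation basis from its position and a target point. The names and values below are illustrative, not from the source:

```python
# Sketch: derive a camera's orientation (forward, right, up vectors)
# from its position ("eye") and the point it looks at ("target").
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at(eye, target, world_up=(0.0, 1.0, 0.0)):
    """Return the camera's forward, right and up basis vectors."""
    forward = normalize(tuple(t - e for t, e in zip(target, eye)))
    right = normalize(cross(forward, world_up))
    up = cross(right, forward)
    return forward, right, up

# A camera at the origin looking down the world -z axis:
f, r, u = look_at((0.0, 0.0, 0.0), (0.0, 0.0, -5.0))
print(f)  # (0.0, 0.0, -1.0)
```

Animating the camera then amounts to interpolating `eye` and `target` over time, exactly as object positions are interpolated between keyframes.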

More Related Content

Similar to Computer Aided Design visual realism notes

Real time implementation of object tracking through
Real time implementation of object tracking throughReal time implementation of object tracking through
Real time implementation of object tracking througheSAT Publishing House
 
Two marks with answers ME6501 CAD
Two marks with answers ME6501 CADTwo marks with answers ME6501 CAD
Two marks with answers ME6501 CADPriscilla CPG
 
Hidden Surface Removal methods.pptx
Hidden Surface Removal methods.pptxHidden Surface Removal methods.pptx
Hidden Surface Removal methods.pptxbcanawakadalcollege
 
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...IJERA Editor
 
Intelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIntelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIRJET Journal
 
3d vision.pptxvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
3d vision.pptxvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv3d vision.pptxvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
3d vision.pptxvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvshesnasuneer
 
Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...TELKOMNIKA JOURNAL
 
IRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation PrincipleIRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation PrincipleIRJET Journal
 
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an ObjectAnkur Tyagi
 
Simulation of collision avoidance by navigation
Simulation of collision avoidance by navigationSimulation of collision avoidance by navigation
Simulation of collision avoidance by navigationeSAT Publishing House
 
Wujanz_Error_Projection_2011
Wujanz_Error_Projection_2011Wujanz_Error_Projection_2011
Wujanz_Error_Projection_2011Jacob Collstrup
 
Vision based non-invasive tool for facial swelling assessment
Vision based non-invasive tool for facial swelling assessment Vision based non-invasive tool for facial swelling assessment
Vision based non-invasive tool for facial swelling assessment University of Moratuwa
 
Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...ijcsa
 
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...CSCJournals
 
Robotic navigation algorithm with machine vision
Robotic navigation algorithm with machine vision Robotic navigation algorithm with machine vision
Robotic navigation algorithm with machine vision IJECEIAES
 
Image Segmentation Using Pairwise Correlation Clustering
Image Segmentation Using Pairwise Correlation ClusteringImage Segmentation Using Pairwise Correlation Clustering
Image Segmentation Using Pairwise Correlation ClusteringIJERA Editor
 
Gesture Recognition Review: A Survey of Various Gesture Recognition Algorithms
Gesture Recognition Review: A Survey of Various Gesture Recognition AlgorithmsGesture Recognition Review: A Survey of Various Gesture Recognition Algorithms
Gesture Recognition Review: A Survey of Various Gesture Recognition AlgorithmsIJRES Journal
 

Similar to Computer Aided Design visual realism notes (20)

Real time implementation of object tracking through
Real time implementation of object tracking throughReal time implementation of object tracking through
Real time implementation of object tracking through
 
Two marks with answers ME6501 CAD
Two marks with answers ME6501 CADTwo marks with answers ME6501 CAD
Two marks with answers ME6501 CAD
 
Hidden Surface Removal.pptx
Hidden Surface Removal.pptxHidden Surface Removal.pptx
Hidden Surface Removal.pptx
 
Hidden Surface Removal methods.pptx
Hidden Surface Removal methods.pptxHidden Surface Removal methods.pptx
Hidden Surface Removal methods.pptx
 
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
 
Intelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIntelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial Intelligence
 
3d vision.pptxvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
3d vision.pptxvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv3d vision.pptxvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
3d vision.pptxvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
 
Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...
 
IRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation PrincipleIRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation Principle
 
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
 
Simulation of collision avoidance by navigation
Simulation of collision avoidance by navigationSimulation of collision avoidance by navigation
Simulation of collision avoidance by navigation
 
L010427275
L010427275L010427275
L010427275
 
Wujanz_Error_Projection_2011
Wujanz_Error_Projection_2011Wujanz_Error_Projection_2011
Wujanz_Error_Projection_2011
 
Vision based non-invasive tool for facial swelling assessment
Vision based non-invasive tool for facial swelling assessment Vision based non-invasive tool for facial swelling assessment
Vision based non-invasive tool for facial swelling assessment
 
E017443136
E017443136E017443136
E017443136
 
Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...
 
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...
 
Robotic navigation algorithm with machine vision
Robotic navigation algorithm with machine vision Robotic navigation algorithm with machine vision
Robotic navigation algorithm with machine vision
 
Image Segmentation Using Pairwise Correlation Clustering
Image Segmentation Using Pairwise Correlation ClusteringImage Segmentation Using Pairwise Correlation Clustering
Image Segmentation Using Pairwise Correlation Clustering
 
Gesture Recognition Review: A Survey of Various Gesture Recognition Algorithms
Gesture Recognition Review: A Survey of Various Gesture Recognition AlgorithmsGesture Recognition Review: A Survey of Various Gesture Recognition Algorithms
Gesture Recognition Review: A Survey of Various Gesture Recognition Algorithms
 

More from KushKumar293234

Personal-Management-and-Industrial-Relation-2.pdf
Personal-Management-and-Industrial-Relation-2.pdfPersonal-Management-and-Industrial-Relation-2.pdf
Personal-Management-and-Industrial-Relation-2.pdfKushKumar293234
 
DOUBLE SLIDER INV_031220.pptx
DOUBLE SLIDER INV_031220.pptxDOUBLE SLIDER INV_031220.pptx
DOUBLE SLIDER INV_031220.pptxKushKumar293234
 
STRESS MANAGEMENT IN WORKPLACE 2.pptx
STRESS MANAGEMENT IN WORKPLACE 2.pptxSTRESS MANAGEMENT IN WORKPLACE 2.pptx
STRESS MANAGEMENT IN WORKPLACE 2.pptxKushKumar293234
 

More from KushKumar293234 (6)

Personal-Management-and-Industrial-Relation-2.pdf
Personal-Management-and-Industrial-Relation-2.pdfPersonal-Management-and-Industrial-Relation-2.pdf
Personal-Management-and-Industrial-Relation-2.pdf
 
Turbine.pptx
Turbine.pptxTurbine.pptx
Turbine.pptx
 
DOUBLE SLIDER INV_031220.pptx
DOUBLE SLIDER INV_031220.pptxDOUBLE SLIDER INV_031220.pptx
DOUBLE SLIDER INV_031220.pptx
 
Module 3.pptx
Module 3.pptxModule 3.pptx
Module 3.pptx
 
STRESS MANAGEMENT IN WORKPLACE 2.pptx
STRESS MANAGEMENT IN WORKPLACE 2.pptxSTRESS MANAGEMENT IN WORKPLACE 2.pptx
STRESS MANAGEMENT IN WORKPLACE 2.pptx
 
RTI (1).pptx
RTI (1).pptxRTI (1).pptx
RTI (1).pptx
 

Recently uploaded

Independent Solar-Powered Electric Vehicle Charging Station
Independent Solar-Powered Electric Vehicle Charging StationIndependent Solar-Powered Electric Vehicle Charging Station
Independent Solar-Powered Electric Vehicle Charging Stationsiddharthteach18
 
Artificial Intelligence in due diligence
Artificial Intelligence in due diligenceArtificial Intelligence in due diligence
Artificial Intelligence in due diligencemahaffeycheryld
 
Interfacing Analog to Digital Data Converters ee3404.pdf
Interfacing Analog to Digital Data Converters ee3404.pdfInterfacing Analog to Digital Data Converters ee3404.pdf
Interfacing Analog to Digital Data Converters ee3404.pdfragupathi90
 
analog-vs-digital-communication (concept of analog and digital).pptx
analog-vs-digital-communication (concept of analog and digital).pptxanalog-vs-digital-communication (concept of analog and digital).pptx
analog-vs-digital-communication (concept of analog and digital).pptxKarpagam Institute of Teechnology
 
Circuit Breakers for Engineering Students
Circuit Breakers for Engineering StudentsCircuit Breakers for Engineering Students
Circuit Breakers for Engineering Studentskannan348865
 
engineering chemistry power point presentation
engineering chemistry  power point presentationengineering chemistry  power point presentation
engineering chemistry power point presentationsj9399037128
 
Passive Air Cooling System and Solar Water Heater.ppt
Passive Air Cooling System and Solar Water Heater.pptPassive Air Cooling System and Solar Water Heater.ppt
Passive Air Cooling System and Solar Water Heater.pptamrabdallah9
 
Artificial intelligence presentation2-171219131633.pdf
Artificial intelligence presentation2-171219131633.pdfArtificial intelligence presentation2-171219131633.pdf
Artificial intelligence presentation2-171219131633.pdfKira Dess
 
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdfInvolute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdfJNTUA
 
Basics of Relay for Engineering Students
Basics of Relay for Engineering StudentsBasics of Relay for Engineering Students
Basics of Relay for Engineering Studentskannan348865
 
UNIT-2 image enhancement.pdf Image Processing Unit 2 AKTU
UNIT-2 image enhancement.pdf Image Processing Unit 2 AKTUUNIT-2 image enhancement.pdf Image Processing Unit 2 AKTU
UNIT-2 image enhancement.pdf Image Processing Unit 2 AKTUankushspencer015
 
Final DBMS Manual (2).pdf final lab manual
Final DBMS Manual (2).pdf final lab manualFinal DBMS Manual (2).pdf final lab manual
Final DBMS Manual (2).pdf final lab manualBalamuruganV28
 
electrical installation and maintenance.
electrical installation and maintenance.electrical installation and maintenance.
electrical installation and maintenance.benjamincojr
 
Filters for Electromagnetic Compatibility Applications
Filters for Electromagnetic Compatibility ApplicationsFilters for Electromagnetic Compatibility Applications
Filters for Electromagnetic Compatibility ApplicationsMathias Magdowski
 
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdf
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdfInstruct Nirmaana 24-Smart and Lean Construction Through Technology.pdf
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdfEr.Sonali Nasikkar
 
Dynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptxDynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptxMustafa Ahmed
 
21scheme vtu syllabus of visveraya technological university
21scheme vtu syllabus of visveraya technological university21scheme vtu syllabus of visveraya technological university
21scheme vtu syllabus of visveraya technological universityMohd Saifudeen
 
UNIT 4 PTRP final Convergence in probability.pptx
UNIT 4 PTRP final Convergence in probability.pptxUNIT 4 PTRP final Convergence in probability.pptx
UNIT 4 PTRP final Convergence in probability.pptxkalpana413121
 
Research Methodolgy & Intellectual Property Rights Series 1
Research Methodolgy & Intellectual Property Rights Series 1Research Methodolgy & Intellectual Property Rights Series 1
Research Methodolgy & Intellectual Property Rights Series 1T.D. Shashikala
 
5G and 6G refer to generations of mobile network technology, each representin...
5G and 6G refer to generations of mobile network technology, each representin...5G and 6G refer to generations of mobile network technology, each representin...
5G and 6G refer to generations of mobile network technology, each representin...archanaece3
 

Recently uploaded (20)

Independent Solar-Powered Electric Vehicle Charging Station
Independent Solar-Powered Electric Vehicle Charging StationIndependent Solar-Powered Electric Vehicle Charging Station
Independent Solar-Powered Electric Vehicle Charging Station
 
Artificial Intelligence in due diligence
Artificial Intelligence in due diligenceArtificial Intelligence in due diligence
Artificial Intelligence in due diligence
 
Interfacing Analog to Digital Data Converters ee3404.pdf
Interfacing Analog to Digital Data Converters ee3404.pdfInterfacing Analog to Digital Data Converters ee3404.pdf
Interfacing Analog to Digital Data Converters ee3404.pdf
 
analog-vs-digital-communication (concept of analog and digital).pptx
analog-vs-digital-communication (concept of analog and digital).pptxanalog-vs-digital-communication (concept of analog and digital).pptx
analog-vs-digital-communication (concept of analog and digital).pptx
 
Circuit Breakers for Engineering Students
Circuit Breakers for Engineering StudentsCircuit Breakers for Engineering Students
Circuit Breakers for Engineering Students
 
engineering chemistry power point presentation
engineering chemistry  power point presentationengineering chemistry  power point presentation
engineering chemistry power point presentation
 
Passive Air Cooling System and Solar Water Heater.ppt
Passive Air Cooling System and Solar Water Heater.pptPassive Air Cooling System and Solar Water Heater.ppt
Passive Air Cooling System and Solar Water Heater.ppt
 
Artificial intelligence presentation2-171219131633.pdf
Artificial intelligence presentation2-171219131633.pdfArtificial intelligence presentation2-171219131633.pdf
Artificial intelligence presentation2-171219131633.pdf
 
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdfInvolute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
 
Basics of Relay for Engineering Students
Basics of Relay for Engineering StudentsBasics of Relay for Engineering Students
Basics of Relay for Engineering Students
 
UNIT-2 image enhancement.pdf Image Processing Unit 2 AKTU
UNIT-2 image enhancement.pdf Image Processing Unit 2 AKTUUNIT-2 image enhancement.pdf Image Processing Unit 2 AKTU
UNIT-2 image enhancement.pdf Image Processing Unit 2 AKTU
 
Final DBMS Manual (2).pdf final lab manual
Final DBMS Manual (2).pdf final lab manualFinal DBMS Manual (2).pdf final lab manual
Final DBMS Manual (2).pdf final lab manual
 
electrical installation and maintenance.
electrical installation and maintenance.electrical installation and maintenance.
electrical installation and maintenance.
 
Filters for Electromagnetic Compatibility Applications
Filters for Electromagnetic Compatibility ApplicationsFilters for Electromagnetic Compatibility Applications
Filters for Electromagnetic Compatibility Applications
 
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdf
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdfInstruct Nirmaana 24-Smart and Lean Construction Through Technology.pdf
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdf
 
Dynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptxDynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptx
 
21scheme vtu syllabus of visveraya technological university
21scheme vtu syllabus of visveraya technological university21scheme vtu syllabus of visveraya technological university
21scheme vtu syllabus of visveraya technological university
 
UNIT 4 PTRP final Convergence in probability.pptx
UNIT 4 PTRP final Convergence in probability.pptxUNIT 4 PTRP final Convergence in probability.pptx
UNIT 4 PTRP final Convergence in probability.pptx
 
Research Methodolgy & Intellectual Property Rights Series 1
Research Methodolgy & Intellectual Property Rights Series 1Research Methodolgy & Intellectual Property Rights Series 1
Research Methodolgy & Intellectual Property Rights Series 1
 
5G and 6G refer to generations of mobile network technology, each representin...
5G and 6G refer to generations of mobile network technology, each representin...5G and 6G refer to generations of mobile network technology, each representin...
5G and 6G refer to generations of mobile network technology, each representin...
 

Computer Aided Design visual realism notes

  • 1. UNIT III - VISUAL REALISM ❖ Hidden Line removal algorithms ❖ Hidden Surface removal algorithms ❖ Hidden Solid removal algorithms ❖ Shading ❖ Colouring ❖ Computer animation.
  • 3. VISUAL REALISM HAS TWO COMPONENTS: Geometric realism - The virtual object looks like the real object. Illumination realism - Refers to the fidelity of the lighting model. (more realistic and visually appealing image can be produced.) VISUALISATION IS OF TWO TYPES: ❑ Visualisation in geometric modelling- i.e., Geometric models of objects are displayed. ❑ Visualization in scientific computing- Ie., Results related to science and engineering are displayed. INTRODUCTION – VISUAL REALISM
  • 4. IN GEOMETRIC MODELING: An effective and less expensive way of reviewing various design alternatives. For design of complex surfaces, like as those in automobile bodies and aircraft frames. IN SCIENTIFIC COMPUTING: For displaying results of finite element analysis, heat-transfer analysis, computational fluid dynamics and structural dynamics and vibration. In medical field for joint ball replacement operations.
  • 5. ❖ The performance of any CAD/CAM systems is evaluated on the basis of their ability of displaying realistic visual image. ❖ Visualization can be defined as a technique for creating images, diagrams or animations to communicate ideas. ❖ The visual realism concentrates basically on the visual appearance of objects. ❖ Various techniques of C.G are applied on the model to make it appear as realistic as possible. INTRODUCTION – VISUAL REALISM
  • 6. Projection and shading are two most common methods for visualizing geometric models. There are two important and popular form of visualization methods are, such as animation and simulation. INTRODUCTION – VISUAL REALISM
  • 7.
  • 9. MAJOR PROBLEM IN VISUALIZATION An object consist of number of vertices, edges, surfaces which are represented realistically in 3D modeling. The major problem in visualization of object is representing the depth of 3D object into 2D screens. Projecting 3D object into 2D screen displays thecomplex lines and curves which may not give a clear picture. The first step towards visual realism isto eliminate these ambiguities which can be obtained using hidden line removal (HLR), hidden surface removal (HSR) and hidden solid removal approaches.
  • 10. MODEL CLEAN-UP Model clean-up consists of three processes in sequence: (1) Generating orthographic views of the model, (2) Eliminating hidden lines in each view by applying visual realism principle, (3) Changing the necessary hidden lines as dashed line. Advantage: User has control over which entities should be removed and which should be dashed. Disadvantage: Tedious, time consuming and error-prone process is a big. Depth information may be lost when hidden lines are eliminated completely. Manual model clean up is a commonly applicable to wire frame models.
  • 11. To display all parts of the object to the viewer, simply as a collection of lines. For real objects, ❑ Internal details, ❑ Back faces, ❑ Shadow will be cast, ❑ Surfaces will take on different intensities, ❑ According to local lighting conditions.
  • 12.
  • 13. OBJECT SPACE (OBJECT-PRECISION) IMAGE- SPACE METHOD There are two approaches for removing hidden lines & surface problems − 1. Object-Space method. 2. Image-space method. 3. Hybrid. .
  • 14. THREE APPROACHES FOR VISIBLE LINE AND SURFACE DETERMINATION: 1. OBJECT SPACE METHOD: Determines which parts of any objects are visible by using spatial and geometrical relationships. It operates with object database precision. The Object-space method is implemented in physical coordinate system. Hidden Line Removal Algorithms 2. IMAGE SPACE METHOD: Determines what is visible at each image pixel. It operates with image resolution precision. (Adaptable using in raster displays.) Image-space method is implemented in screen coordinate system. Hidden Surface Removal Algorithms 3. HYBRID: combines both types of object space and image space.
  • 15. OBJECT SPACE METHOD IMAGE SPACE METHOD ❑ In object-space method, the object is described in the physical coordinate system. It compares the objects and parts to each other within the scene definition to determine which surfaces are visible. ❑ Object-space methods are generally used in hidden line removal algorithms. ❑ Image-space method is implemented in thescreen coordinatesystem in which the objects are viewed. ❑ In an image-space algorithm, the visibility is decided point by point at each pixel position on the view plane. Hence, zooming of the object does not degrade its quality of display. ❑ Most of the hidden line and hidden surface algorithms use the image-space method.
  • 16.
  • 18. HIDDEN LINE ELIMINATION PROCESS is an whic h a given Sorting operatio n arrange s set of record s VISIBILITY TECHNIQUE: ❑ Normally checks for overlapping of pairs of polygons. If Overlapping the occurs, depth comparisons are used to determine. according to the selected criterion.
  • 22. Removing hidden line and surfaces greatly improve the visualization of objects by displaying clear and more realistic images. H.L.E stated as, "For a given three dimensional scene, a given viewing point and a given direction eliminate from an appropriate two dimensional projection of the edges and faces which the observer cannot see". Various hidden line and hidden surface removal algorithms may be classified into: 1. object-space (object-precision) method(the object is described in the physical coordinate system) 2. Image-space method (the visibility is decided point by point at each pixel position on the view plane) - Rastor algorithms & Vector algorithms. 3. Hybrid methods (combination of both object-space and image-space methods).
  • 23. VISIBILITY TECHNIQUES Therefore, the following visibility techniques are developed for improving the efficiency of algorithm: 1. Minimax test 2. Surface test 3. Edge intersection 4. Segment comparisons
  • 24. MINIMAX (Bounding Box) TEST Minimax test compares whether two polygonsoverlap or not. Here, each polygon is enclosed in a box by finding its maximum and minimum x and y coordinates. Therefore, it is termed as minimax test. Then these boxes are compared with each other to identify the intersection for any two boxes. If there is no intersection of two boxes as shown in Figure, their surrounding polygons do not overlap and hence, no elements are removed. If two boxes intersect, the polygons may or may not overlap as shown in Figure.
  • 26. BACK FACE / SURFACE TEST: In a solid object, there are surfaces which are facing the viewer (front faces) and there are surfaces which are facing away from the viewer (back faces). These back faces contribute approximately half of the total number of surfaces. A back-face test is used to determine whether a surface faces toward or away from the viewer. This test provides an efficient way of implementing the depth comparison to remove the faces which are not visible in a specific viewport.
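A minimal sketch of the test, assuming the viewer looks along −z and front faces have counter-clockwise vertex order (both conventions are assumptions, not stated on the slide):

```python
def face_normal(p0, p1, p2):
    # cross product of two edge vectors gives the face normal
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def is_back_face(p0, p1, p2, view_dir=(0, 0, -1)):
    """A face is a back face if its normal points away from the viewer,
    i.e. N . V >= 0 for viewing direction V."""
    n = face_normal(p0, p1, p2)
    return sum(n[i] * view_dir[i] for i in range(3)) >= 0
```

Under these conventions, culling back faces before any depth comparison removes roughly half the faces in one pass.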
  • 28. EDGE INTERSECTION ❑ In this technique, hidden line algorithms first calculate the edge intersections in two dimensions. ❑ These intersections are used to determine edge visibility. Figure shows the concept of the edge intersection technique. ❑ The two edges intersect at the point where y2 - y1 = 0. This point of intersection is then used for segmentation and handled with the visibility concepts discussed earlier.
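A sketch of the 2-D segment intersection computation, using the standard parametric form (the slide's y2 − y1 notation is a special case of solving the same pair of equations):

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1-p2 and p3-p4, or None if they
    are parallel or meet outside either segment."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:
        return None                      # parallel edges never intersect
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:      # inside both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```

Each intersection found this way splits an edge into segments whose visibility can then be decided independently.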
  • 29. SEGMENT COMPARISON: ❖ This visibility technique is used to solve hidden surface problems in image space. Hence, the display screen is divided into number of small segments. ❖ In image display, scan lines are arranged on display screen from top to bottom and left to right. ❖ This technique tends to solve the problem piecewise and not as a total image. The scan line is divided into spans. ❖ To compute the depth, plane equations are used.
  • 30. HIDDEN LINE REMOVAL The appearance of the object is greatly complicated by the visibility of hidden details. Therefore, it is necessary to remove hidden details such as edges and surfaces. One of the most challenging problems considered in computer graphics is the determination of hidden edges and surfaces.
  • 31. HIDDEN LINE REMOVAL ALGORITHM (i) Area-oriented algorithms. (ii) Overlay algorithm (iii) Robert's algorithm
  • 33. AREA-ORIENTED ALGORITHM ❖ This algorithm is based on the subdivision of a given data set in a stepwise fashion until all visible areas in the scene are determined and displayed. ❖ In this data structure, all the adjacency relations of each edge are described by explicit relations. ❖ Since an edge is shared by two faces, it is a component of two loops, one for each face. ❖ No penetration of faces is allowed in either the area-oriented or the depth algorithm.
  • 34. AREA-ORIENTED ALGORITHM - IN THIS ALGORITHM, THE FOLLOWING STEPS ARE CARRIED OUT: 1. The first step of this algorithm is to identify the silhouette polygons. 2. Then, quantitative hiding values are assigned to each edge of the silhouette polygons: an edge is visible if this value is 0 and invisible if this value is 1. 3. The next step is to find the visible silhouette segments, which can be determined from the quantitative hiding values. 4. Now, each visible silhouette segment is intersected with the partially visible faces to determine whether it partially or fully hides non-silhouette edges in those faces.
  • 37. OVERLAY ALGORITHM ❖ In the overlay method, the u-v grid is used to create a grid surface which consists of regions having straight edges. ❖ The curves in each region of the u-v grid are approximated as line segments; hence this algorithm is called the overlay algorithm. ❖ In this algorithm, the first step is to calculate the u-v grid using the surface equation. ❖ Then the grid surface with linear edges is created. ❖ The visibility of the grid surface is determined using the criteria discussed earlier.
  • 38. ROBERT'S ALGORITHM ❖ The hidden line algorithms described earlier are only suitable for polyhedral objects which contain flat faces. ❖ The earliest visible-line algorithm was developed by Roberts. The primary requirement of this algorithm is that each edge is a part of the face of a convex polyhedron. ❖ In the first phase of this algorithm, all edges shared by a pair of a polyhedron's back-facing polygons are removed using a back-face culling technique.
  • 39. STEPS FOR THE ALGORITHM: 1. Treat each volume separately and eliminate self-hidden planes (back faces) and self-hidden lines. 2. Treat each edge (or line segment) separately and eliminate those which are entirely hidden by one or more other volumes. 3. Identify those lines which are entirely visible. 4. For each of the remaining edges, junction lines are constructed. 5. New edges are constructed if there is inter-penetration of two volumes.
  • 43. HIDDEN SURFACE REMOVAL ALGORITHMS ❖ Hidden line removal is the process of eliminating lines of parts of objects which are covered by others. It is extensively used for objects represented as wireframe skeletons and it is a bit trickier. ❖ Hidden surface removal does the same job for the objects represented as solid models. The elimination of parts of solid objects that are covered by others is called hidden surface removal.
  • 44. Hidden Line Removal – Object-space Algorithms Hidden Surface Removal – Image-space Algorithms The following image-space algorithms are widely used: (i) Depth-buffer algorithm or z-buffer algorithm (ii) Area-coherence algorithm or Warnock's algorithm (iii) Scan-line algorithm or Watkin's algorithm (iv) Depth or Priority algorithm
  • 45. VISIBLE SURFACE DETERMINATION • Area-Subdivision Algorithms • z-buffer Algorithm • List Priority Algorithms • BSP (Binary Space Partitioning Tree) • Scan-line Algorithms
  • 46. 1. DEPTH-BUFFER ALGORITHM OR Z-BUFFER ALGORITHM ❑ The easiest way to achieve the hidden surface removal. ❑ This algorithm compares surface depths at each pixel position on the projection plane. Since the object depth is usually measured from the view plane along the z-axis of a viewing system, this algorithm is also called z-buffer algorithm. ❑ Hence, two buffers are required for each pixel. a) Depth buffer or z buffer which stores the smallest z value for each pixel b) Refresh buffer or frame buffer which stores the intensity value for each position.
  • 47. DEPTH-BUFFER ALGORITHM OR Z-BUFFER ALGORITHM Let us consider two surfaces P and Q with varying distances along the position (x, y) in a view plane, as shown in Figure.
  • 48. The steps of a depth-buffer algorithm (1) Initially, each pixel of the z-buffer is set to the maximum depth value (the depth of the back clipping plane). (2) The image buffer is set to the background colour. (3) Surfaces are rendered one at a time. (4) For the first surface, the depth value of each pixel is calculated. (5) If this depth value is smaller than the corresponding depth value in the z-buffer (i.e. it is closer to the view point), the depth value in the z-buffer and the colour value in the image buffer are replaced by the depth and colour values of this surface calculated at the pixel position. (6) Steps 4 and 5 are repeated for the remaining surfaces.
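The steps above can be sketched as follows; the sketch assumes surfaces arrive already rasterized into (x, y, depth, colour) samples, and that smaller depth means nearer (matching the initialization to the far clipping plane):

```python
WIDTH, HEIGHT = 4, 4
FAR = float('inf')                       # step 1: back clipping plane depth
BACKGROUND = (0, 0, 0)                   # step 2: background colour

z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def render(surface_pixels):
    """surface_pixels: iterable of (x, y, depth, colour) for one surface."""
    for x, y, depth, colour in surface_pixels:
        if depth < z_buffer[y][x]:       # step 5: closer than what is stored
            z_buffer[y][x] = depth
            frame_buffer[y][x] = colour
```

Calling `render` once per surface (steps 3-6) leaves the nearest surface's colour at every pixel, regardless of the order in which surfaces are processed.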
  • 49. AREA-COHERENCE ALGORITHM OR WARNOCK'S ALGORITHM ❖ John Warnock proposed an elegant divide-and-conquer hidden surface algorithm. ❖ This algorithm relies on the area coherence of polygons to resolve the visibility of many polygons in image space. ❖ Depth sorting is simplified and performed only in those cases involving the image-space overlap. This method is also called area- subdivision method as the process involves the division of viewing window into four equal sub-windows or sub-divisions.
  • 52. PRIORITY ALGORITHM: * The faces of objects can sometimes be given a priority ordering from which their visibility can be computed. * Once an actual viewpoint is specified, the back faces are eliminated and priority numbers are assigned to the remaining faces to tell which face is in front of another. * The priorities are assigned according to the largest z coordinate value of each face.
  • 53. The algorithm is also known as the depth or z-algorithm. The algorithm is based on sorting all the faces in the scene according to the largest z coordinate value of each.
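The depth sort can be sketched as below; the sketch assumes larger z means farther from the viewer, so faces are ordered back to front and later faces painted over earlier ones (painter-style):

```python
def priority_order(faces):
    """faces: list of (name, vertices) where vertices are (x, y, z) tuples.
    Returns the faces sorted by the largest z of each face, farthest
    first, so drawing them in order overwrites far faces with near ones."""
    return sorted(faces, key=lambda f: max(v[2] for v in f[1]), reverse=True)
```

This simple keying is exactly where the ambiguities mentioned next arise: two faces can have overlapping z ranges, so extra coverage tests are needed before the ordering is trustworthy.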
  • 54. The surface test discussed earlier is used to remove the back faces, which improves the efficiency of the priority algorithm. In some scenes, ambiguities may result after applying the priority test. To resolve such an ambiguity, additional criteria to determine coverage must be added to the priority algorithm. The area-oriented algorithm described here subdivides the data set of a given scene in a stepwise fashion until all the visible areas in the scene are determined and displayed.
  • 55. HIDDEN SURFACE ELIMINATION • Object space algorithms: determine which objects are in front of others • Works for static scenes • May be difficult to determine • Image space algorithms: determine which object is visible at each pixel • Works for dynamic scenes
  • 56. DEPTH-BUFFER ALGORITHM OR Z-BUFFER ALGORITHM • Z values range from 0 (nearer to the viewer) to 1 (away from the viewer). The z-buffer algorithm requires a z-buffer in which a z value can be stored for each pixel. • The z-buffer is initialized to the smallest z-value, while the frame buffer is initialized to the background pixel value. • Both the frame and z-buffers are indexed by pixel coordinates (x, y). These coordinates are actually screen coordinates. HOW IT WORKS? • For each polygon in the scene, find all the pixels (x, y) that lie inside or on the boundaries of the polygon when projected onto the screen. • For each of these pixels, calculate the depth z of the polygon at (x, y). • If z > depth(x, y), the polygon is closer to the viewing eye than others already stored in the pixel.
  • 57. Initially, all positions in the depth buffer are set at 0 (minimum depth), and the refresh buffer is initialized to the background intensity. Z=0: Zmax=1 …DEPTH-BUFFER ALGORITHM OR Z- BUFFER ALGORITHM
  • 58. In this case, the z buffer is updated by setting the depth at(x,y) to z. Similarly, the intensity of the frame buffer location corresponding to the pixel is updated to the intensity of the polygon at (x,y). After all the polygons have been processed, the frame buffer contains the solution. …DEPTH-BUFFER ALGORITHM OR Z- BUFFER ALGORITHM
  • 59. Z-BUFFERING IMAGE PRECISION ALGORITHM: • Determine which object is visible at each pixel • Order of polygons not critical • Works for dynamic scenes • Takes more memory BASIC IDEA: • Rasterize (scan-convert) each polygon • Keep track of a z value at each pixel • Interpolate z value of polygon vertices during rasterization • Replace pixel with new color if z value is smaller (i.e., if object is closer to eye)
  • 60. Z-Buffer Advantages Simple and easy to implement Amenable to scan-line algorithms Can easily resolve visibility cycles Z-Buffer Disadvantages ❑ Does not do transparency easily ❑ Aliasing occurs, since not all depth questions can be resolved at pixel precision ❑ Anti-aliasing solutions are non-trivial ❑ Shadows are not easy ❑ Higher-order illumination is hard in general
  • 61. WARNOCK’S ALGORITHM ❑ This is one of the first area-coherence algorithms. ❑ Warnock’s algorithm solves the hidden surface problem by recursively subdividing the image into sub-images. ❑ It first attempts to solve the problem for a window that covers the entire image. ❑ If the polygons overlap, the algorithm tries to analyze the relationship between the polygons and generates the display for the window. ❑ If the algorithm cannot decide easily, it subdivides the window into four smaller windows.
  • 63. ❑ The recursion terminates if the hidden-surface problem can be solved for all the windows or if the window becomes as small as a single pixel on the screen. ❑ In this case, the intensity of the pixel is chosen equal to the polygon visible in the pixel. ❑ The subdivision process results in a window tree. …WARNOCK’S ALGORITHM
  • 64. The hidden surface algorithms can be adapted to hidden line removal as well, by displaying only the boundaries of the visible surfaces.
  • 65. Warnock’s Algorithm • An area-subdivision technique
  • 71. Surrounding surface: A surface completely encloses the area.
  • 72. Intersecting or overlapping surface: A surface that is partly inside and partly outside the area.
  • 73. Inside surface: A surface that is completely inside the area.
  • 74. Outside surface: A surface that is completely outside the area.
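Putting the four classifications together, the recursion can be sketched as below. To keep the sketch self-contained it is restricted to axis-aligned rectangles with a depth and colour; a real Warnock implementation classifies general polygons against each window in the same surrounding/intersecting/inside/outside way:

```python
def warnock(window, rects, pixels):
    """Simplified Warnock subdivision on integer pixel windows.
    Each rect is (x0, y0, x1, y1, depth, colour); smaller depth = nearer.
    `pixels` maps (x, y) -> colour of the visible rectangle."""
    x0, y0, x1, y1 = window
    # discard 'outside' rectangles: they cannot affect this window
    relevant = [r for r in rects
                if not (r[2] <= x0 or x1 <= r[0] or r[3] <= y0 or y1 <= r[1])]
    if not relevant:
        return                                   # only background here
    if len(relevant) == 1 or (x1 - x0 <= 1 and y1 - y0 <= 1):
        # trivially solvable window (or single pixel): nearest rect wins
        for x in range(x0, x1):
            for y in range(y0, y1):
                covering = [r for r in relevant
                            if r[0] <= x < r[2] and r[1] <= y < r[3]]
                if covering:
                    pixels[(x, y)] = min(covering, key=lambda r: r[4])[5]
        return
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2      # four equal sub-windows
    for sub in ((x0, y0, mx, my), (mx, y0, x1, my),
                (x0, my, mx, y1), (mx, my, x1, y1)):
        warnock(sub, relevant, pixels)
```

The recursion terminates exactly as the slides describe: either a window becomes trivially solvable or it shrinks to a single pixel.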
  • 75. SCAN-LINE ALGORITHM OR WATKIN'S ALGORITHM The scan-line algorithm is identical to the z-buffer algorithm except that one scan line at a time is processed; hence, a much smaller buffer is needed.
  • 76. QUESTIONS: 1. Different types of hidden line algorithms 2. User-driven, procedural and data-driven animation? 3. RGB, CMY colour models 4. Back-face removal algorithm, Z-buffer algorithm 5. Importance of colouring 6. How Gouraud shading differs from other shading techniques 7. Interpolative shading & its methods 8. How to find visible surface determination
  • 77. HIDDEN SOLID REMOVAL ALGORITHMS The hidden line removal and hidden surface removal algorithms described in the previous sections are applicable to hidden solid removal of B-rep models. Certain algorithms such as the z-buffer can be extended to CSG models.
  • 80. Ray-tracing is the process of tracking and plotting the path taken by rays of light starting at a light source to the centre of projection (viewing position). It is one of the most popular and powerful techniques for hidden solid removal because of its simple, elegant and easily implemented nature. DISADVANTAGE: Ray tracing is computationally expensive, which limits its performance.
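The core operation of a ray tracer is the ray-object intersection test, repeated for every pixel's ray against every solid. A sketch for a sphere, where the intersection reduces to a quadratic in the ray parameter t:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest non-negative t where origin + t*direction meets the sphere,
    or None if the ray misses.  direction need not be normalized."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    a = sum(d * d for d in direction)
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)     # try the nearer root first
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None
```

Keeping only the smallest positive t over all solids is what makes ray tracing a hidden solid removal method: everything behind that first hit is automatically invisible.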
  • 81. ❑ Edge-oriented approach ❑ Silhouette (contour-oriented) approach ❑ Area-oriented approach HIDDEN LINE ELIMINATION ALGORITHMS: 1. Depth or z algorithm, 2. Area-oriented algorithm, 3. Overlay algorithm, 4. Roberts algorithm
  • 82. HIDDEN SURFACE ELIMINATION ALGORITHM: ❑ Depth-buffer or z-buffer algorithm. ❑ Area coherence algorithm, ❑ Scan-line algorithm, ❑ Depth or priority algorithm
  • 83. SHADING Shading models determine the shade of a point of an object in terms of light sources, surface characteristics, and the positions and orientations of the surfaces and sources. Two types of light can be identified: ❑ Point lighting (flashlight effect in a black room) ❑ Ambient lighting (light of uniform brightness, caused by multiple reflections)
  • 84. ….SHADING A three-dimensional model can be displayed by assigning different degrees of shading to its surfaces. A virtual light source is assumed, and various shading techniques are available to determine how the light strikes each portion of the surfaces to provide a realistic image of the object. The shading techniques are based on the recognition of distance (depth) and shape as a function of illumination.
  • 85. ….SHADING As the shading concept involves lighting and illumination as its basis, it is essential to have a good understanding of light sources. Objects are seen by light whose origins can be many and varied; the object itself may be emitting light.
  • 87. ILLUMINATION OR SHADING MODELS Illumination models simulate the way visible surfaces of object reflects light. The shade of a point of an object in terms of light sources, surface properties and the position and orientation of the surfaces and sources are determined by these models. There are two types of light sources: LIGHT-EMITTING SOURCES & LIGHT-REFLECTING SOURCES LIGHT-EMITTING SOURCES: I. Ambient light II. Point light source LIGHT-REFLECTING SOURCES (i) Diffuse reflection (ii) Specular reflection.
  • 88. LIGHT-EMITTING SOURCES AMBIENT LIGHT: It is a light of uniform brightness and it is caused by the multiple reflections of light from many sources present in the environment. The amount of ambient light incident on each object is a constant for all surfaces and over all directions.
  • 89. POINT LIGHT SOURCE: A light source is considered as a point source if it is specified with a coordinate position and an intensity value. Object is illuminated in one direction only. The light reflected from an object can be divided into two components.
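The ambient term and the two reflected components can be combined in a simple illumination model. The sketch below uses the common form I = ka·Ia + Il·(kd·(N·L) + ks·(R·V)^n); the exact formula and coefficient names are assumptions, since the slides name the components but not the equation:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    m = sum(x * x for x in v) ** 0.5
    return tuple(x / m for x in v)

def illuminate(ka, kd, ks, n_exp, ia, il, normal, to_light, to_viewer):
    """Ambient + diffuse + specular intensity at a surface point.
    ka, kd, ks: ambient/diffuse/specular coefficients; n_exp: shininess;
    ia, il: ambient and point-source intensities."""
    N, L, V = norm(normal), norm(to_light), norm(to_viewer)
    diff = max(dot(N, L), 0.0)                  # Lambert diffuse term
    R = tuple(2 * diff * N[i] - L[i] for i in range(3))  # mirror of L about N
    spec = max(dot(R, V), 0.0) ** n_exp if diff > 0 else 0.0
    return ka * ia + il * (kd * diff + ks * spec)
```

With the light and viewer both along the normal, the diffuse and specular terms are at their maximum; at grazing incidence only the ambient term survives.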
  • 91. SHADING ALGORITHMS Exact shading is expensive and requires a large number of calculations. This section deals with more efficient shading methods for surfaces defined by polygons. Each polygon can be drawn with a single intensity, or with a different intensity obtained at each point on the surface. A number of shading algorithms exist, as follows: (i) Constant-intensity shading or Lambert shading (ii) Gouraud or first-derivative shading (iii) Phong or second-derivative shading (iv) Half-tone shading
  • 92. (i) Constant-intensity shading or Lambert shading: The fastest and simplest method for shading a polygon is constant-intensity shading, which is also known as Lambert shading, faceted shading or flat shading.
  • 93. EXISTING SHADING ALGORITHMS ARE 1. CONSTANT SHADING 2. GOURAUD OR FIRST-DERIVATIVE SHADING 3. PHONG OR SECOND-DERIVATIVE SHADING
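Gouraud (first-derivative) shading computes intensities only at the polygon vertices and linearly interpolates them, first down the edges and then across each scan line. The cross-scan-line step can be sketched as (integer pixel columns are an assumption for illustration):

```python
def gouraud_scanline(i_left, i_right, x_left, x_right):
    """Linearly interpolate edge intensities across one scan line,
    as Gouraud shading does between a polygon's left and right edges."""
    span = x_right - x_left
    return [i_left + (i_right - i_left) * (x - x_left) / span
            for x in range(x_left, x_right + 1)]
```

Because only intensities are interpolated (not normals, as in Phong shading), the method is cheap but can miss highlights that fall inside a polygon.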
  • 95. Phong shading is much superior to flat and Gouraud shading; it requires a lot of processing time but results in better output.
  • 96. THIS VIRTUAL PLANT ILLUSTRATES THE ACTION OF LIGHTING CONDITIONS ON THE SHAPE AND SIZE OF THE SIMULATED MODELS
  • 97. TEXTURING: Texturing represents the surface quality of the respective object, such as its roughness or softness.
  • 98. COLOUR: ❖ Colours can be used in geometric construction. ❖ They give objects a realistic look. ❖ They show the difference between components. There are two types of colours: Chromatic Colour & Achromatic Colour.
  • 99. Three Characteristics of Colour: hue: the dominant wavelength of the light brightness: the luminance of the object saturation: the purity of the colour
  • 100. …COLOUR ❑ Chromatic colours provide a multi-colour image. ❑ Achromatic colours provide only black-and-white displays. Achromatic colour can have three variations: white, black, and various levels of gray, which is a combination of white and black. These variations are achieved by assigning different intensity values: an intensity value of 1 produces white and 0 produces black.
  • 101. WAVELENGTHS OF VARIOUS COLOURS IN COMPUTER GRAPHICS (in nanometres): 1. Violet 400, 2. Blue 450, 3. Cyan 500, 4. Green 550, 5. Yellow 600, 6. Orange 650, 7. Red 700
  • 102. COLOR MODELS: The description of colour generally includes three properties: ❑ Hue ❑ Saturation ❑ Brightness, defining a position in the colour spectrum, the purity, and the intensity value of a colour. A colour model is an orderly system for creating a whole range of colours from a small set of primary colours. There are two types of colour models: ❑ Subtractive ❑ Additive. Additive colour models use light to display colour, while subtractive models use printing inks. Examples of additive colour are the electroluminance produced by CRT or TV monitors and LCD projectors (transmitted light). Colours perceived in subtractive models are the result of reflected light.
  • 103. There are a number of colour models available. Some of the important colour models are as follows: 1. RGB (Red, Green, Blue) colour model 2. CMY (Cyan, Magenta, Yellow) colour model 3. YIQ colour model 4. HSV (hue, saturation, value) colour model, also called the HSB (brightness) model. Three hardware-oriented colour models are RGB (colour CRT monitors), YIQ (the TV colour system) and CMY (certain colour-printing devices). DISADVANTAGE: They do not relate directly to intuitive colour notions of hue, saturation, and brightness.
  • 104. Due to the different absorption curves of the cones, colors are seen as variable combinations of the so-called primary colors: red, green, and blue Their wavelengths were standardized by the CIE in 1931: red=700 nm, green=546.1 nm, and blue=435.8 nm The primary colors can be added to produce the secondary colors of light, magenta (R+B), cyan (G+B), and yellow (R+G) PRIMARY AND SECONDARY COLORS
  • 105. ADDITIVE COLOR MODEL SUBTRACTIVE COLOR MODEL
  • 106.
  • 107.
  • 108. RGB COLOR MODEL R=G=B=1---------WHITE COLOR R=G=B=0---------BLACK COLOR If the values are all 0.5, the colour is still white but only at half intensity, so it appears gray. If R = G = 1 and B = 0 (full red and green with no blue), the result is yellow. The RGB model is more suitable for quantifying direct light such as that generated in CRT monitors and TV screens.
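Since CMY is the subtractive complement of additive RGB, conversion between the two models is a subtraction from unity (component values assumed normalized to the range 0..1):

```python
def rgb_to_cmy(r, g, b):
    """Additive RGB -> subtractive CMY: each ink absorbs its complement."""
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    """Subtractive CMY -> additive RGB (inverse of the above)."""
    return (1 - c, 1 - m, 1 - y)
```

For example, white light (1, 1, 1) needs no ink at all, (0, 0, 0) in CMY, while pure red (1, 0, 0) is printed with full magenta and yellow.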
  • 110. YIQ COLOR MODEL The YIQ model takes advantage of the response characteristics of the human eye, which is more sensitive to luminance than to colour information. In the NTSC video signal, about 4 MHz of bandwidth is assigned to Y. The eye is also more sensitive in the orange-blue range (I) than in the green-magenta range (Q), so a bandwidth of 1.5 MHz is assigned to the I parameter and 0.6 MHz to the Q parameter. The conversion between YIQ space and RGB space is achieved by a linear transformation. The YIQ model is used for raster colour displays.
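A sketch of the RGB-to-YIQ side of that linear transformation, using the standard NTSC coefficient matrix (the slide's own matrix was in an image, so the coefficients here come from the NTSC standard rather than the slide):

```python
def rgb_to_yiq(r, g, b):
    """NTSC RGB -> YIQ.  Y carries luminance; I and Q carry chrominance,
    which is why they can be given much less bandwidth."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q
```

Note that any gray (R = G = B) maps to I = Q = 0, so a black-and-white receiver can use the Y channel alone.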
  • 113. Colors in computer graphics and vision • How to specify a color? – set of coordinates in a color space • Several Color spaces • Relation to the task/perception – blue for hot water
  • 114. COLOR MODELS The purpose of a color model (or color space or color system) is to facilitate the specification of colors in some standard way A color model provides a coordinate system and a subspace in it where each color is represented by a single point
  • 115. Color spaces • Device based color spaces: – color spaces based on the internal of the device: RGB, CMYK, YCbCr • Perception based color spaces: – color spaces made for interaction: HSV • Conversion between them?
  • 116. ANIMATION ❑ Animation is a process in which the illusion of movement is achieved by creating and displaying a sequence of images with elements that appear to have motion. ❑ Animation is a valuable extension of modelling and simulation in the world of science and engineering. ❑ It is a useful visualization aid for many modelling and simulation applications.
  • 117. Each still image is called a frame. Animation may also be defined as the process of dynamically creating a series of frames of a set of objects in which each frame is an alteration of the previous frame. In order to animate something, the animator has to be able to specify, directly or indirectly, how the 'thing' has to move through time and space. Animation can be achieved in the following ways: (a) By changing the position of various elements in the scene at different time frames in a particular sequence (b) By transforming an object into another object at different time frames in a particular sequence (c) By changing the colour of the object at different time frames in a particular sequence (d) By changing the light intensities of the scene at different time frames in a particular sequence.
  • 118. Degrees of freedom for a stationary, single-arm robot
  • 119. MORPHING: Transformation of object shapes from one form to another is called morphing, which is a shortened form of metamorphosis. Morphing methods can be applied to any motion or transition involving a change in shape. COMPUTER ANIMATION LANGUAGES: Design and control of animation sequences are handled with a set of animation routines, written in languages such as ❖ C, ❖ Lisp, ❖ Pascal, or FORTRAN.
  • 120. TRANSFORMING A TRIANGLE INTO A QUADRILATERAL
  • 121. MORPHING A MOVING CAR INTO A TIGER
  • 122. AMONG THESE DIFFERENT METHODS ❑ Animation is produced either conventionally (traditionally), using manual work, or with computer multimedia, for movies, cartoons, logos and advertisements. In the conventional or traditional method, most of the animation was done by hand. ❑ All frames in an animation had to be drawn by hand; since each second of film animation requires 24 frames, the manual effort was enormous. ❑ No calculations or physical principles are required in this method to create cartoon characters. ❑ Computer animations use modeling of muscles and human body kinematics to create facial expressions, deformable body shapes, unrealistic fight sequences, transformations, etc.
  • 123. COMPUTER ANIMATION Animation is the process of illusion of continuous movement of objects created by a series of still images with elements that appear to have motion. Each still image is called a frame. Animation may also be defined as the process of dynamically creating a series of frames of a set of objects in which each frame is an alteration of the previous frame. Animation can be achieved by the following ways. (a) By changing the position of various elements in the scene at different time frames in a particular sequence (b) By transforming an object to other object at different time frames in a particular sequence (c) By changing the colour of the object at different time frames in a particular range (d) By changing the light intensities of the scene at different time frames in a particular sequence.
  • 124. COMPUTER ANIMATION Applications of Animation: There are several areas where animation can be extensively used. These areas can be arbitrarily divided into the following categories. 1. Television: TV has used animation for titles, logos and inserts, and has been a powerful motivator for the rapid development of animation. But its main uses are in cartoons for children and commercials for a general audience. 2. Cinema: Animation as a cinematic technique has always held an important role in this industry. Complete animation films are still produced by the cinema industry, but it is also a good way of making special effects and it is frequently used for titles and generics. 3. Government: Animation is an excellent method of mass communication, and governments are among its users.
  • 125. ….Applications of Animation: 4. Education and research: Animation can be extensively used for educational purposes. Fundamental concepts are easily explained to students using visual effects involving motion. Finally, animation can be a great help to research teams because it can simulate situations, e.g., in medicine or science. 5. Business: The role of animation in business is very similar to its role in government. Animation is useful for marketing, personnel education and public relations. 6. Engineering: Engineers do not require the realistic images the entertainment field demands. It must be possible to identify unambiguously each separate part, and the animation must be produced quickly.
  • 126. CONVENTIONAL ANIMATION Conventional animation is generally based on a frame-by-frame technique. It is very expensive in terms of man power, time and money. This type of animation is oriented mainly towards the production of two-dimensional cartoons. Every frame is a flat picture and it is purely hand-drawn. These cartoons are complex to produce and it may involve large teams such as Walt Disney or Hannah-Barbera Productions. It is better to understand the various steps/process involved in the conventional animation. It can be described with an example of making an animated film as illustrated in Figure.
  • 127. A typical task in an animation specification is scene description.
  • 130. COMPUTER ANIMATION As the conventional animation has number of limitations such as time consuming, expensive etc., Computer animations are more widely used as the solution for these limitations. Computer animation generally refers any time sequence of visual changes in a scene by using computers and related software. CLASSIFICATION OF COMPUTER ANIMATION: There are a number of different ways of classifying the computer animation systems. First, we can define the various levels of systems.
  • 131. Level l: It is used only to interactively create, paint, store, retrieve and modify drawings. They do not take much time. They are basically just graphics editors used only by designers. Level 2: It can compute “in-betweens” and move an object along a trajectory. These systems generally take more time and they are mainly intended to be used by or even replace in betweens. Level 3: It provides the animator with operations which can be applied to objects for example, translation or rotation. These systems may also include virtual camera operations such as zoom, pan or tilt.
  • 132. Level 4: It provides a means of defining actors, ie., objects which possess their own animation. The motion of these objects may also be constrained. Level 5: They are extensible and they can learn as they work. With each use, such a system becomes more powerful and "intelligent". Computer animation can be further classified into two types based on its application in major field.(i) Entertainment animation & (ii) Engineering animation.
  • 133. (I) ENTERTAINMENT ANIMATION: This type of computer animation is mainly used to make movies and advertisements for entertainment purposes. The procedure is similar to the conventional animation procedure described in Figure. The drawings of “key frames” and “in-betweens” are created by using computer generation techniques. The drawings of key frames are created using various interactive graphics software programs which utilize different transformation techniques such as rotation, reflection, translation, etc.
  • 134. The entertainment animation can be further classified into the following two types. (a) COMPUTER-ASSISTED two-dimensional animation (b) MODELED ANIMATION or three-dimensional animation. COMPUTER-ASSISTED ANIMATION, sometimes called key frame animation, consists mainly of assisting conventional animation by computer. Key frame animation systems are typically of level 2. MODELED ANIMATION means drawing and manipulation of more general representations which move about in three-dimensional space. This process is very complex without a computer. Modeled animation systems are generally of level 3 to level 4. Systems of level 5 are not yet available.
  • 136. (II) ENGINEERING ANIMATION: CAD/CAM applications use the animation technique extensively in a variety of applications such as generating NC tool paths, simulation of automated assembly and disassembly, simulation of finite element results, mechanism movements, rapid prototyping, etc. Engineering animation is mainly an extension of modeled animation, but it is more science-oriented rather than art- and image-oriented. No real-time image simulation is required as in the case of entertainment animation; in many cases wireframe models alone can give satisfactory results. However, engineering animation systems should meet the following criteria: (a) Exact representation and display of data, (b) High-speed and automatic production of animation & (c) Low host dependency.
  • 137. ANIMATION TECHNIQUES: (i) Keyframe animation (ii) Linear interpolation (iii) Curved interpolation (iv) Interpolation of position and orientation (v) Interpolation of shape (vi) Interpolation of attributes
  • 138. ANIMATION TECHNIQUES: Keyframe animation: A key frame is defined by its particular moment in the animation timeline as well as by all parameters or attributes associated with the objects in the scene.
  • 139. A sequence with three keyframes and two interpolations one quicker than the other
  • 140. Keyframe techniques have not, however, proven applicable to every kind of motion.
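The in-betweening itself is straightforward to sketch; the linear interpolation below animates a single scalar parameter, and unequally spaced keys give exactly the "one quicker than the other" effect of the figure:

```python
def interpolate_keyframes(keyframes, t):
    """keyframes: time-sorted list of (time, value) pairs.  Returns the
    linearly interpolated value at time t, clamped to the first/last key."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)     # fraction of the way to the next key
            return v0 + u * (v1 - v0)
```

With keys at times 0, 1 and 3, the first interpolation covers its value change in one time unit and the second in two, so the first appears twice as quick.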
  • 141. ANIMATION TYPES In the preceding sections, animation systems were classified on the basis of their role in the animation process. Another consideration is the mode of production. Computer animation is just a special case of animation, defined as a succession of images, each differing from the one preceding it. The computer is used to produce each frame individually to be photographed; in other words, the "film" is produced directly on a terminal.
  • 142. ANIMATION TYPES The animations are classified into the following three types: (i) Frame-buffer animation, (ii) Frame-by-frame animation, (iii) Real-time playback animation.
  • 143. SIMULATION APPROACH: This approach is based on the physical laws which control the motion or the dynamic behaviour of the object to be animated. HYBRID APPROACH: Although the simulation approach is very attractive for describing the dynamic behaviour of an object, the range of animation it can produce is restricted. CAMERA ANIMATION: The camera plays an important role in computer animation because its motion and the changes in some of its attributes can have a powerful storytelling effect. The point of view of a camera and the type of camera shot are both defined by the position and orientation of the camera. All camera motions require a change in position and orientation of the camera.