Textures are used to add variability to a surface, mirroring the different 'look and feel' of materials.
Texture mapping is used to mimic real-world surfaces.
It is not a straightforward process to apply to an arbitrary 3D object.
Textures are often represented through
the use of a 2D bitmap.
Applied to a polymesh like wallpaper.
Pattern can be applied once, or repeatedly.
Used to simulate fine-grained detail, e.g. pleats on clothing.
The 'proper' way to handle this is approximation by extremely small polygons, which is computationally expensive.
Polymeshes are approximations of real-world objects.
How do you wrap a 2D texture around an irregular 3D shape?
Imagine trying to paste wallpaper onto an irregularly shaped rock.
A common approach is texture-mapping, which takes pixels from an image and maps them onto pixels in the rasterised image.
Pixels in the texture often referred to as texels.
Texel information must be combined with light,
shadow and shading information.
A pixel on the monitor maps onto a point on the 3D representation of the object, which in turn maps onto the appropriate texel.
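The texel lookup described above can be sketched as follows. This is a minimal illustration, assuming nearest-neighbour sampling and a 'repeat' wrap mode; the function name and texture representation (a list of rows) are illustrative, not any particular API's.

```python
# Minimal sketch: nearest-neighbour texel lookup for a (u, v)
# coordinate. Names and the wrap mode are illustrative assumptions.

def sample_texel(texture, u, v):
    """Return the texel nearest to (u, v), repeating the pattern."""
    height = len(texture)
    width = len(texture[0])
    # 'Repeat' wrapping: the pattern tiles when u or v leave [0, 1).
    x = int(u * width) % width
    y = int(v * height) % height
    return texture[y][x]

# A tiny 2x2 checkerboard texture ('B' = black, 'W' = white).
checker = [["B", "W"],
           ["W", "B"]]

print(sample_texel(checker, 0.25, 0.25))  # top-left quadrant -> B
print(sample_texel(checker, 0.75, 0.25))  # -> W
print(sample_texel(checker, 1.25, 0.25))  # wraps around -> B
```

In a real renderer this lookup is combined with the light, shadow, and shading information mentioned above, and with filtering (e.g. bilinear) rather than nearest-neighbour.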
For simple shapes, a projection can be applied that permits reasonably clean application of a texture.
Common projections are planar, cylindrical, and spherical; the first two are easiest to visualise.
Imagine it like using digital projectors.
Texture mapping is a complex task.
As usual, many different models exist for managing this.
Two-part mapping is much used in real-time graphics.
First, we map the texture onto an 'easy' intermediate surface, such as a cylinder or sphere.
It is then shrink-wrapped onto the object itself.
This can cause distortion problems.
Working out these distortions complicates the algorithm used to build up a mapping of textures to object surfaces.
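The first stage of two-part mapping, using a sphere as the intermediate surface, can be sketched as below. This is a hedged illustration, assuming the standard spherical-coordinate parameterisation; the function name is an assumption.

```python
import math

# Sketch: map a 3D surface point onto an intermediate sphere and
# derive (u, v) texture coordinates from its spherical angles.
def spherical_uv(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)  # longitude in [0, 1]
    v = math.acos(y / r) / math.pi              # latitude in [0, 1]
    return (u, v)

# A point on the +X axis sits at longitude 0 and on the equator,
# so it maps to the middle of the texture.
print(spherical_uv(1.0, 0.0, 0.0))  # (0.5, 0.5)
```

The distortion problems mentioned above arise in the second stage, when these sphere coordinates are shrink-wrapped onto an object whose shape differs from the sphere.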
Bump mapping is used to allow a surface to appear dimpled without actually changing the underlying geometry.
Texture is applied and alters the normals through
the use of a specified heightmap.
It is about the fine-grained shape of the object more than the actual colour, and interacts primarily with the lighting calculation for a shape.
This is also known as normal mapping.
(Figure: a surface rendered with a bump map applied.)
The shape itself is not changed; only the lighting calculation with respect to the surface normals.
This limitation can be seen in the silhouettes and shadows of objects, which remain those of the unperturbed shape.
Good enough for most applications.
Bump maps represented by colours. Colours in the
R,G and B wavelengths represent the X, Y, and Z
perturbations of the surface normal.
Each bump map pixel represents a single unit vector.
Thus, (x*x) + (y*y) + (z*z) == 1
X, Y, and Z must lie between -1 and 1
Bumps represented as:
R = (X+1) / 2
G = (Y+1) / 2
B = (Z+1) / 2
All combinations of unit normals can be
represented this way.
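The encoding formulas above translate directly into code. This sketch also shows the inverse mapping used when the map is sampled at render time; the function names are illustrative.

```python
def normal_to_rgb(x, y, z):
    # Each component of a unit normal lies in [-1, 1]; the formulas
    # above remap it into the [0, 1] colour range.
    return ((x + 1) / 2, (y + 1) / 2, (z + 1) / 2)

def rgb_to_normal(r, g, b):
    # Inverse mapping, applied when the bump map is sampled.
    return (2 * r - 1, 2 * g - 1, 2 * b - 1)

# The 'straight up' normal (0, 0, 1) encodes as (0.5, 0.5, 1.0):
# maximum blue, which is why such maps look predominantly blue.
print(normal_to_rgb(0.0, 0.0, 1.0))  # (0.5, 0.5, 1.0)
```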
Since we are usually representing bumps primarily in the Z axis, bump maps tend to be predominantly blue.
Environment mapping is used to reflect
the surrounding environment.
As if in a mirror.
The environment is stored as a precomputed texture (an environment map).
A more efficient way of handling 'ray tracing' — more on this later.
Only handles direct illumination.
Limitation of a local reflection model.
The effect can be convincing, but lacks the detail present in global techniques such as ray tracing.
The effect also suffers from a limitation: the reflection is calculated from the object's centre, not its surface.
However, it is quick to process and does away with the need to trace rays through the scene.
Handled via reflected rays.
Trace a ray from the viewpoint to the object, then follow the resultant reflected ray into the environment map.
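The reflected ray is computed with the standard reflection formula R = I − 2(N·I)N. A minimal sketch, with vectors as plain 3-tuples; the function name is an assumption, and `normal` is assumed to be unit length.

```python
# Sketch of the reflected-ray calculation used to index an
# environment map. 'incident' points from the viewpoint towards
# the surface; 'normal' must be a unit vector.
def reflect(incident, normal):
    d = sum(i * n for i, n in zip(incident, normal))  # dot product
    return tuple(i - 2 * d * n for i, n in zip(incident, normal))

# A ray travelling straight down onto an upward-facing surface
# bounces straight back up.
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 1.0, 0.0)
```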
It cannot react instantly to environmental changes, since the texture is pre-computed and stored.
Changes in lighting or object placement require the map to be regenerated.
Regeneration is relatively low cost, and thus can be done in real time.
Optimisations must be applied regardless.
Rendering is a complex process, involving the combination of many differing techniques.
Let's talk about how this all fits together inside a renderer.
The order and type of processes applied are
performed according to the graphics pipeline.
THE GRAPHICS PIPELINE
First of all, models are oriented in the 3D world space of the scene.
Often includes various transforms, and so is
known as modelling transformation.
Next, all geometry in the scene is lit
according to light sources.
This is done only at the vertices, and thus is known as per-vertex lighting.
The various parts of the scene are transformed in 3D space in relation to the camera viewpoint (the viewing transformation).
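The modelling and viewing transformations above are applied as 4x4 matrices to homogeneous vertices. A minimal sketch, assuming row-major matrices; the translation example is illustrative, not a specific pipeline's API.

```python
# Sketch of a modelling/viewing transform: a 4x4 matrix applied
# to a vertex in homogeneous coordinates.
def transform(matrix, vertex):
    x, y, z = vertex
    v = (x, y, z, 1.0)  # homogeneous coordinates
    out = [sum(matrix[row][col] * v[col] for col in range(4))
           for row in range(4)]
    return tuple(out[:3])

# Example: translate by (2, 0, -5), e.g. moving a model into
# position relative to the camera.
translate = [[1, 0, 0, 2],
             [0, 1, 0, 0],
             [0, 0, 1, -5],
             [0, 0, 0, 1]]

print(transform(translate, (1.0, 1.0, 1.0)))  # (3.0, 1.0, -4.0)
```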
Next, the 3D geometry is projected and rendered into 2D viewing space.
Culling is done to remove polygons that need not be drawn.
Rasterisation builds up the image in the frame buffer, in combination with z-buffering for hidden-surface removal.
Texture maps are applied to the
appropriate pixels based on interpolation
of pixels in the frame buffer.
Finalised pixels are lit and shaded.
Finally, the rendered image is displayed
on the screen as a rasterised bitmap.
The size of the raster is dependent on the resolution of the monitor.
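The stage ordering described above can be summarised as a simple sequence. The stage names here are labels for this summary, not any particular API's terminology.

```python
# Hedged summary of the pipeline stages discussed above.
PIPELINE_STAGES = [
    "modelling transformation",    # orient models in world space
    "per-vertex lighting",         # light geometry at the vertices
    "viewing transformation",      # position scene relative to camera
    "projection and culling",      # project to 2D, drop unseen polygons
    "rasterisation + z-buffering", # build the image in the frame buffer
    "texture mapping",             # apply texels via interpolation
    "display",                     # show the rasterised bitmap
]

for i, stage in enumerate(PIPELINE_STAGES, 1):
    print(f"{i}. {stage}")
```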
Textures are a useful approximation of
fine-grained surface detail.
They do away with the need to vastly increase polygon counts.
They can be simple texture maps, used to give a colour mapping to a polygon.
They can be bump maps, used to perturb surface normals.
The graphics pipeline ensures all parts of the rendering process occur at the right time and in the right order.