GRPHICS07 - Textures
This is a course on the theoretical underpinnings of 3D Graphics in computing, suitable for students with a suitable grounding in technical computing.

GRPHICS07 - Textures: Presentation Transcript

  • TEXTURES Michael Heron
  • INTRODUCTION  Textures are used to add variability to a surface.  They mirror the different ‘look and feel’ of materials.  Texture mapping is used to mimic real-world properties.  It is not a straightforward process to apply to a discontinuous approximation.
  • TEXTURES  Textures are often represented as a 2D bitmap, applied to a polymesh like wallpaper. The pattern can be applied once, or repeatedly.  Used to simulate fine-grained detail: ingrain wallpaper, tree bark, pleats on clothing.  The ‘proper’ way to capture such detail is approximation by extremely small polygons, but this is prohibitively expensive.
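The "once, or repeatedly" choice corresponds to texture wrap modes. A minimal sketch in Python (illustrative only; the function names are my own, not from the slides):

```python
# Wrap modes decide what happens to texture coordinates outside [0, 1].

def wrap_repeat(u):
    """Tile the pattern: u = 2.3 samples the same texel column as u = 0.3."""
    return u % 1.0

def wrap_clamp(u):
    """Apply the pattern once: out-of-range coordinates stick to the edge."""
    return min(max(u, 0.0), 1.0)

print(wrap_repeat(2.3))  # ~0.3: the pattern tiles
print(wrap_clamp(2.3))   # 1.0: the pattern is clamped at the edge
```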
  • TEXTURE DIFFICULTIES  Polymeshes are approximations of real world shapes.  How do you wrap a 2D texture around an approximation?  Imagine trying to paste wallpaper onto an irregularly shaped rock.
  • TEXTURE MAPPING  The common approach is to make use of texture-mapping.  Mapping takes pixels from an image and maps them onto pixels in the rasterised image.  Pixels in the texture are often referred to as texels.  Texel information must be combined with light, shadow and shading information.
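The texel lookup itself can be sketched as follows — a hypothetical nearest-neighbour sampler, not taken from the slides, assuming the texture is a list of rows and (u, v) coordinates lie in [0, 1]:

```python
def sample_nearest(texture, u, v):
    """Map normalised (u, v) coordinates to the nearest texel.

    `texture` is a list of rows of texel values; u runs across a row,
    v runs down the rows.
    """
    h = len(texture)
    w = len(texture[0])
    # Scale up to texel indices, clamping so u == 1.0 or v == 1.0
    # stays inside the texture.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

checker = [
    [0, 1],
    [1, 0],
]
print(sample_nearest(checker, 0.0, 0.0))  # 0
print(sample_nearest(checker, 0.9, 0.0))  # 1
```

A real renderer would interpolate between neighbouring texels (bilinear filtering) and combine the result with the lighting terms, as the slide notes.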
  • TEXTURE MAPPING A pixel on the monitor maps onto a point on the 3D representation of the object, which in turn maps onto an appropriate texel.
  • TEXTURE PROJECTION  For simple shapes, a projection can be applied that permits reasonably clean application of a texture.  Flat  Cube  Tube  Sphere  The first two are easiest to visualise.  Imagine it like using digital projectors.
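As an illustration of the sphere projection, the standard longitude/latitude mapping from a point on a unit sphere to (u, v) coordinates can be sketched like this (a hypothetical helper, not from the slides):

```python
import math

def sphere_uv(x, y, z):
    """Project a point on the unit sphere to (u, v) texture coordinates.

    u follows the longitude (angle around the Y axis); v follows the
    latitude, with v = 0 at the top pole and v = 1 at the bottom.
    """
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v

print(sphere_uv(1.0, 0.0, 0.0))  # a point on the equator: (0.5, 0.5)
```

The flat and cube cases are simpler still: each surface point is projected straight onto a plane (or one of six planes), like an image from a digital projector.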
  • TEXTURE PROJECTION
  • TEXTURE MAPPING  Texture mapping is a complex task.  As usual, many different models exist for managing it.  A two-part mapping process is widely used in real-time rendering.  First, we map the texture onto an ‘easy’ intermediate surface  It is then shrink-wrapped onto the object  This can cause distortion problems
  • TWO-PART MAPPING The texture is first mapped onto an intermediate shape, then ‘shrink-wrapped’ onto the object.
  • TWO-PART MAPPING Working out these projections allows the algorithm to build up a mapping of textures to objects.
  • BUMP MAPPING  Bump mapping is used to allow a surface to appear dimpled without actually changing the object space.  Texture is applied and alters the normals through the use of a specified heightmap.  About the fine-grained shape of the object more than the actual colour.  Interacts primarily with light representation of a shape.  This is also known as normal mapping.
  • BUMP MAPPING The letters here are applied with a bump map – perturbation of the surface normals provides the greater definition. The shape itself is not changed – only the lighting calculation with regard to the surface normal. This can be seen in the silhouette and shadows of objects. Good enough for most applications.
  • BUMP MAPPING Bump maps are represented by colours. Colours in the R, G, and B channels represent the X, Y, and Z perturbations of the surface normal.
  • BUMP MAPPING  Each bump map pixel represents a unit-length normal. Thus, (x*x) + (y*y) + (z*z) == 1  X, Y, and Z must lie between -1 and 1  The normal is encoded as: R = (X+1) / 2 G = (Y+1) / 2 B = (Z+1) / 2  All unit normals can be represented this way. Since bumps usually point primarily along the Z axis, bump maps tend to be blue.
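The encoding above is easy to check in code. A small sketch (the function names are my own):

```python
def encode_normal(x, y, z):
    """Store a unit normal (components in [-1, 1]) as RGB in [0, 1]."""
    return ((x + 1) / 2, (y + 1) / 2, (z + 1) / 2)

def decode_normal(r, g, b):
    """Recover the unit normal from an RGB bump-map texel."""
    return (2 * r - 1, 2 * g - 1, 2 * b - 1)

# A flat patch (normal straight along +Z) encodes to (0.5, 0.5, 1.0) -
# the characteristic light blue of bump maps.
print(encode_normal(0.0, 0.0, 1.0))  # (0.5, 0.5, 1.0)
```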
  • ENVIRONMENT MAPPING  Environment mapping is used to reflect the surrounding environment. As if in a mirror.  Environment stored as a precomputed texture map. More efficient way of handling ‘ray tracing’ style rendering.  More on this later.  Only handles direct illumination. Limitation of a local reflection model.
  • ENVIRONMENT MAPPING The effect can be convincing, but lacks the detail of more computationally expensive techniques such as ray tracing. It suffers from a limitation – the reflection is calculated from the object’s centre, not its actual reflecting surface.
  • ENVIRONMENT MAPPING Reflected objects look artificially far away, because the calculation is done from the object’s exact centre. However, it is quick to process and does away with the need for expensive ray-tracing passes.
  • ENVIRONMENT MAPPING  Handled via reflected rays.  Trace a ray from the viewpoint to the object, then follow the resulting reflected ray into the environment map.  Cannot react instantly to environmental changes.  The texture is pre-computed and stored.  Changes in lighting or object placement require recalculation.  Relatively low cost, thus can be done in real time.  Optimisations must be applied regardless.
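The reflected ray is computed with the standard reflection formula r = d - 2(d·n)n, where d is the view direction and n the unit surface normal. A minimal sketch:

```python
def reflect(d, n):
    """Reflect view direction d about unit surface normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# Looking straight down (-Y) at a floor with normal +Y reflects straight up:
print(reflect((0, -1, 0), (0, 1, 0)))  # (0, 1, 0)
```

The resulting direction is then used to index the precomputed environment texture, rather than tracing the ray through the scene.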
  • THAT’S RENDERING!  Rendering is a complex process.  Involving the combination of many differing elements.  Let’s talk about how this all fits together inside the computer.  The order and type of processes applied are performed according to the graphics pipeline.
  • THE GRAPHICS PIPELINE  First of all, models are oriented in the 3D world space of the scene. This often includes various transforms, and so is known as the modelling transformation.  Next, all geometry in the scene is lit according to the light sources. Lighting is computed only at the vertices, and thus this is known as per-vertex lighting.  The various parts of the scene are then transformed in 3D space relative to the viewport camera. Viewing Transformation
  • THE GRAPHICS PIPELINE  Next, the 3D geometry is projected into 2D viewing space  Projection Transformation  Culling is done to remove polygons that need not be rendered.  Clipping  Rasterisation builds up the image in the framebuffer.  In combination with z-buffering
  • THE GRAPHICS PIPELINE  Texture maps are applied to the appropriate pixels based on interpolation of pixels in the frame buffer. Texturing  Finalised pixels are lit and shaded appropriately Fragment shading  Finally, the rendered image is displayed on the screen as a rasterised bitmap. The size of the raster is dependent on the resolution of the monitor.
  • SUMMARY  Textures are a useful approximation of fine-grained surface detail. They do away with the need to vastly increase polygon counts.  They can be simple texture maps Used to give a colour mapping to a polygon  They can be bump maps Used to perturb surface normals.  The graphics pipeline ensures all parts of the rendering process occur at the right times.