This document discusses rendering algorithms and techniques. It begins by defining rendering as the process of generating an image from a two- or three-dimensional model. There are two main categories of rendering: real-time rendering, used for interactive graphics, and pre-rendering, used where image quality is prioritized over speed. The three main computational techniques are ray casting, ray tracing, and shading. Ray tracing simulates physically accurate lighting by tracing the paths of light rays. Shading determines an object's shade from attributes such as diffuse illumination and light-source contributions.
2. What is rendering
Rendering is the process of generating an image from a
two- or three-dimensional model by means of application
programs. Rendering is widely used in architectural
design, video games, animated movies, simulators, TV
special effects, and design visualization. The techniques
and features used vary with the project. Rendering helps
increase efficiency and reduce cost in design.
RENDERING ALGORITHM
3. Categories of rendering
• There are two categories of rendering: pre-rendering and real-time rendering. The
striking difference between the two lies in the speed at which the computation
and finalization of images takes place.
• Real-Time Rendering: The prominent rendering technique used in interactive
graphics and gaming, where images must be created at a rapid pace. Because user
interaction is high in such environments, images must be generated in real time.
Dedicated graphics hardware and pre-compiling of the available information have
improved the performance of real-time rendering.
• Pre-Rendering: This rendering technique is used in environments where speed is
not a concern and image calculations are performed on multi-core central
processing units rather than dedicated graphics hardware. It is mostly used in
animation and visual effects, where photorealism needs to meet the highest
standard possible.
4. Computational techniques
• For these rendering types, the three major computational techniques used are:
Ray Casting
Ray Tracing
Shading
5. Ray Tracing
• Ray tracing is a rendering technique that can realistically simulate the lighting of a
scene and its objects by rendering physically accurate reflections, refractions,
shadows, and indirect lighting.
• Ray tracing generates computer graphics images by tracing the path of light from
the view camera (which determines your view into the scene), through the 2D
viewing plane (pixel plane), out into the 3D scene, and back to the light sources.
As it traverses the scene, the light may reflect from one object to another (causing
reflections), be blocked by objects (causing shadows), or pass through transparent
or semi-transparent objects (causing refractions).
• All of these interactions are combined to produce the final color and illumination
of a pixel that is then displayed on the screen. This reverse tracing process of
eye/camera to light source is chosen because it is far more efficient than tracing all
light rays emitted from light sources in multiple directions.
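As a sketch of this camera-to-scene setup, the primary ray for each pixel can be generated by mapping the pixel onto the 2D viewing plane and shooting a ray from the eye through it. The function name `make_camera_ray` and the fixed camera placement (eye at the origin, looking down -z) are illustrative assumptions, not part of any particular renderer:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / n, v[1] / n, v[2] / n)

def make_camera_ray(px, py, width, height, fov_deg=60.0):
    """Build a ray from the eye through pixel (px, py) on the viewing plane.

    The camera sits at the origin looking down -z; the pixel plane is
    mapped to [-1, 1], with the field of view controlling its extent.
    """
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2)
    # Map the pixel centre to normalized device coordinates.
    x = (2 * (px + 0.5) / width - 1) * aspect * scale
    y = (1 - 2 * (py + 0.5) / height) * scale
    origin = (0.0, 0.0, 0.0)
    direction = normalize((x, y, -1.0))
    return origin, direction
```

A ray built this way for a pixel near the image centre points almost straight down the camera's -z axis; tracing it into the scene, and recursively tracing the reflection, shadow, and refraction rays it spawns, yields the pixel's final color.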
6. • Another way to think of ray tracing is to look around you, right now. The objects
you’re seeing are illuminated by beams of light. Now turn that around and follow
the path of those beams backwards from your eye to the objects that light interacts
with. That’s ray tracing.
• The primary application of ray tracing is in computer graphics, both non-real-time
(film and television) and real-time (video games). Other applications include those
in architecture, engineering, and lighting design.
8. Ray Tracing Fundamentals
• Ray casting is the process in a ray tracing algorithm that shoots one or more rays
from the camera (eye position) through each pixel in an image plane, and then
tests to see if the rays intersect any primitives (triangles) in the scene.
• If a ray passing through a pixel and out into the 3D scene hits a primitive, then the
distance along the ray from the origin (camera or eye point) to the primitive is
determined, and the color data from the primitive contributes to the final color of
the pixel.
• The ray may also bounce and hit other objects and pick up color and lighting
information from them.
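The intersection test at the heart of ray casting can be sketched as follows. Real scenes are usually made of triangles, as noted above; a sphere is used here only because its ray intersection is a one-line quadratic. The function names are illustrative:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest hit on a sphere, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming `direction` is unit length.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None                   # Ray misses the sphere entirely.
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None       # Only hits in front of the eye count.

def closest_hit(origin, direction, spheres):
    # Test the ray against every primitive; keep the nearest positive distance,
    # whose color data contributes to the final color of the pixel.
    best = None
    for center, radius, color in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, color)
    return best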
9. • Ray casting is the most basic of many computer graphics rendering algorithms that
use the geometric algorithm of ray tracing. Ray tracing-based rendering algorithms
operate in image order to render three-dimensional scenes to two-dimensional
images.
• Geometric rays are traced from the eye of the observer to sample the light
(radiance) travelling toward the observer from the ray direction. The speed and
simplicity of ray casting comes from computing the color of the light without
recursively tracing additional rays that sample the radiance incident on the point
that the ray hit.
• This eliminates the possibility of accurately rendering reflections, refractions, or
the natural falloff of shadows; however, all of these elements can be faked to a
degree by creative use of texture maps or other methods. The high speed of
calculation made ray casting a handy rendering method in early real-time 3D
video games.
10. • The idea behind ray casting is to trace rays from the eye, one per pixel, and find
the closest object blocking the path of that ray. Think of an image as a screen
door, with each square in the screen being a pixel. This is then the object the eye
sees through that pixel.
• Using the material properties and the effect of the lights in the scene, this
algorithm can determine the shading of this object. The simplifying assumption is
made that if a surface faces a light, the light will reach that surface and not be
blocked or in shadow. The shading of the surface is computed using traditional 3D
computer graphics shading models.
• One important advantage ray casting offered over older scanline algorithms was
its ability to easily deal with non-planar surfaces and solids, such
as cones and spheres. If a mathematical surface can be intersected by a ray, it can
be rendered using ray casting. Elaborate objects can be created by using solid
modelling techniques and easily rendered.
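The simplifying assumption above, that a surface facing a light is lit and never shadowed, amounts to shading each visible point with a purely local model. A minimal Lambertian (diffuse) sketch of such a traditional shading model, with illustrative names:

```python
def lambert_shade(normal, light_dir, surface_color, light_intensity):
    """Classic diffuse (Lambert) shading: brightness scales with the
    cosine of the angle between the surface normal and the direction
    toward the light. No shadow ray is cast, matching ray casting's
    simplifying assumption that any light-facing surface is lit.
    Both input vectors are assumed unit length.
    """
    cos_theta = sum(n * l for n, l in zip(normal, light_dir))
    cos_theta = max(cos_theta, 0.0)   # Surfaces facing away receive no light.
    return tuple(c * light_intensity * cos_theta for c in surface_color)
```

A surface whose normal points at the light is shaded at full intensity; one facing away goes to black, which is why orientation alone gives ray-cast images their basic sense of depth.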
11. • Path Tracing is a more intensive form of ray tracing that traces hundreds or
thousands of rays through each pixel and follows the rays through numerous
bounces off or through objects before reaching the light source in order to collect
color and lighting information.
• Bounding Volume Hierarchy (BVH) is a popular ray tracing acceleration
technique that uses a tree-based “acceleration structure” that contains multiple
hierarchically-arranged bounding boxes (bounding volumes) that encompass or
surround different amounts of scene geometry or primitives.
• Testing each ray against every primitive in the scene is inefficient and
computationally expensive, and BVH is one of many techniques and optimizations
that can be used to accelerate this step. The BVH can be organized in different
types of tree structures, and each ray only needs to be tested against the BVH
using a depth-first tree traversal instead of against every primitive in the scene.
• Prior to rendering a scene for the first time, a BVH structure must be created
(called BVH building) from source geometry. The next frame will require either a
new BVH build operation or a BVH refitting based on scene changes.
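The depth-first traversal described above can be sketched as follows. The dict-based node layout and the slab test for axis-aligned boxes are illustrative assumptions; production BVHs use tightly packed binary trees, but the pruning logic is the same:

```python
def hit_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray intersect this axis-aligned bounding box?"""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return False          # Parallel to the slab and outside it.
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t0, t1))
            t_far = min(t_far, max(t0, t1))
    return t_near <= t_far and t_far >= 0

def traverse_bvh(root, origin, direction, hit_primitive):
    """Depth-first BVH traversal: a node is either a leaf holding
    primitives or an inner node with children. Subtrees whose bounding
    box the ray misses are skipped entirely, so the ray is never tested
    against the primitives they contain.
    """
    hits = []
    stack = [root]
    while stack:
        node = stack.pop()
        if not hit_aabb(origin, direction, node["min"], node["max"]):
            continue                  # Prune this whole subtree.
        if "prims" in node:           # Leaf: test the actual primitives.
            hits.extend(p for p in node["prims"]
                        if hit_primitive(origin, direction, p))
        else:                         # Inner node: descend into children.
            stack.extend(node["children"])
    return hits
```

A ray that enters only the left child's box never touches the primitives stored under the right child, which is the whole point of the acceleration structure.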
12. • Denoising filtering is an advanced filtering technique that can improve
performance and image quality without requiring additional rays to be cast.
Denoising can significantly improve the visual quality of noisy images that may
be constructed from sparse data or contain random artifacts, visible quantization
noise, or other types of noise.
• Denoising filtering is especially effective at reducing the time ray traced images
take to render, and can produce high fidelity images from ray tracers that appear
visually noiseless. Applications of denoising include real-time ray tracing and
interactive rendering. Interactive rendering allows a user to dynamically interact
with scene properties and instantly see the results of their changes updated in the
rendered image.
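The principle can be illustrated with a deliberately naive spatial filter. Production denoisers are far more sophisticated (edge-aware, temporal, often learned), but they share the same idea: estimate each pixel's converged value from its noisy neighborhood instead of casting more rays. The function name and grayscale list-of-rows format are assumptions for the sketch:

```python
def box_denoise(image, radius=1):
    """Replace each pixel with the mean of its (2*radius+1)^2 neighborhood.

    `image` is a list of rows of grayscale floats. Pixels near the border
    average over the smaller in-bounds neighborhood.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total / count   # Mean suppresses isolated spikes.
    return out
```

A single bright "firefly" pixel, a common artifact of sparse ray sampling, is spread over its neighborhood and largely suppressed, at the cost of some blurring that smarter denoisers avoid.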
15. Shading algorithm
• Shading refers to the implementation of the illumination model at the pixel points or
polygon surfaces of the graphics objects.
• A shading model is used to compute the intensities and colors displayed for a surface. The
shading model has two primary ingredients: the properties of the surface and the properties of
the illumination falling on it. The principal surface property is its reflectance, which determines
how much of the incident light is reflected. If a surface has different reflectance for light of
different wavelengths, it will appear to be colored.
• An object's illumination is also significant in computing intensity. The scene may have
illumination that is uniform from all directions, called diffuse illumination.
16. • Shading models determine the shade of a point on the surface of an object in terms
of a number of attributes. The shading model can be decomposed into three parts: a
contribution from diffuse illumination, the contribution from one or more specific
light sources, and a transparency effect.
• Each of these effects contributes to a shading term E, which is summed to find the
total energy coming from a point on an object. This is the energy a display should
generate to present a realistic image of the object. The energy comes not from a
point on the surface but from a small area around the point.
• The simplest form of shading considers only diffuse illumination:
Epd = Rp × Id
• where Epd is the energy coming from point P due to diffuse illumination, Id is the
diffuse illumination falling on the entire scene, and Rp is the reflectance
coefficient at P, which ranges from 0 to 1. The shading contribution from specific
light sources will cause the shade of a surface to vary as its orientation with
respect to the light sources changes, and will also include specular reflection
effects.
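The summed shading term can be sketched directly from these definitions. The diffuse term Epd = Rp × Id comes from the slide; the per-light Lambertian contribution is an illustrative extension of the orientation-dependent term described above, and the function name is an assumption:

```python
def shade_point(Rp, Id, lights=()):
    """Total energy E at a point P on a surface.

    Starts from the diffuse term Epd = Rp * Id (Id: uniform diffuse
    illumination over the scene; Rp: reflectance coefficient at P,
    in [0, 1]), then adds one orientation-dependent Lambertian term
    per specific light source, given as (Ij, cos_theta) pairs where
    cos_theta is the cosine of the angle between the surface normal
    and the direction to light j.
    """
    E = Rp * Id                                # Epd: diffuse illumination term.
    for Ij, cos_theta in lights:
        E += Rp * Ij * max(cos_theta, 0.0)     # Varies with surface orientation.
    return E
```

With no specific lights the result reduces to the slide's diffuse-only formula; adding a light whose direction grazes the surface (cos close to 0) contributes almost nothing, which is exactly the orientation dependence the text describes.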