2. What is Rendering?
Rendering is the process of generating a 2D
image from a model using 3D software. The term
rendering refers to the calculations performed by a
Render Engine to translate the scene from a
mathematical approximation into the final image.
The Render Engine takes into account the
geometry, camera angle and settings, textures and
lighting of a scene to generate the final image.
There are two types of rendering: Real-time and
Offline.
3. Real Time Rendering
Real Time Rendering - renders the image at an incredibly rapid rate
(24+ frames per second). This is predominantly used in videogames
because the animations can change depending on a user's
interaction.
Real Time rendering is generally achieved with a dedicated Graphics
Processing Unit (GPU).
Most of the environment's lighting information is pre-computed into
the model's textures to greatly improve render speed. This process
is called Baking. While Baking the lighting can reduce render times,
most real-time rendering engines still cannot accurately recreate
certain effects such as reflections and caustics. They generally fake
these effects (though engine advancements are starting to change this).
4. Offline Rendering
Offline Rendering - also called pre-rendering, is used in animation
where rendering speed is not a constraint. Offline rendering can take
anywhere from a few seconds to several hours to compute a single
frame.
Because speed is not a constraint, an offline renderer does not have
to cheat to simulate effects, so it can simulate reflections,
refractions and global illumination much more accurately.
Offline rendering is typically used in film and television because it
can produce more photorealistic imagery than real-time rendering.
Maya typically uses Offline Rendering Engines.
5. Batch Rendering
When creating an animation, you do not typically create a video file all at once.
Instead, the render engine will create each frame of the animation individually.
This is called Batch Rendering.
Since rendering can be very intensive for a computer, it is ideal to
save each frame as an individual image, and then compose the frames into an
animation after the rendering is complete. This has two advantages: the
first is that if the computer crashes, you only lose the render time of the
frame currently being rendered. The second is that you can render in batches,
meaning you can pause rendering and continue it at another time (this is
important when you need to stop rendering to do other work on your computer).
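The per-frame workflow described above can be sketched as a simple loop. This is an illustrative sketch, not Maya's actual batch renderer: `render_frame` is a hypothetical callable standing in for whatever render command your engine exposes.

```python
def batch_render(render_frame, start, end):
    """Render an animation one frame at a time, one image per frame.

    'render_frame' is a hypothetical stand-in for the engine's own
    render command. Because each frame is saved as its own image, a
    crash only loses the frame currently in progress, and the loop
    can be stopped and resumed at any frame.
    """
    rendered = []
    for frame in range(start, end + 1):
        out_name = f"shot.{frame:04d}.png"  # e.g. shot.0001.png
        render_frame(frame, out_name)
        rendered.append(out_name)
    return rendered
```

The zero-padded frame numbers let the compositing software pick the images up in order afterwards.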
Render Farm - a large-scale computer cluster built to render CGI. Render farms
are typically used by large-scale television and film productions to produce high
quality images in a reduced amount of time.
6. CPU vs. GPU rendering
CPU render engines use the central processing unit of a computer to
render the image. CPUs with more cores and threads will render faster.
CPU rendering is typically very stable and supports most effects and
features. CPU rendering uses the computer's random access memory
(RAM), which is cheaper and generally much larger than GPU memory.
GPU render engines use the graphics processing unit of a computer to
render the image. GPU rendering is generally much faster than CPU
rendering, and most modern GPU render engines support most common
effects and features. GPU rendering uses the GPU's own RAM, which is
usually much smaller; because of this, GPU rendering cannot render
scenes that need an extreme amount of memory (lots of textures,
effects, or a high polycount).
7. Biased vs. Unbiased
Unbiased renderers use brute-force light calculations: they
calculate all the light rays in a scene equally, more closely mimicking how real
light is transmitted. Because of this, unbiased renderers are very consistent.
With biased renderers, the render engine throws more computational power
at areas with more detail or less smooth surfaces. This leads to less
noise in fewer render passes and a reduced render time, though biased
rendering can produce some less realistic imagery. Inconsistency between
renders once made biased renderers less than ideal for animation, but modern
render engines have mostly eliminated common problems such as splotchy
imagery and lighting differences.
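The biased idea of spending more effort where a pixel looks noisy can be sketched in a few lines. This is a toy illustration, not any real engine's sampler; the sample counts and threshold are made-up values.

```python
def render_pixel(shade, base_samples=16, extra_samples=64, noise_threshold=0.01):
    """Toy sketch of biased-style adaptive sampling.

    'shade' returns one noisy radiance sample for the pixel. An
    unbiased renderer would give every pixel the same sample count;
    a biased one spends extra samples only where the first pass
    still looks noisy.
    """
    samples = [shade() for _ in range(base_samples)]
    mean = sum(samples) / len(samples)
    variance = sum((s - mean) ** 2 for s in samples) / len(samples)
    if variance > noise_threshold:
        # Noisy pixel: refine it with a second, larger batch.
        samples += [shade() for _ in range(extra_samples)]
    return sum(samples) / len(samples)
```

A smooth wall gets the cheap 16-sample pass; a detailed, glossy area triggers the extra 64 samples, which is where the render-time savings come from.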
8. Texture Mapping
Texture Mapping is a method of applying 2D images
to a 3D model.
Texture maps can add a great amount of detail to a
lower polycount model. They can also be
photorealistic if needed.
In order for a 2D image to be applied to a 3D model,
the image needs to be UV mapped to the model's U
and V axes. (This was covered in the Intro to
Modelling course.)
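At its simplest, a UV lookup converts a surface point's (U, V) coordinates, which run from 0 to 1, into a pixel of the 2D image. A minimal nearest-neighbour sketch (real engines also filter and interpolate):

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour UV lookup.

    'texture' is a list of rows of texel values. UV coordinates run
    from (0, 0) at one corner to (1, 1) at the opposite corner, so a
    point on the model's surface maps to one pixel of the 2D image.
    """
    height = len(texture)
    width = len(texture[0])
    # Clamp UV into [0, 1], then scale into pixel indices.
    x = min(int(max(0.0, min(1.0, u)) * width), width - 1)
    y = min(int(max(0.0, min(1.0, v)) * height), height - 1)
    return texture[y][x]
```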
9. Bump and Normal Mapping
Bump mapping is a rendering technique that can simulate how
light moves across the bumps and grooves of a texture.
A bump map is a separate map from the texture map that tells the
rendering engine that it should calculate a flat surface as not flat.
It's important to understand that the detail a bump map creates is
an illusion.
On a bump map, when values get brighter, working their way to
white, details appear to pull out of the surface. To contrast that,
when values get darker and closer to black, they appear to be
pushing into the surface.
Normal Maps - a newer, slightly more complex version of bump maps:
instead of a single height value, each texel stores the direction of
the surface normal directly.
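The brighter-pulls-out, darker-pushes-in rule works because the renderer converts the height values into tilted surface normals. A minimal sketch of that conversion, using finite differences between neighbouring texels (the bump-strength factor of 2.0 is an arbitrary illustrative value):

```python
import math

def height_to_normal(height, x, y):
    """Derive a surface normal from a height (bump) map texel.

    The slope of the height values in x and y tilts the normal away
    from straight up. This is the same information a normal map
    stores directly instead of deriving it at render time.
    """
    # Finite differences between neighbouring texels give the slope.
    dx = height[y][x + 1] - height[y][x - 1]
    dy = height[y + 1][x] - height[y - 1][x]
    n = (-dx, -dy, 2.0)  # unnormalised; 2.0 controls bump strength
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return tuple(c / length for c in n)
```

On a flat region the result is the straight-up normal (0, 0, 1); where the map gets brighter toward +x, the normal tilts back toward -x, so light shades it as a slope even though the geometry is flat.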
10. Shadows
A shadow is the dark area where light from a source is
blocked or obstructed by an object. All Offline
Rendering can simulate basic shadows and most Real
time rendering can also simulate shadows.
Soft Shadows - allow for varying levels of darkness in a
shadow depending on whether the light is partially
obstructed. This creates much more photorealistic
imagery. Most Offline renderers can simulate soft
shadows; most real-time renderers fake this effect.
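One common way offline renderers get soft shadows is to fire shadow rays from the shaded point toward many sample points spread across an area light: the fraction of rays that reach the light unblocked sets how dark the shadow is. A minimal sketch with a single sphere as the occluder (the scene values are made up for illustration):

```python
import math

def sphere_blocks(origin, target, center, radius):
    """True if a sphere blocks the straight segment origin -> target."""
    d = [t - o for o, t in zip(origin, target)]
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(x * x for x in d)
    b = 2 * sum(x * y for x, y in zip(oc, d))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False  # ray line misses the sphere entirely
    t1 = (-b - math.sqrt(disc)) / (2 * a)
    t2 = (-b + math.sqrt(disc)) / (2 * a)
    # Blocked only if the hit lies between the point and the light.
    return 0 < t1 < 1 or 0 < t2 < 1

def soft_shadow(point, light_samples, center, radius):
    """Fraction of an area light visible from 'point' (0 = full shadow)."""
    visible = sum(not sphere_blocks(point, s, center, radius)
                  for s in light_samples)
    return visible / len(light_samples)
```

A point light is a single sample, so visibility is all-or-nothing: a hard shadow. Spreading samples across an area gives the partial values in between, which is the penumbra.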
11. Reflection
Reflections - simulate how light bounces off mirror like or glossy
surfaces.
Offline Renderers are all capable of creating realistic reflections.
Typically rougher reflective surfaces take longer to render
because they produce more noise.
Most Real time renderers cannot create reflections (the Unreal
Engine demo recently showed it's capable of this). They fake
reflections to cut down on compute time.
Often a rendering engine lets you control how many times light will
bounce off a glossy surface. More light bounces lead to greater
realism, but at an increased render time.
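Each of those bounces is computed with the standard mirror-reflection formula, r = d - 2(d . n)n, where d is the incoming direction and n is the unit surface normal:

```python
def reflect(d, n):
    """Mirror a direction d about a unit surface normal n.

    Implements r = d - 2 (d . n) n, the standard reflection formula
    used for each mirror bounce of a ray.
    """
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))
```

For example, a ray travelling down-and-right that hits a floor with an upward normal leaves up-and-right; a renderer repeats this at every glossy hit until the bounce limit is reached.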
12. Refraction
When a material is transparent or partially transparent (like glass), light
bends as it passes through. This is called refraction.
We measure how light passes through transparent objects using the
index of refraction (IOR). Every material has an IOR. Most Offline renderers
have shaders where the IOR can be adjusted to create photorealistic
results.
Most Real time Renderers cannot compute refraction fast enough so
they fake the effect.
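The bending is governed by Snell's law, n1 sin(t1) = n2 sin(t2), where n1 and n2 are the IORs of the two materials. A small sketch (the IOR values in the comment are common approximations):

```python
import math

def refract_angle(theta_incident_deg, ior_from, ior_to):
    """Snell's law: n1 * sin(t1) = n2 * sin(t2).

    Returns the refracted angle in degrees, or None when the light
    undergoes total internal reflection. Typical IORs: air ~1.0,
    water ~1.33, glass ~1.5.
    """
    s = ior_from / ior_to * math.sin(math.radians(theta_incident_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))
```

Light entering glass from air at 30 degrees bends to roughly 19.5 degrees; going the other way at a steep enough angle, the ray cannot escape at all, which is what the None branch models.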
Caustics - bright highlights produced on one object when light
reflects off a shiny object or focuses through a transparent object.
Caustics are handled very differently by different offline renderers
but generally add a significant amount of time to compute.
13. Indirect Illumination
Indirect or Global Illumination (GI) simulates realistic lighting
in a 3D scene. Indirect Illumination doesn't just take into
account the light coming directly from a light source (direct
illumination); it also takes into account how light bounces
off of objects and casts light onto other objects (indirect).
Indirect Illumination greatly enhances the photorealism of a
scene.
Shadows, Reflections and Refractions are all enhanced and
calculated by the Indirect Illumination algorithms.
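The extra bounced light can be pictured as a geometric series: each bounce adds back a fraction (the surface's reflectivity, or albedo) of the light that arrived on the previous bounce. This is a toy illustration of why more bounces brighten a scene, not how any production GI algorithm actually works:

```python
def bounced_light(direct, albedo, bounces):
    """Toy indirect-illumination sum.

    Each bounce reflects a fraction 'albedo' of the previous
    bounce's light, so the indirect term is a geometric series
    added on top of the direct term.
    """
    total = direct
    contribution = direct
    for _ in range(bounces):
        contribution *= albedo
        total += contribution
    return total
```

With zero bounces you get direct illumination only; a few bounces on a 50%-reflective surface already lift the total noticeably, which is why "GI on" renders look brighter and softer than "GI off" ones.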
(Comparison renders: GI OFF vs. GI ON)
15. Camera Properties
Maya creates virtual cameras in a
scene that simulate how a real
video or photo camera moves and
captures images.
Because the virtual cameras
closely simulate real-world
cameras, it is beneficial to
understand some of the properties
of these cameras and compare
them to how Maya emulates them.
16. Depth of Field
- Depth of field is a visual term for the amount of area that is in focus within an
image.
- On a camera, the amount of depth of field can be adjusted using the camera's
aperture. The aperture of a lens controls the amount of light that is let in. The
aperture is measured in F-Stops.
- We can adjust the amount of depth of field by adjusting the F-Stop slider in the
camera attributes.
- The lower the F-Stop (1.2-3.5), the shallower (more blurred) the depth of field will
be. The higher (5.6+), the deeper (more in focus) it will be.
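The F-Stop number is just the ratio of focal length to the physical aperture opening, which is why a lower number means a wider opening and more light:

```python
def aperture_diameter(focal_length_mm, f_stop):
    """Physical aperture diameter implied by an F-Stop.

    F-Stop = focal length / aperture diameter, so a lower F-Stop
    means a wider opening, more light, and a shallower depth of
    field.
    """
    return focal_length_mm / f_stop
```

On a 50 mm lens, f/2 opens the aperture to 25 mm while f/8 opens it to only 6.25 mm, which matches the slide's rule of thumb: low F-Stop, shallow focus.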
18. Focal Length
The Focal Length is the measurement of the distance between
the lens and the film gate.
The lower the Focal Length (8-24mm) the wider and more
distorted (fish-eyed) your image will be. This will make the nose
of a subject appear larger and the face to appear thinner.
Medium focal lengths (35-85mm) have minimal distortion.
The higher the Focal Length (100mm+), the closer and less
distorted your image will be. This will make a face appear wider
than it is.
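The wide-versus-telephoto behaviour falls out of the field-of-view formula, fov = 2 * atan(gate_width / (2 * focal_length)). The 36 mm default gate width below is an assumption (a full-frame-style gate); Maya's actual gate size is set in the camera's film back attributes:

```python
import math

def horizontal_fov_deg(focal_length_mm, gate_width_mm=36.0):
    """Horizontal field of view for a given focal length.

    fov = 2 * atan(gate / (2 * f)): shorter focal lengths see a much
    wider (and more distorted) slice of the scene.
    """
    return math.degrees(2 * math.atan(gate_width_mm / (2 * focal_length_mm)))
```

A 24 mm lens on this gate sees roughly a 74-degree sweep, while a 100 mm lens sees only about 20 degrees, which is the wide/fish-eye versus telephoto/compressed difference described above.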
20. Film Gate Size
On a camera, this is often referred to as the sensor size.
The size of the film gate can greatly affect the aesthetics of your image.
The larger the gate, the brighter the image will be. This will affect the depth of field.
(more light means shallower depth of field)
The larger the gate, the wider the field of view a lens of a given focal length will
produce (this means that you can get closer to a subject, which is important if your
cameras are moving around a smaller space).
The most common Film Gate Sizes are 35mm Full Aperture (matches 35mm photo
camera) and 35mm Film Academy (matches 35mm film/video camera)
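The usual way to compare gate sizes is the crop factor: how much a smaller gate narrows the view relative to a reference gate. The 36 mm reference below is an assumption (a full-frame-style baseline):

```python
def crop_factor(gate_width_mm, reference_width_mm=36.0):
    """Crop factor of a film gate relative to a reference gate.

    The same lens on a smaller gate frames like a longer lens:
    equivalent focal length = actual focal length * crop factor.
    """
    return reference_width_mm / gate_width_mm
```

A 24 mm-wide gate has a crop factor of 1.5, so a 50 mm lens on it frames like a 75 mm lens on the reference gate: the same shot needs the camera to stand further back.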
22. Maya Software
-CPU rendering
-offers raytracing and scan-line rendering
-supports all effects built into Maya including particles,
paint-effects and fluid effects
-not really capable of creating photorealism, but fine
for animation.
-a slower, low quality renderer that will give you the
fewest problems and doesn’t require much extra
knowledge to use.
23. Arnold
-Included in Maya 2017 (need to activate plugin)
-Does not support batch rendering without a paid license
(you can batch render with a watermark)
-CPU rendering
-Unbiased Rendering
-Ray tracing renderer, photorealistic
-simple and easy to learn (ai nodes)
-large user base (lots of support)
24. Mental Ray
-Included in Maya 2016 and earlier (able to batch render)
-Can download for Maya 2017 but batch rendering not available.
-CPU rendering
-Biased Rendering, Unbiased Rendering is optional
-Ray tracing renderer, photorealistic
-large user base (lots of support)
-can use the power of CUDA (NVIDIA graphics cards) to compute the
indirect/global illumination on the GPU. This can speed up your render
times.
25. V-Ray
-Need to purchase license (demo available)
-CPU or GPU rendering (V-Ray RT)
-Biased rendering, Unbiased Rendering is optional
-raytracing (global illumination, photon mapping),
photorealistic
26. Octane
-Need to purchase license to use (demo available)
-Uses its own material and lighting nodes
-Unbiased Rendering
-GPU rendering (fast)
-generally less noise in rendered images (needs less render time to eliminate
noise)
-Does not have a proper farm-based workflow, which makes it ideal for
independent projects but not large-scale productions.