A Workflow for Gamma
Correction in Computer Graphics
S T E F A N S V E B E C K
Bachelor of Science Thesis
Stockholm, Sweden 2009
Bachelor’s Thesis in Media Technology (15 ECTS credits)
at the Degree Programme in Media Technology
Royal Institute of Technology year 2009
Supervisor at CSC was Lars Kjelldahl
Examiner was Daniel Pargman
TRITA-CSC-E 2009:103
ISRN-KTH/CSC/E--09/103--SE
ISSN-1653-5715
Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE-100 44 Stockholm, Sweden
URL: www.csc.kth.se
Abstract
A Workflow for Gamma Correction in Computer Graphics

Currently, 3D graphics production studios often have neither the time nor the resources to set up a proper workflow for their working environment. The task for this project was to investigate different alternatives for implementing gamma correction in a workflow with 3D graphics. A group of unbiased subjects was used to judge the quality of images rendered with and without gamma correction. Additional tests on texture quality and perceived realism were also carried out.

The results indicate that a workflow with proper gamma correction adds realism and increases the predictability of the rendered images. Thus, the time spent on trial-and-error renderings can be minimized, leaving more time for improving image quality. Another great benefit of a proper gamma setup is that it ensures no unnecessary loss of information occurs when an image is processed. Therefore, we propose a schematic overview of gamma in the workflow that can serve as a guide for 3D graphics work in general. The schematic tells us what gamma space a rendered image or resource texture has at a specific point in the workflow, enabling us to predict this and set up our software accordingly. We can also apply these schematics in other areas that involve digital image processing, such as video and photography.
Sammanfattning
A Workflow for Gamma Correction in Computer Graphics

Currently, 3D production studios often have too little time and too few resources to plan a graphics workflow. The goal was to investigate different alternatives for how gamma correction can be implemented in 3D graphics. A group of independent test subjects judged quality differences between images rendered with and without gamma correction. Further testing was carried out to estimate differences in texture quality and in how realistic the images are perceived to be.

The results showed that gamma correction makes the image more realistic and the rendering process more predictable. This means that resources can be used to generate better image quality instead of repeated test renders. Another advantage is that we do not need to worry about unnecessarily losing image information when the image is processed. We have therefore proposed a generalizing schematic that clearly shows how gamma relates to the image processing pipeline. It is easy to see when and where a texture or image has a particular gamma, which increases predictability and simplifies correct configuration of the software we use. We can also apply this in other areas that involve image processing, such as video and photography.
Keywords: realistic image processing, linear workflow, tone mapping, high dynamic range imaging
Acknowledgements
I want to thank my family for supporting me throughout this project, and special thanks go to Karl Palmskog, who dedicated a lot of his spare time to proofreading the document. I also want to thank my supervisor at Cadwalk, Magnus Fuxner, who offered me the opportunity to do this project and arranged the necessary resources. Last but not least, I want to thank my supervisor Lars Kjelldahl at the Royal Institute of Technology for aiding me with critique and ideas.
Contents

1 Introduction
1.1 Workflows in Computer Graphics
1.2 Problem and Motivation
1.3 Thesis Overview

2 Background
2.1 Lighting Terminology
2.1.1 Brightness
2.1.2 Contrast
2.1.3 Luminance
2.1.4 Perceptual Uniformity
2.1.5 Directional Light
2.1.6 Diffuse Light
2.1.7 Interreflection
2.1.8 Ground Reflection
2.1.9 Global Illumination
2.1.10 Material Categories
2.2 Gamma Correction
2.3 High Dynamic Range Images
2.4 Color Mapping
2.5 Using a Linear Workflow
2.6 A Physical Environment

3 Method and Tasks
3.1 Solution Approach
3.2 Methodology
3.3 Tools
3.5 Linear Workflow Setup
3.5.1 Monitor Calibration
3.5.2 Gamma Setup
3.5.3 Render Setup

4 Results
4.1 Overview
4.2 Gamma Correction Test
4.2.1 Scene Setup
4.2.2 Rendered Images
4.3 Texture Comparison
4.3.1 Scene Setup
4.3.2 Prediction
4.3.3 Rendered Images
4.4 Exterior Comparison
4.4.1 Scene Setup
4.4.2 Prediction
4.4.3 Rendered Images

5 Conclusions & Discussion
5.1 Schematics
5.2 HDRI Formats
5.3 Linear Workflow in Other Tools
5.4 Interior Environments
5.5 Conclusions
5.6 Future Work

6 Bibliography
6.1 References
6.2 Online Resources

7 Appendix
7.1 Tables
7.2 Texture Interview Form
7.3 Exterior Interview Form
7.4 Interior Example
7.5 Glossary
1 	 Introduction
1.1 Workflows in Computer Graphics
Workflow is a very broad topic, and in computer graphics it is sometimes referred to as a pipeline. It is a definition of which tools, image formats, and environment settings to use, and in which order. A well-defined workflow is necessary to efficiently turn an artistic idea into reality. In a production studio working with computer graphics, there is a need to set up a proper environment, which includes tool settings and monitor calibration. Without a specific workflow, artists may experience conflicts between tools and environment settings. While one scene might appear and behave correctly for one artist, it might render incorrectly for another because of different environment settings.

Image formats are another issue. For instance, if two artists use different image formats and tools, those tools might not support each other's formats when the artists need to share materials. Therefore, tools working together in a specific workflow should ensure support for the specific format used between both applications. Differently calibrated monitors are another issue, one that leads to misinterpretation of images and complicates the evaluation of a rendered image. Hence, monitor calibration is a required task when setting up an efficient workflow.

Gamma, gamma correction, or nonlinear encoding is an operation on images that adjusts luminance for display on a computer monitor. Computer graphics applications can process images in either linear or nonlinear gamma. Incorrect use of gamma will result in poor image quality and reduce the predictability of colors and textures [16]. Therefore, the correct use of gamma is very important for a computer graphics workflow.
1.2 	Problem and Motivation
The main reason why small media production companies do not bother with gamma is often a lack of knowledge or resources. Documentation on this topic can be rather confusing and difficult to find for a particular application environment. Gamma has also been a troublesome topic for the regular graphic designer and is often completely misunderstood or neglected because of its mathematical nature. Another interesting aspect of gamma and its relation to graphic design is that most discussions are found outside the scientific community. This is perhaps because the topic is thought of as so trivial and scientifically unimportant that non-expert users do not understand its importance. However, the correct use of gamma in graphics workflows and pipelines is extremely important for anyone working with realistic image processing. Hence, an investigation of how to implement this was commissioned by Magnus Fuxner at Cadwalk.

The task was to investigate how to integrate gamma correction in a three-dimensional graphics workflow working in linear gamma: what the common pitfalls and errors are, and how they can be eliminated.
1.3 	Thesis Overview
In Chapter 2, we explain some terms that are important for understanding the concept of gamma and three-dimensional computer graphics. A more detailed survey of gamma and its related issues is covered in Section 2.2.
In Chapter 3, we discuss possible methods and tasks
that will compare workflows with and without gamma
correction and reveal any advantages and disadvantages
for each one. The complete setup for the proposed
workflow can be seen in Section 3.5.
As can be seen in Chapter 4, colors and materials in images where gamma correction was used had higher quality and increased realism (Section 4.2). In Sections 4.3 and 4.4 we note that a physical workflow also enhances the realism of the rendered images.
In Chapter 5, a schematic overview for our workflow is proposed, and different tools and environments are discussed. We conclude that setting up gamma correctly is highly beneficial and should be considered by all computer graphics artisans, although there is still a lot of work to be done to help fellow artisans understand the concept of gamma.
A colored version of this thesis with high resolution is
available at http://stefan.svebeck.se
2 	 Background
2.1 	Lighting Terminology
There are a number of important lighting effects that
are important for our investigation regarding gamma
correction and realistic imagery. Indirect illuminance,
or interreflections are important for adding realism
to interior rendered images [6]. However, an exterior
scene is no exception; both skylight, ground reflections
and interreflections are very important for rendering
realistic images. Without ground reflections and
interreflections, shadows from direct sunlight will
appear unnaturally dark and conceal geometry that
would otherwise be visible in a realistic image [11].
Thus, a proper lighting setup where light behaves
physically correct is very important for any realistic
image processing.
2.1.1 	Brightness
According to the Central Bureau of the Commission Internationale de l'Éclairage (CIE), the accepted authority on lighting, color and vision, and image technology, brightness is defined as an attribute of a visual sensation according to which an area appears to emit more or less light [2]. Brightness is therefore highly subjective and very difficult to quantify.
2.1.2 	Contrast
In the perceptual sense, contrast is defined by the CIE as an assessment of the difference in appearance of two or more parts of a field seen simultaneously or successively. But contrast is also measurable, by quantifying the difference in luminance as dL1/L2 near the luminance threshold and as L1/L2 if the luminance values are high.
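The two contrast measures above can be illustrated with a short sketch; treating dL as the luminance difference L1 - L2 is an assumption made here for the example.

```python
# Illustration of the two contrast measures described above.
# Treating dL as the difference L1 - L2 is an assumption for this sketch.
def weber_contrast(l1, l2):
    """dL/L: suitable near the luminance threshold."""
    return (l1 - l2) / l2

def luminance_ratio(l1, l2):
    """L1/L2: suitable when the luminance values are high."""
    return l1 / l2

# Two adjacent patches at 110 and 100 cd/m^2:
print(weber_contrast(110.0, 100.0))   # 0.1
print(luminance_ratio(110.0, 100.0))  # 1.1
```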
2.1.3 	Luminance
Because brightness is so difficult to quantify, the CIE defined luminance, which is a more measurable quantity. According to Poynton's color FAQ [9], luminance is proportional to the intensity of a light source, but the spectral composition of luminance is related to the brightness sensitivity of human vision.
2.1.4 	Perceptual Uniformity
A change in color difference (i.e., Euclidean distance) in the RGB color space should correspond to an equally perceived visual difference. A more intuitive explanation can be found in Poynton's color FAQ:
The volume control on your radio is designed to be
perceptually uniform: rotating the knob ten degrees
produces approximately the same perceptual increment
in volume anywhere across the range of the control.
If the control were physically linear, the logarithmic
nature of human loudness perception would place all
of the perceptual ”action” of the control at the bottom
of its range.
2.1.5 	Directional Light
In graphics processing, sunlight is referred to as
directional light and comes from only one source in
one direction. In three-dimensional visualization,
shadows from directional light are sharp and black.
2.1.6 	Diffuse Light
In the physical world, sunlight is scattered by the atmosphere and gives rise to skylight, or diffuse lighting. In contrast to sunlight, skylight is omnidirectional and radiates in every direction.
2.1.7 	Interreflection
When light bounces between objects, it is called interreflection. Interreflected light (as seen in Figure 1.6) can brighten areas that cannot be reached by directional light. This phenomenon is usually referred to as "bounce light".
Figure 1.6: Interreflection.
2.1.8 	Ground Reflection
Similar to interreflection, ground reflection (Figure 1.7) is indirect light that interacts with the ground.

Figure 1.7: Ground reflection.

2.1.9 	Global Illumination
In computer graphics applications, global illumination (GI) is a simulation of skylight (diffuse light) to achieve near physically correct lighting.

2.1.10 Material Categories
Materials can be divided into four categories depending on how light reacts to the material [5]. A diffuse reflection is uniform and view independent, without gloss or highlights, whereas a specular reflection is view dependent and often glossy, with edge highlights. A typical diffuse material is paper, and a typical specular material is polished metal. Transmissive materials are transparent: a typical diffuse transmissive material is frosted glass, and a typical specular transmissive material is window glass.

Material categories
1. Diffuse reflection
2. Specular reflection
3. Diffuse transmission
4. Specular transmission

2.2 	Gamma Correction
Gamma correction is a very hot topic in the computer graphics community [17]. However, there have been a lot of misconceptions about this topic as well. One particular misconception is that gamma correction's main purpose is to compensate for the nonlinear gamma of the cathode ray tube (CRT) monitor [8], hence the name "gamma correction". However, the main purpose of gamma correction is to code tristimulus values (proportional to linear light) into a perceptually uniform domain and optimize perceptual performance in as few bits as possible [7]. If an image could be composed of an infinite number of bits, gamma correction would not be necessary.

The luminance of the monitor is given by the power function (a). Luminance (L) is proportional to intensity and is measured in candela per square meter (cd/m2). The CRT transfer function has a black-offset variable (ε) that is directly affected by the monitor's brightness.

(a) L = (V' + ε)^γ

Figure 2.1: The relationship between monitor input values and luminance, normalized between 0 and 1.

Figure 2.2: The relationship between input values and luminance for a gamma correction curve, normalized between 0 and 1.

As can be seen in the graph in Figure 2.1, the relationship between the monitor's input value and the
output luminance is not linear. Thus, a pixel with an input RGB value of 128, or a 50% input value, will be displayed with only 22% luminance. In CRT monitors, the nonlinear relation is due to the electron gun and not the phosphor, as is commonly believed [8].

For simplicity, we will only cover gamma correction for a PC environment. To make the image appear perceptually correct, we have to apply a gamma correction of 2.2 to our image, as seen in Figure 2.2. The gamma correction function (b) is roughly the inverse of the monitor gamma function.

(b) L = (V')^(1/γ)

Without gamma correction, an image will appear very dark or washed out, depending on the image's current gamma space, and it will not make efficient use of the number of bits available. Gamma correction can be applied either during the rendering process in a three-dimensional application or as a post-process step in an image manipulation tool, as long as the source image is in linear gamma.
However, even though we apply gamma correction,
images will lose information and contrast if the
monitor’s brightness (black-offset) is either set too low
or too high.
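The two transfer functions (a) and (b) can be sketched directly in code; a gamma of 2.2 and a black offset of zero are assumed for simplicity.

```python
# Sketch of the monitor transfer function (a) and gamma correction (b),
# assuming gamma = 2.2 and a black offset of zero for simplicity.
GAMMA = 2.2

def monitor_luminance(v, epsilon=0.0):
    """(a) L = (V' + eps)^gamma: input signal -> displayed luminance."""
    return (v + epsilon) ** GAMMA

def gamma_correct(v):
    """(b) L = (V')^(1/gamma): roughly the inverse of (a)."""
    return v ** (1.0 / GAMMA)

# A 50% input signal is displayed at only about 22% luminance:
print(round(monitor_luminance(0.5), 2))                 # 0.22
# Gamma-correcting first makes the displayed result linear again:
print(round(monitor_luminance(gamma_correct(0.5)), 2))  # 0.5
```

This reproduces the 128-in, 22%-out figure quoted above: 0.5 raised to the power 2.2 is about 0.218.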
2.3 	High Dynamic Range Images
The importance of a high dynamic range (HDR)
format in image processing is widely recognized [3].
Additionally, high dynamic range imaging and its
relevance for realistic color reproduction has been
described earlier by [12].
Regular 8-bit, low dynamic range (LDR) images cannot reproduce colors outside the color gamut, or color range; such colors will be clamped, perhaps resulting in burn-out spots. Color values above 1 or below 0 are considered out of gamut.
Figure 2.3: Normal exposure, 32-bit LogLuv TIFF.

Figure 2.4: Overexposed, 32-bit LogLuv TIFF.

Figure 2.5: Underexposed, 32-bit LogLuv TIFF.

The images in Figures 2.3-2.5 show that toggling between overexposure and underexposure without any loss of data is possible for HDR images. This is crucial for any kind of advanced image operation, and the image in Figure 2.6 shows us
what happens if we try to revert the overexposure to normal exposure with an LDR image. Information is lost in the overexposed region of the teapot, highlights in particular. Therefore, a 16-bit or lower range is not always sufficient for post-processing.

HDR images store their color values as floating-point numbers, not as traditional 8- or 16-bit values per RGB channel. This greatly increases the amount of information an image can keep and does not limit it to color values between 0 and 1, although the image can usually only be viewed on an LDR display. Float values would also be of no use if no applications could process them. Fortunately, float values are exactly what most graphics processing software uses for computation.

Another difference is that HDR images are encoded in linear gamma. This is very beneficial, since most gradient and exposure functions are adapted to and optimal for linear gamma. However, LDR images with 16 bits per color channel can be considered sufficient in many cases, especially if there is no need for advanced post-processing. LDR images with 8 bits per color channel should only be considered if there is no need for post-processing at all.
An important aspect of using HDR images is their
file size. A 32-bit version of the image in Figure 2.3 is
roughly seven times larger than an 8-bit version.
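The benefit of float storage over clamped 8-bit values can be illustrated with a single hypothetical pixel; the values are illustrative and no specific file format is implied.

```python
# Why float (HDR) pixels survive exposure changes that destroy clamped
# 8-bit (LDR) pixels. A single hypothetical pixel; no file format implied.
def to_8bit(v):
    """Quantize a linear value to 8 bits, clamping out-of-gamut values."""
    return max(0, min(255, round(v * 255)))

hdr = 2.5           # float pixel brighter than the LDR ceiling of 1.0
ldr = to_8bit(hdr)  # clamped to 255: the highlight detail is gone

# Reducing exposure to a quarter recovers detail only from the HDR value:
print(to_8bit(hdr * 0.25))        # 159 -- the true darkened highlight
print(to_8bit(ldr / 255 * 0.25))  # 64  -- scaled clamped value, detail lost
```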
2.4 	Color Mapping
Color mapping is a function frequently used by computer graphics artists; it is also referred to as tone reproduction or tone mapping. The main purpose of color mapping is to preserve a specific characteristic of an HDR image for output on an LDR display. However, color mapping is often misunderstood by computer artists and is sometimes misused to correct a poor lighting setup [19].

LDR displays and print cannot reproduce high dynamic range values correctly; thus, there is a need to map values into low dynamic range. Today, almost every display is an LDR display, unable to reproduce all the information available in an HDR image. Although displays with high dynamic range exist, they are so far mainly used in scientific research, and even they are nowhere near the dynamic range of human vision [10].

There are multiple tone reproduction operators available, each with a specific effect on the rendered image. From an artistic point of view, exponential color mapping is mainly used to reduce burn-out spots in the rendered image.

In the rendering process, however, nonlinear tone mapping is something we want to avoid, since we want to preserve the image in linear gamma. An exponential tone mapping operator would also clamp the output colors to values between 0 and 1. Using a linear tone mapping operator preserves the image's float values and its linear gamma space.
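The difference between a linear and an exponential tone mapping operator can be sketched as follows; the exponential form 1 - e^(-v) is an assumption for illustration, since the exact formulas vary between renderers.

```python
import math

# Two tone mapping operators: linear multiply preserves float values and
# linear gamma; the exponential form (an assumed formula, renderers vary)
# compresses everything into [0, 1) and squeezes the highlights.
def linear_multiply(v, mult=1.0):
    return v * mult

def exponential(v):
    return 1.0 - math.exp(-v)

hdr_values = [0.25, 1.0, 4.0]
print([linear_multiply(v) for v in hdr_values])        # unbounded, linear
print([round(exponential(v), 2) for v in hdr_values])  # all below 1.0
```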
2.5 	Using a Linear Workflow
Linear workflow is a term used to describe a computer graphics pipeline that works in linear gamma space. A linear gamma space is linear with respect to the intensity of light. A linear environment could be any graphics application that processes images in linear gamma, such as 3D Studio Max [13] or Shake [23]. There are a few tutorials that address linear workflow and gamma correction in specific environments [27], [21].
Input
Textures can be divided into two categories according to gamma space: nonlinear and linear. Images in formats such as JPEG have a nonlinear gamma space [18] and thus need to be corrected into linear gamma space before being imported into the linear rendering process. HDR images, however, already have a linear gamma space and hence do not need to be changed.

Figure 2.6: The left image is a close-up of the image in Figure 2.3. The right image was first overexposed like the image in Figure 2.4 and then saved as a 16-bit TIFF image. It was then reverted to normal exposure.

Output
There are several options to consider for the output process in a three-dimensional application. We can either bake the gamma correction permanently into our image while processing or apply it in a post-process step. Both ways have their own advantages, but if we are going to do any compositing in another application, it is better to leave the image in linear gamma. In short, a linear workflow can be divided into four stages, as shown below.

Linear workflow in short
1. Identify gamma values for source images
2. Nullify gamma values
3. Process
4. Apply gamma values for correct output

2.6 	A Physical Environment
A very interesting topic among graphics artists using three-dimensional computer graphics software is working in a physical environment that is as close to real life as possible. A physical camera has a shutter speed, an iris, and much more that lets it tweak the amount of light exposing the medium. A real-life sun has a very high intensity, which is correct from a physical point of view but not very convenient for the regular 3D artist, as it might lead to overexposed images if misunderstood, as can be seen in Lele's video tutorial [19]. However, a physically correct rendered image will be more realistic if executed properly.

A render engine named V-Ray [25] supports a physical environment. Issues related to overexposure and the physical sun in V-Ray are generally solved by darkening the color values of materials with an RGB multiplier. This can be done by adding a V-Ray color map in the diffuse channel, as seen in Figure 1.5. Textures can be treated in a similar fashion. The exact values and settings for the V-Ray color map are usually determined by the scene setup. Interior scenes, for example, demand brighter color maps so that light can spread nicely throughout the interior space. Another solution is of course to increase the exposure of our physical camera. This would also seem to be more physically correct, since material properties are the same regardless of whether the camera is placed in an exterior or an interior environment.

A more detailed presentation of V-Ray and its physical environment can be found in The V-Ray Documentation [26].

Figure 1.5: V-Ray color map, RGB multiplier set to 0.255.
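The four-stage linear workflow listed in Section 2.5 can be sketched in code; a simple power-law gamma of 2.2 is assumed for the 8-bit source texture, and the processing step is just an illustrative doubling of light intensity.

```python
# Sketch of the four-stage linear workflow from Section 2.5, assuming a
# simple power-law gamma of 2.2 for the nonlinear source texture.
GAMMA = 2.2

def nullify_gamma(texture):
    # Stages 1-2: the source is identified as gamma-encoded, so decode it.
    return [v ** GAMMA for v in texture]

def process(linear):
    # Stage 3: work in linear light; here we simply double the intensity.
    return [min(1.0, 2.0 * v) for v in linear]

def apply_gamma(linear):
    # Stage 4: re-apply gamma for correct output on an LDR display.
    return [v ** (1.0 / GAMma) if False else v ** (1.0 / GAMMA) for v in linear]

texture = [0.0, 0.5, 1.0]  # hypothetical gamma-encoded pixel values
print(apply_gamma(process(nullify_gamma(texture))))
```

Doing the doubling in linear light rather than on the encoded values is exactly what keeps the result physically plausible.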
3 	 Method and Tasks
3.1 	Solution Approach
In Section 2.2, we claimed that the use of gamma correction in a linear workflow is often misunderstood and thought of as unnecessary or overrated for realistic image processing.

Therefore, the task was to investigate how and when gamma correction should be applied in our graphics pipeline, and to find out what the differences are between using a linear workflow and a nonlinear workflow. Since gamma correction is indeed known to be required for displaying an image perceptually correctly on a low dynamic range display, a gamma correction test is mainly a proof of theory. Still, we cannot be entirely confident that our workflow setup is correct without testing.
To find the most efficient workflow, we also need to think about structure and organization. There might be parts of our workflow that seem unnecessarily complicated without yielding enough quality in return. Comparing linear workflows across environments is one approach to finding solutions.
Task 1: Gamma Correction Test
First of all, we need to investigate how a linear workflow affects a scene with simple geometry and diffuse colored materials, compared to a nonlinear workflow. Although similar tests have been done previously [21], it is imperative to present the problem in a simple form. The analysis will be based on the physical correctness of the lighting in the scene. Skylight, interreflections, and ground reflection should also contribute correctly to the scene.

Task 2: Texture Comparison
After the initial task, a thorough investigation with textured materials is required to reveal any difference in texture quality between a linear and a nonlinear workflow. To define the quality of each render, a group of unbiased subjects will then express their opinions about texture, shadow, and lighting quality.

Task 3: Exterior Comparison
To be confident that our workflow is viable for advanced scenery, we have to test it in an exterior architectural scene. This will reveal any differences in reflections, displacement, and other advanced material properties.

Task 4: Other Environments
An additional task is to find out how the V-Ray workflow environment compares to other environments with respect to gamma and its related issues.
3.2 	Methodology
The quality of images is something that needs to
be defined and therefore we let a group of unbiased
subjects answer a set of questions about the quality
and realism of images.
At first, we did not use a calibrated machine for this
purpose, since it was very practical to let people do
the test via a remote connection. However, there is a
substantial risk that those results are inaccurate due to
bad viewing conditions and uncalibrated monitors.
Note that our aim was to define quality by feedback,
not a statistical significant result. Hence, it is more
important with a small set of accurate data, than a lot
of data of questionable integrity. Although, as can be
seen in the Appendix, Table 7.1 and Table 7.2, the
results did not deviate relatively by much between
calibrated and non-calibrated monitors but it is not
something we can significantly determine with so few
subjects.
The computer used for the test was calibrated with
gamma 2.2 and white point was set to 6500 Kelvin,
just as the workstation computer for this project. To
minimize any dependence on order we arranged the
question forms in a different order for the second half
of our group. We also made certain nobody viewed the
images in odd angles, because the output on a liquid
crystal display (LCD) varies a lot depending on which
angle it is viewed from. However, viewing conditions
might still have been influenced by the time of day.
The unbiased group consisted of both users familiar with graphics and those who were not, though it was difficult to find any subjects completely ignorant of graphics.

3.3 	Tools
Our toolset consisted mainly of computer graphics software. The tools can be split into four categories: calibration tools, post-production tools, rendering engines, and 3D graphics applications.

Spyder 2 Pro
Spyder 2 Pro is a hardware calibration tool for monitors. It was used before any other program.

3D Studio Max
3D Studio Max is a full-featured 3D graphics application, which was used for modeling and setting up the scene required for each task.

V-Ray
V-Ray is a rendering engine that enables a physical environment setup. It is ideal for producing realistic images.

Blender and YafRay
Blender is a 3D graphics application and YafRay is a rendering engine; they were used for comparison with a workflow that incorporated 3D Studio Max and V-Ray.

Photoshop
The standard 2D graphics application Photoshop was used for post-production and compositing of images.

3.5 	Linear Workflow Setup

3.5.1 	Monitor Calibration
Doing proper monitor calibration is an absolute requirement for any workflow-related tests. There are two ways of doing monitor calibration: by hardware or by software. Hardware calibration is more accurate, since only a small amount of user interaction is required. However, software calibration may be sufficient in most cases, and there are free tools and calibration techniques available online [14, 15]. In any case, doing software calibration is better than not calibrating at all.

A problem related to the calibration of output devices is that different devices have different color gamuts, or ranges of color space. If colors are out of gamut, a color mapping algorithm needs to handle the image before it can be rendered properly on an LDR display [4].

However, there are other factors that have a large impact on how images appear on an output device. The output device itself does not solely determine the luminosity of the viewed display; reflected light and ambient light also add to the total luminosity. Thus, we need to make sure we do not expose our screens to unnecessarily harsh conditions, such as bright sunlight, that might result in glare or specular reflections.

3.5.2 	Gamma Setup
For the tests, an LCD display was calibrated to gamma 2.2 with the white point set to 6500 K using a hardware calibration tool. Hence, the workflow and the 3D Studio Max gamma setup need to be adjusted accordingly, see Figure 3.1.

Figure 3.1: Gamma setup in 3ds Max.

To do this, we need to
enable gamma correction and set gamma to 2.2. However, this is not enough if we want to make sure our workflow is truly linear; we need to make sure our input and output settings are correct as well. Input is set to gamma 2.2, since 8-bit textures usually have a gamma correction applied to them that needs to be nullified before entering the rendering process. HDR images, on the other hand, have a linear gamma and do not need any changes; thus we need to override the input gamma with 1.0 every time we use an HDR image. We can assume this is more convenient than overriding the gamma for every 8-bit texture.
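The input rule above can be captured in a small helper; the function name and the format list are hypothetical illustrations, not part of any 3ds Max or V-Ray API.

```python
# Hypothetical helper mirroring the input rule above: the name and the
# format list are illustrative, not part of any 3ds Max or V-Ray API.
LINEAR_FORMATS = {"hdr", "exr"}  # HDR sources are already linear

def input_gamma(file_extension):
    """Gamma to nullify on import: 1.0 for linear HDR sources,
    2.2 for typical gamma-encoded 8-bit textures."""
    return 1.0 if file_extension.lower() in LINEAR_FORMATS else 2.2

print(input_gamma("hdr"))  # 1.0
print(input_gamma("jpg"))  # 2.2
```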
We do not want our output and rendered image to have any gamma correction applied. With the output gamma set to 1.0, we can save our image in a linear format, which is essential if we want to post-process the image in compositing software.

Another very important aspect to consider is that textures and colors should be viewed in gamma 2.2. Otherwise, the final result will not be predictable, and a selected linear color will look significantly different from the rendered, gamma-corrected color. To fix this, we enabled "Affect Material Editor" and "Affect Color Selectors".

As seen in Figure 3.2, colors viewed in gamma 2.2 correlate almost perfectly with the rendered colors. Colors in gamma 1.0 only correlate at black and white. This is no surprise, since a change in gamma does not affect the color values 0 or 1.
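That 0 and 1 are unaffected by any gamma change can be checked directly:

```python
# 0 and 1 are fixed points of every gamma curve; only midtones shift.
GAMMA = 2.2
for v in (0.0, 0.5, 1.0):
    print(v, round(v ** (1.0 / GAMMA), 3))
# 0.0 and 1.0 map to themselves; 0.5 is lifted to about 0.73.
```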
3.5.3 	Render Setup
In all renders we used an irridiance map for primary
and light cache for secondary bounces. Subdivision
settings are always fixed at level 3. However, the most
important thing is to leave color mapping in linear
multiply, as seen in Figure 3.3.
Linear multiply will simply multiply colors based
on their brightness. This is essential since we do not
want to bother with a nonlinear color mapping and
alter the linear workflow. There are several nonlinear
alternatives that have certain effects and are popular to
use, but they are not viable for a linear workflow. There
are a lot of parameters involved in the render process,
and we will not go through all of them here since
most have no significance for the linear workflow. The
color mapping alternatives are further explained in The
V-Ray Documentation.
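To illustrate the difference between the two kinds of operator, here is a small sketch. The linear multiply is simply a scale; the exponential curve shown is a common textbook form of a saturating tone map and only approximates V-Ray's actual operator:

```python
import numpy as np

def linear_multiply(color, multiplier=1.0):
    # Pure scaling; values above 1.0 clip when written to an LDR output.
    return color * multiplier

def exponential(color, multiplier=1.0):
    # Saturating curve: highlights compress and never reach pure white.
    return 1.0 - np.exp(-color * multiplier)

radiance = np.array([0.25, 1.0, 4.0])       # linear scene values
print(linear_multiply(radiance))             # bright end clips at 1.0 on output
print(np.round(exponential(radiance), 3))    # [0.221 0.632 0.982]
```

The saturating curve shows why such operators are not viable for a linear workflow: the mapping distorts the linear relationship between radiance values before the image is stored.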
Figure 3.3: Setup for linear color mapping.
Figure 3.2: Five spheres rendered with a gamma
correction of 2.2 applied. For comparison, the
uppermost row of color bars is in gamma 1.0 and the
bottom row is in gamma 2.2.
4 	 Results
4.1 Overview
In Section 3.1 we presented three tasks that would
yield enough image data for discussing the advantages
and disadvantages of our workflow. The complexity of
the tasks increases in the following order:
1. Gamma correction test
2. Texture comparison
3. Exterior comparison
The results of the first test, in Section 4.2, show that
images without gamma correction do not have realistic
lighting and that shadows are unnaturally dark even when
exponential color mapping is used. The predictability of
colors also increased when we applied gamma correction.
The second test, in Section 4.3, compares the texture
quality of images rendered in a nonlinear and a linear
workflow. A group of unbiased subjects stated that
images rendered in a linear workflow had better lighting
and texture quality. Textures were also more predictable
in the image rendered in a linear workflow.
In Section 4.4 we increased complexity by rendering a
complete exterior scene with advanced materials and
geometry. The test was a comparison between a scene
rendered both with a linear workflow and a physical
environment versus one in a nonlinear workflow and
a non-physical environment. Yet again a group of
unbiased subjects judged the realism and quality of the
rendered images. They stated that the image rendered
with a linear workflow in a physical environment had
much better lighting quality, increased realism and an
increased amount of visible detail.
A summary of the results show that a workflow where
gamma is used correctly together with a physical
workflow is beneficial for processing realistic images.
4.2	 Gamma Correction Test
4.2.1 	The Scene Setup
The first test is a basic investigation that illustrates
what can happen if you do not use a linear workflow
and neglect the fact that the relationship between the
output and an LCD monitor is not linear.
The test scene is composed of simple geometry, a
V-Ray sun, physical camera and sky. The sun is set
perpendicular to the plane and thus simulates daylight.
There is one black sphere, four white tori and one
yellow sphere. All materials in this scene are diffuse
to minimize any view dependent interference. The
spheres are control objects to keep track of whether
we have correct exposure; if a sphere is underexposed,
it will not be yellow on top (rgb: 255, 255, 0). Sun
intensity was set to 1.0 and the exposure settings were
set to the following:
Exposure settings
f-stop: 8
Shutter speed: 100 s-1
ISO: 72
4.2.2 	Rendered Images
The image in Figure 4.1 is rendered with the gamma
settings previously mentioned in Section 3.3. As
can be seen, the image looks unnaturally dark and
underexposed. However, the image is actually not dark
but is in fact an image in linear gamma, which explains
its dark appearance. Many artists do not actually
know about gamma correction and start tweaking
the lighting and materials to make the rendered image
appear bright again [24].
With increased sun intensity, materials in the rendered
image (Figure 4.2) are overexposed, but we notice that
some parts previously in the dark are now fairly visible
compared to the image in Figure 4.1. We also noticed
an increase of render time from roughly 1m 32s to 1m
48s. This increase is due to an increase of sun intensity,
causing an increase in the number of reflections and
hence more calculations during the rendering process.
Another inevitable fact is that the sky has become very
bright due to the intensity of the sun. The increase in
sun intensity means an increase in skylight intensity,
tinting the shadows blue. Of course, this is not a
realistic image at all, and a common response is to
apply a tone mapping operator that maps values outside
the gamut into a low dynamic range domain.
Exponential tone mapping was applied to the image in
Figure 4.3 during the rendering process, which indeed
reduced and minimized overexposure. At first, this
might seem like a good idea, since the rendered image
looks a lot better and we can now see a lot more
detail in the shady areas. Note, however, that we
have unnaturally dark shadows with very little ground
reflection underneath the objects, which is not a
realistic condition in an exterior scene with daylight.
Additionally, a strong bluish tint is still present
because of the skylight intensity. Thus, an increase in
sun intensity combined with exponential color mapping is
not the best way to render a realistic image.
Brighter materials might instead be considered a
viable option, since they increase interreflection and
thus brighten the unnaturally dark shadows without
affecting the sky. However, as in the previous example,
the render in Figure 4.4 was overexposed when it was
rendered with linear color mapping. Additionally,
the render time increased from 1m 36s to 1m 45s,
which again is due to an increase in the number of
reflections.
By applying an exponential color mapping operator we
managed to reduce overexposure in the image (Figure
4.5), with a result similar to the image with increased
sun intensity (Figure 4.3). However, the increased
interreflection was not enough, and the issue of
unnaturally dark shadows persisted. The increased
interreflection in fact created more trouble by making
already bright materials appear to glow or even emit
light. This is highly noticeable where the white tori
are in contact with each other. Another obvious effect
is that the sky has become very dark and no longer
simulates daylight. The scene content is irrelevant to
exponential color mapping, since it is a purely
mathematical operator; it will reduce the brightness of
the sky whether that is physically correct or not.
There are numerous methods that lead to unrealistic
images if we do not use gamma correction. By applying
gamma correction, the render in Figure 4.6 does not
Figure 4.1: No gamma correction. Color mapping is
set at linear multiply and sun intensity is set to 1.0.
Figure 4.2: No gamma correction. Color mapping is
set to linear multiply and sun intensity is set to 3.0.
Figure 4.3: No gamma correction. Color mapping is
set to exponential and sun intensity is set to 3.0.
only appear brighter. It also appears to be correctly
exposed, and the shadows seem more natural than in
the previous examples. Interreflection and ground
reflection are clearly present, and the sky background
also appears to be fairly exposed for daylight conditions.
4.3 	Texture Comparison
4.3.1 	The Scene Setup
The scene setup is yet again very simple and we use a
white and black sphere as control objects to validate
that our exposure settings are not completely off. The
wooden sphere and ground are the objects of interest.
The sun intensity is set at 1.0 and we used the same
exposure settings as in the previous experiment:
Exposure settings
f-stop: 8
Shutter speed: 100 s-1
Film ISO: 72
4.3.2 	Prediction
Quality of textures needs clarification. We focused
our attention on how textures appear in the rendered
images versus the original texture image. Our aim was
to have little or no difference between the original
texture image and its appearance in rendered images.
A prediction is that the render in Figure 4.7, which is
processed in a linear workflow, will be perceived to have
a higher degree of texture quality. Another prediction
is that this image will be perceived to have a higher
degree of shadow quality, due to its similarity with the
gamma correction experiment in Section 4.2.
However, perceived brightness is more difficult to
predict, since it is a psychophysical phenomenon and
very dependent on its surroundings; colors and
three-dimensional objects are just two factors among
many [1]. But we could assume that exponential color
mapping will produce a brighter image, though with less
contrast and dynamic range.
Figure 4.4: No gamma correction. Color mapping is
set to linear multiply. Textures are twice as bright.
Figure 4.6: Gamma correction of 2.2 is applied. Color
mapping is set to linear multiply.
Figure 4.5: No gamma correction. Color mapping is
set to exponential. Textures are twice as bright.
4.3.3 	Rendered Images
The results from the texture comparison show that the
image rendered in a linear workflow was perceived to
have better texture and shadow quality. However, whether
either image had a higher degree of realism was unclear,
because many subjects stated that the scene was
unrealistic to begin with. Table 4.1 displays how many
times each image was chosen for each question; the
following questions were asked:
Questions
2. In which image does the wooden sphere best
match the wood texture?
3. In which image does the ground best match the
ground texture?
4. Does any image appear to be brighter than the
other?
5. Does any of the images appear to have a higher
texture quality?
6. Does any of the images appear to have a higher
shadow quality?
7. Does any of the images appear to be more realistic?
The image in Figure 4.8 was an attempt to achieve a
realistic image in a nonlinear workflow. Exponential
color mapping was applied to decrease overexposure, just
as we did in Section 4.2. However, this time we tried
to tweak both textures and lighting to achieve similar
brightness in both renders without lowering
the white (rgb: 255, 255, 255) color of the sphere. The
image in Figure 4.7 was perceived to have textures
with more details, better quality and vivid colors. For
example, one person stated:
Figure 4.7: Linear workflow.
Figure 4.8: Nonlinear workflow, with exponential
color mapping.
Figure 4.9: Ground texture.
Figure 4.10: Wood texture.

Question                     2   3   4   5   6   7
First image, Figure 4.7      5   5   1   5   5   4
Second image, Figure 4.8     1   1   5   1   1   2

Table 4.1: Answers for the texture comparison test.
See appendix for the complete interview form.
”The first image [Figure 4.7] has more details in the
grooves”
An evident correlation is that textures matching the
original texture image were considered to have a
higher degree of quality. According to our unbiased
group, the image rendered in a nonlinear workflow
with exponential color mapping was perceived to have
dull and washed-out textures compared to the image
processed in a linear workflow. Other relevant quotes:
”The first image [Figure 4.7] seem sharper and has
higher texture quality”
”Reality is not sharp, thus the second image [Figure
4.8] is more realistic”
Even though one could argue that the texture in the
first image (Figure 4.7) might be too vivid and colorful
to be natural, this is not an issue; the solution is
simply to select a natural-looking texture in the first
place. Thus, we can eliminate any need to guess the
final appearance of a texture in a three-dimensional
render. Several quotes refer to the unnaturally dark
shadows, for example:
”The shadows in the second image [Figure 4.8] are too
dark”
Just as we predicted, many of our subjects perceived the
image in Figure 4.7 to have a higher degree of shadow
quality. Some commented that the shadows in the second
image (Figure 4.8) were unnatural and too dark to be
realistic, which confirms that physically correct
lighting is required for realistic shadows. A few
people, however, appreciated the unnaturally dark
shadows; the personal preference for contrast might
differ slightly between individuals.
”The second image [Figure 4.8] has higher shadow
quality as it is more compact”
The second image (Figure 4.8) was perceived by many
as brighter. However, some answered that even though
the first image has slightly brighter highlights, the
second image has a more uniform brightness.
4.4 	Exterior Comparison
4.4.1 	The Scene Setup
The third test is an experiment where we started from
an already finished exterior scene that was neither
processed in a linear workflow nor with V-Ray's physical
environment. The old scene had its color mapping
operator set to Reinhard, which preserves slightly more
saturation than a regular exponential color mapping.
The scene, materials and textures were then processed
in a linear workflow and the scene setup was converted
into V-Ray's physical environment. This gives us a few
extra variables to keep track of compared to the
previous experiments; first of all, the V-Ray sun is
much more intense than the standard sun and is
actually as intense as the real sun.
The sun intensity was set to 1.0, but because the sun
was set lower in the sky we had to change our exposure
settings to the following:
Exposure settings
f-stop: 8
Shutter speed: 75 s-1
Film ISO: 108
4.4.2 	Prediction
What we aimed for was to improve the quality of
shadows, textures and materials to achieve an increase
of realism.
As before, a group of unbiased subjects expressed their
opinions about several image quality aspects. However,
this time we did not let them see any original textures
related to the image, removing any possibility that this
would have influenced the perceived image quality.
We predicted that our results would be perceived
differently compared to our previous experiment in
Section 4.2 since it is much more complex. Materials
and geometry might distract the viewer.
Note, however, that we do not claim these renders to be
exceptionally beautiful, we are merely interested in the
differences between the end result of two workflows.
Figure 4.13: Grass. Figure 4.14: Wall. Figure 4.15: Wood. Figure 4.16: Asphalt.
Figure 4.12: Nonlinear workflow without V-Ray’s physical entities.
Figure 4.11: Linear workflow with V-Ray’s physical entities.
4.4.3 	Rendered Images
The results from this test show us that a physically
correct workflow is indeed very important for a
realistic render. Table 4.2 shows how many times each
image was chosen for each of the following questions:
Questions
2. Can any image be considered to have more details?
3. Can any image be considered to have a higher degree
of shadow quality?
4. Which image do you think has a higher degree of
realism?
The image in Figure 4.11 was rendered in a linear
workflow and V-Ray’s physical environment. The
image in Figure 4.12 was rendered in a nonlinear
workflow and did not use any of V-Ray’s physical
entities. The render in Figure 4.11 was perceived to
be more detailed by all subjects in the group; this is
highly noticeable on the sidewalk and on reflective
materials such as the windows. Apparently the effect of
reflections and bump mapping is more evident in a
physical workflow.
”Grass and bushes are more distinct in the first image
[Figure 4.11]”
The image rendered in a nonlinear workflow was
perceived to have less depth than the image rendered
in a linear workflow. It was also perceived to have
unnaturally bright colors, almost as if they were
emitting light.
”Colors in the second image [Figure 4.12] looks like
they are exaggerated and emit light.”
If we observe the RGB histogram of the image rendered
in a nonlinear workflow, we see very sharp spikes at
the bright end of the spectrum. The histogram of the
image rendered in a linear workflow is dominated by
midtones and has no spikes; luminance is evenly
distributed compared to the first image. This results
in a higher degree of contrast and depth.
”The first image [Figure 4.11] has a greater difference
between light and dark areas.”
Many subjects mentioned that the image rendered in a
linear workflow had much better lighting.
Just as in the previous experiment, it is apparent that
the four original textures (Figures 4.13 - 4.16) are much
more predictable in a linear workflow. Materials in the
image rendered in a nonlinear workflow were perceived
as washed out, and it is evident that they are much
brighter than the original textures.
Another benefit of using a physical workflow is that
the time of day is something we can mimic. As we can see
from the shadow of the tree, the sun is set rather low in
the sky, which would indicate either an early evening
or an early morning. Several subjects answered that it
appeared to be daylight in the image rendered without
a physical environment, which does not coincide with
the position of the sun. This was of course not the
case with the image rendered in a physical environment.
One respondent remarked:
”It is early evening in the first image [Figure 4.11] and
midday in the second image [Figure 4.12]”
Both the aspect of using a linear workflow and
changing into V-Ray’s physical environment certainly
had an effect and we can assume this setup will be our
preferred choice for our workflow.
Question                     2   3   4
First image, Figure 4.11     5   4   5
Second image, Figure 4.12    -   2   1

Table 4.2: Answers for the exterior comparison test.
See Appendix, Section 7.3, for the complete interview
form.
Figure 5.1: A schematic overview of our proposed linear workflow in relation to gamma space.
5 	 Conclusions & Discussions
5.1 	The Schematics
Our schematic proposal for a linear workflow in
relation to gamma is fairly simple. The aim was to
focus attention on where and when a particular
process is in linear or nonlinear gamma space.
As we can see in our schematics (Figure 5.1),
texture maps and colors (as seen in the color selectors)
are in nonlinear gamma and need to be nullified
into linear gamma before they are imported into the
linear workflow. This is very important if our textures
and colors are to behave physically correctly in
the rendering process. Another important advantage
is that we can see the color selectors and materials in
nonlinear gamma. Of note is that HDRI maps already
have linear gamma and do not need to be changed
before they are imported into the linear workflow.
If we want to simulate the physical world, we need
to process (render) our scene in linear-light encoding
(gamma 1.0) and that is exactly how V-Ray operates.
Although processing in linear light is physically
accurate it is not suitable for computations involving
human perception. Many graphic applications such as
Photoshop or the GNU Image Manipulation Program
(GIMP) process in nonlinear gamma and thus images
created from scratch in those applications will appear
perceptually uniform.
After processing, the rendered image can be viewed in
the Frame Buffer (FB). By default it is viewed in linear
space, but if we apply gamma correction we can view
it in a perceptually uniform way. The source, however,
remains in linear gamma, so we do not need to worry
about disturbing the linear workflow.
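The frame-buffer step can be summarised as a pure view transform (a sketch with names of our own choosing, not V-Ray's): the stored pixel stays linear, and only the previewed value is encoded:

```python
def view_transform(linear_value, display_gamma=2.2):
    """Display-side gamma correction: encode a clamped linear value
    for preview without touching the stored render."""
    clamped = min(max(linear_value, 0.0), 1.0)
    return clamped ** (1.0 / display_gamma)

stored = 0.18                   # linear value kept for post-production
shown = view_transform(stored)  # ~0.46, what the artist sees on screen
```

Because the transform is applied only at display time, saving the frame buffer to disk still yields linear data for compositing.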
We can sum up three different options regarding how
to deal with gamma correction in our workflow.
1. The first option is to override gamma through color
mapping during the actual render process. This rules
out any possibility of storing the image in linear gamma
and is not ideal for post-production.
2. The second option is to apply gamma correction in
the frame buffer. This is generally better, as it leaves
us the possibility to choose between linear and
nonlinear gamma.
3. The third option is to apply gamma correction in
a post-processing tool. This might be the best option,
as it preserves all the information. However, as
mentioned earlier, it increases the need for storage
capacity.
If we continue with post-processing, we find that
most applications with a physically correct environment
work in linear gamma. However, even though many
applications process in linear gamma, we cannot assume
our output will be in linear gamma or high dynamic
range. That is why a correct gamma setup and choice of
image format are so essential.
When we are finished processing, the image needs to
be gamma corrected to be perceptually pleasing on a
low dynamic range display.
5.2 	HDRI Formats
There are a number of specific HDR image formats
available for storing images in linear gamma. An
independent validation of several HDR image formats
was presented by [3]. They concluded that OpenEXR,
XYZE, TIFF LogLuv and RGBE (Radiance HDR) are
HDR image formats of very high quality. However,
their results showed that OpenEXR had the best
reproduction accuracy, albeit with less dynamic range.
OpenEXR is also supported by today's high-end
graphics cards and most computer graphics software,
which is an obvious benefit compared to formats such
as XYZE and Pixar Log TIFF.
Below are some results where a typical architectural
render in Figure 5.2 was stored in two different 32-bit
formats without compression. We did not include any
extra channels.
HDRI format                  File size
OpenEXR 32-bit               54,962 KB
OpenEXR half-float 16-bit    27,511 KB
RGBE 32-bit                  11,561 KB
TIFF LogLuv 32-bit            7,967 KB
Even though OpenEXR has better quality compared
to both TIFF LogLuv and RGBE, it is apparent that
OpenEXR requires a lot more storage capacity. Even
with lossy compression, it is unlikely that we could
reduce the file size down to the level of TIFF LogLuv.
One benefit of using OpenEXR is that there is support
for an arbitrary number of channels such as Z-buffer,
motion blur, etc.
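The uncompressed float sizes above can be sanity-checked from the image dimensions alone (2500 x 1875 pixels, three channels per pixel); the small difference from the measured files is plausibly header overhead:

```python
def raw_size_kb(width, height, channels, bytes_per_sample):
    # Raw pixel-data size, ignoring file headers and compression.
    return width * height * channels * bytes_per_sample / 1024

full = raw_size_kb(2500, 1875, 3, 4)   # 32-bit float per channel
half = raw_size_kb(2500, 1875, 3, 2)   # 16-bit half per channel

print(f"32-bit float: {full:.0f} KB")  # ~54932 KB vs the measured 54962 KB
print(f"16-bit half:  {half:.0f} KB")  # ~27466 KB vs the measured 27511 KB
```

The RGBE and TIFF LogLuv files are much smaller than four bytes per pixel would suggest, consistent with those formats applying run-length or log encoding.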
However, we are a bit confused concerning the actual
bit range of OpenEXR when saved from 3D Studio Max.
The half-float 16-bit format of OpenEXR is recognized
by Photoshop as a 32-bit format and thus enables
exposure control without any loss of information. We
are not yet sure why this is, although our guess is
that it is recognized as a float format and that exposure
control is therefore enabled. If the half-float 16-bit
format of OpenEXR is near 32-bit quality, it would save
us a lot of space while keeping the possibility of
arbitrary channels.
5.3 	Linear Workflow In
Other Tools
A linear workflow is not something related only
to V-Ray or certain software. It is relevant in all
kinds of graphics processing, all the way from
photography to three-dimensional visualization.
A linear workflow in photography, or linear RAW
workflow since it is often based upon the RAW image
format, has been a popular topic among photographers
for as long as digital cameras have been available. The
principle behind the RAW workflow is to keep every
image source in RAW format (linear gamma) until the
processing and composition are final [20].
Between software applications there is not much
difference regarding the linear workflow principles.
The big difference is in how the software handles input
and output for textures and colors. In the YafRay
rendering engine, textures are always assumed to be in
linear gamma and need to have their gamma nullified
through pre-processing [27]. This can be quite
bothersome and often leads to misunderstandings for
artists not familiar with gamma or a linear workflow.
It could be solved if it were possible to control
gamma in shaders.
Figure 5.2: Size test example, image size 2500x1875
pixels.
There can also be some issues concerning whether color
selectors are presented in nonlinear or linear gamma.
If color selectors are in linear gamma, it will be very
difficult to predict color values in a gamma corrected
render. This might be one of the reasons why gamma
has a bad reputation among artists, and not just in
computer science.
5.4 	Interior Environments
Interior environments are much more complicated
compared to exterior environments due to the amount
of occlusion. It is difficult to expose the interior space
bright enough and not overexpose the sky at the same
time without rendering the scene twice. The common
solution in photography is to take two pictures of
the same scene with different exposure settings. A
common opinion among architects is that white-walled
interiors should be unnaturally bright. Therefore we
decided to create an architectural scene to show off
the workflow’s capabilities in interiors. The image
was stored in TIFF 16-bit as we did not do any post-
production. The resulting image is visible in Figure 5.3
and a large version is attached in the Appendix 7.4.
5.5 	Conclusions
Even though the linear workflow has been considered
an advanced topic among artists, this is mainly because
there have been so many misunderstandings. One
misconception, mentioned in Section 2.2, is that
gamma correction is required to compensate for the
nonlinearity of a monitor. Another, common among
artists, is that the linear workflow is a technique for
lighting experts only. In some cases this is valid, since
certain applications might not support full control of
gamma, which makes the whole process a lot more
complicated, as mentioned in Section 5.3. The ideal
application would be one where no artist would have to
think about gamma correction, neither what it is for nor
why it exists. But HDR displays do not appear to be
available in the near future for the common artist or
their clients, as mentioned in Section 2.4, and tools
have no standard user interface for how gamma should be
displayed or controlled. Therefore, knowledge about
gamma is still going to be required in the future.
Based on the results from our experiments we can
conclude that processing our renders with a proper
gamma setup is indeed very important for realistic
imagery. Thus, the linear workflow is not something
a computer graphics artist should neglect if the best
possible quality is greatly desired. Our schema shows
the simplicity of a linear workflow in relation to gamma
space and a key part of our schema is to increase the
predictability of the perceived colors and textures. A
predictable scene will decrease the amount of time
spent on processing ”trial and error” renderings. We
can also conclude that there is no increase in render
time when processing in a linear workflow, even though
the image quality is increased. However, a drawback is
that a linear workflow requires a lot of storage capacity
if the rendered image is going to go through any kind
of advanced post-production. HDR image formats
with float values are indeed very large compared to
LDR images, see Section 5.2. Without any need for
advanced post-processing, a rendered image could
easily be stored in an uncompressed LDR format such
as Portable Network Graphics (PNG) [22]. If we had
unlimited storage capacity the preferred choice would
be OpenEXR as it has the best image quality.
Figure 5.3: Interior render, image size 2500x1500.
For a large version, see Appendix 7.4.
Hopefully this thesis will lead to a better
understanding of the linear workflow and encourage
its use in graphics processing.
5.6 	Future work
The schematic overview presented in Section 5.1
is supposed to be the foundation for explaining a
workflow in relation to gamma in a simple manner.
This overview could serve as a model for creating
tutorials for different tools and environments.
Even though we have proposed a schematic overview
of our workflow for the three-dimensional rendering
process, a more detailed investigation of
post-processing and video editing tools is needed.
Even though the exponential tone mapping operator
featured in 3D Studio Max is not a viable option, there
might be tone mapping operators in post-production
tools that would yield a higher quality image.
Therefore, an investigation of how and whether specific
tone reproduction operators might be viable for our
workflow should be done, especially if we want to use
them for advanced post-production and map certain
aspects of an HDR image onto an LDR display.
Another task would be to build an image library with
examples that show off the possibilities of a linear
workflow in different environments, settings and tools.
6 	 Bibliography
6.1 	References
[1] Adelson, E. H. Perceptual organization and the
judgment of brightness. Science 262, 2042–2044
(1993).
[2] CIE No 17.4, International Lighting Vocabulary
(Vienna, Austria: Central Bureau of the
Commission Internationale de L’Éclairage)
[3] Debevec, Reinhard, Ward, and Pattanaik, High
Dynamic Range Imaging, SIGGRAPH 2004
Course #13.
[4] Hsien Che-Lee, Introduction to color imaging
science, 16, Cambridge University Press, 2005.
[5] Hunter, R. and Harold, R. The measurement of
appearance. Wiley, 2. ed., 5. print. edition, 1987.
[6] Nishita, T. and Nakamae, E. Continuous tone
representation of three-dimensional objects taking
account of shadows and interreflection. ACM
SIGGRAPH Computer Graphics, v.19 n.3,
p.23-30, Jul. 1985.
[7] Poynton, C. The Rehabilitation of Gamma.
Rogowitz ,B. E.,l and T. N. Pappas (eds.),
Human Vision and Electronic Imaging III,
Proceedings of SPIE vol. 3299, p. 232-249
(Bellingham, Wash.: SPIE, 1998).
[8] Poynton, C. Digital Video and HDTV Algorithms
and Interfaces, 1 edition, vol. 23, p. 257-259,
2003.
[9] Poynton, C. Frequently Asked Questions about
Colour, 2006-11-28.
[10] Seetzen, H. Whitehead, L., and Ward, G. A
high dynamic range display system using low and
high resolution modulators. In Proc. of the 2003
Society for Information Display Symposium.
[11] Takagi, A., Takaoka, H., Oshima, T., and
Ogata, Y. Accurate rendering technique based on
colorimetric conception. In Computer Graphics
(SIGGRAPH ’90 Proceedings) (Aug. 1990), F.
Baskett, Ed., vol. 24, p. 263–272.
[12] Ward, G. High Dynamic Range Imaging, Proc.
Ninth Color Imaging Conference, November
2001.
6.2 	Online Resources
[13] 3D Studio Max specifications, 31 May 2008.
http://usa.autodesk.com/adsk/servlet/
index?siteID=123112&id=8108755
[14] Black Point Calibration, 31 May 2008.
http://www.aim-dtp.net/aim/calibration/
blackpoint/crt_brightness_and_contrast.htm
[15] Color Calibration, 24 May 2008.
http://www.brilliantprints.com.au/colour_
calibration.html
[16] Gamma Correction, 24 May 2008.
http://www.happy-digital.com/freebies/tip_
gamma.html
[17] Gamma Correction in Computer Graphics,
31 May 2008.
http://www.teamten.com/lawrence/graphics/
gamma/
[18] Jpeg, 31 May 2008.
http://en.wikipedia.org/Jpeg
[19] Lele’s tutorial on V-Ray’s physical workflow,
31 May 2008.
http://www.chaosgroup.com/forums/vbulletin/
showthread.php?t=36359&page=29
[20] Linear RAW workflow, 28 May 2008.
http://www.aim-dtp.net/aim/techniques/linear_
raw/index.htm
[21] Linear Workflow ’Reloaded’, 15 May 2008.
http://www.gijsdezwart.nl/tutorials.php
[21] OpenEXR, 1 June 2008.
http://www.openexr.com
[22] Portable Network Graphics, 1 June 2008.
http://en.wikipedia.org/Portable_Network_
Graphics
[23] Shake specifications, 1 June 2008.
http://www.apple.com/shake/specs.html
[24] Tone and Gamma Correction in 3D,
15 May 2008.
http://www.ypoart.com/tutorials/tone/index.php
[25] V-Ray, 1 June 2008.
http://www.chaosgroup.com
[26] V-Ray Documentation, 31 May 2008.
http://www.V-Ray.us/V-Ray_documentation/
[27] YafRay Linear Workflow Tutorial, 28 May 2008.
http://forums.cgsociety.org/showthread.
php?t=305727
7 	Appendix
A colored version of this thesis with high resolution is available at my website and project blog, http://stefan.svebeck.se
7.1 Tables
Question                     2   3   4   5   6   7
First image, Figure 4.7      5   5   1   5   5   4
Second image, Figure 4.8     1   1   5   1   1   2

Table 7.1: Texture comparison test. With monitor calibration.

Question                     2   3   4   5   6   7
First image, Figure 4.7      8   7   3   7   6   5
Second image, Figure 4.8     -   1   5   1   2   3

Table 7.2: Texture comparison test. No monitor calibration.

Question                     2   3   4
First image, Figure 4.11     5   4   5
Second image, Figure 4.12    -   2   1

Table 7.3: Exterior comparison test. With monitor calibration.

Question                     2   3   4
First image, Figure 4.11     8   7   8
Second image, Figure 4.12    -   1   -

Table 7.4: Exterior comparison test. No monitor calibration.
7.2 Texture Interview Form
Below are two images.
1. First
2. Second
Textures from left to right.
A. Wood
B. Ground
Questions
1. Do you have any experience or previous knowledge about graphics?
2. In which image does the wooden sphere best match the wood texture (A)?
3. In which image does the ground best match the ground texture (B)?
4. Does either image appear brighter than the other?
5. Does either image appear to have higher texture quality?
6. Does either image appear to have higher shadow quality?
7. Does either image appear more realistic?
8. Do you have any other comments about the differences in the rendered images? (optional)
[Images 1 and 2, each showing texture A (Wood) and texture B (Ground), omitted.]
7.3 Exterior Interview Form
Below are two images.
1. First
2. Second
Questions
1. Do you have any experience or previous knowledge about graphics?
2. Can either image be considered to have more detail?
3. Can either image be considered to have higher shadow quality?
4. Which image do you think has a higher degree of realism?
5. What time of the day do you think it is in each image?
6. Do you have any other comments about the differences between the rendered images? (optional)
[Exterior images 1 and 2, omitted.]
7.4 Interior Example
Figure 7.2: Interior example from our proposed workflow. Full-size version of Figure 5.3.
7.5 Glossary
Brightness (Section 2.1.1)
The perceived amount of emitted light.
Color Mapping
See tone mapping.
Contrast (Section 2.1.2)
The perceived difference between two or more parts of a field.
Diffuse Light (Section 2.1.6)
Omnidirectional light, the opposite of directional light.
Directional Light (Section 2.1.5)
Light that only comes from one source in one
direction.
Gamma Correction (Section 2.2)
Transforms an image from linear gamma to non-linear
gamma to make it appear perceptually pleasing.
Global Illumination (Section 2.1.9)
A computer graphics simulation of omnidirectional
(diffuse) light.
Ground Reflection (Section 2.1.8)
Bouncing light between an object and the ground.
HDRI Formats (Sections 2.3, 5.2)
Image formats with high dynamic range.
Interreflection (Section 2.1.7)
Bouncing light between objects.
Linear Workflow (Sections 2.5, 5.1)
A workflow that processes images in linear gamma.
Luminance (Section 2.1.3)
A measurable quantity that is proportional to the
intensity of light.
Perceptually Uniform (Section 2.1.4)
A change in color values corresponds to an equal change in perceived color values.
Tone Mapping (Section 2.4)
Transforms an image from high dynamic range into
low dynamic range.
Tone Reproduction
See tone mapping.
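The gamma correction and linear workflow entries above can be illustrated with a short sketch. This is our own minimal example, not code from the thesis: it assumes a pure power-law display gamma of 2.2 (as discussed in Section 2.2) rather than the piecewise sRGB curve, and the function names are made up for illustration.

```python
GAMMA = 2.2  # assumed display gamma (Section 2.2)

def decode(v, gamma=GAMMA):
    """Nullify gamma: nonlinear (display) value in [0, 1] -> linear light."""
    return v ** gamma

def encode(lum, gamma=GAMMA):
    """Apply gamma correction: linear light in [0, 1] -> display value."""
    return lum ** (1.0 / gamma)

def composite_average(a, b):
    """The four-stage linear workflow (Section 2.5) applied to averaging
    two gamma-encoded pixel values:
    1. identify gamma, 2. nullify it, 3. process linearly, 4. re-apply gamma."""
    mixed = (decode(a) + decode(b)) / 2.0  # steps 1-3
    return encode(mixed)                   # step 4

# A 50% input signal yields only ~22% of full luminance (Section 2.2):
print(round(decode(0.5), 2))               # ~0.22
# Averaging black and white in linear light gives ~0.73, not the naive 0.5:
print(round(composite_average(0.0, 1.0), 2))
```

Averaging in the nonlinear domain would give 0.5, a gray that is too dark relative to the light actually reaching the eye; doing the arithmetic in linear light is precisely why source textures are decoded to linear gamma before rendering.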


svebeck_stefan_09103

  • 1. A Workflow for Gamma Correction in Computer Graphics S T E F A N S V E B E C K Bachelor of Science Thesis Stockholm, Sweden 2009
  • 2. A Workflow for Gamma Correction in Computer Graphics S T E F A N S V E B E C K Bachelor’s Thesis in Media Technology (15 ECTS credits) at the Degree Programme in Media Technology Royal Institute of Technology year 2009 Supervisor at CSC was Lars Kjelldahl Examiner was Daniel Pargman TRITA-CSC-E 2009:103 ISRN-KTH/CSC/E--09/103--SE ISSN-1653-5715 Royal Institute of Technology School of Computer Science and Communication KTH CSC SE-100 44 Stockholm, Sweden URL: www.csc.kth.se
  • 3. Abstract A Workflow for Gamma Correction In Computer Graphics Currently, 3D graphics production studios do not have time nor resources for setting up a proper workflow for their working environment. The task for this project was to investigate different alternatives for implementing gamma correction in a workflow with 3D graphics. A group of unbiased subjects where used to determine the quality between images rendered with or without gamma correction. Additional tests on texture quality and determination of realism was also carried through. The results indicate that a workflow with a proper gamma correction adds realism and increases the predictability of the rendered images. Thus, the time used for trial and error renderings can be minimized, adding to the time available for increasing the image quality. Another great benefit for a proper gamma setup is that it will make sure no unnecessary loss of information occurs when an image is processed. Therefore, we propose a schematic overview in relation to gamma that can be used as a guide for any 3D graphics work in general. The schematics tells what gamma space the rendered image or resource texture has in a specific part of the workflow, enabling us to predict and setup our software accordingly. We can also apply these schematics in other areas that include some kind of digital image processing, such as video and photography. Sammanfattning Ett arbetsflöde för gammakorrigering i datorgrafik För närvarande har 3D produktion studios ofta för lite tid och resurser för att planera ett grafiskt arbetsflöde. Målet var att undersöka olika alternativ för hur man kan implementera gamma korrektion i 3D grafik. En grupp oberoende testpersoner fick avgöra kvalitetsskillnader mellan bilder renderade med och utan gammakorrektion. Ytterligare testning genomfördes för att uppskatta skillnader i texturkvalitet och hur verklighetstrogna bilderna uppfattas. 
Resultatetvisadeattgammakorrektiongörbildenmerverklighetstrogenochrenderingsprocessenmeraförutsägbar. Detta innebär att resurser kan användas för att generera bättre bildkvalitet istället för upprepade testförsök. En annan fördel är att vi inte behöver oroa oss för att förlora bildinformation i onödan när bilden behandlas. Därför har vi föreslagit ett generaliserande schema som tydligt visar hur gammas relation till bildbehandlingsprocessen. Det går enkelt att se när och var en textur eller bild har ett visst gamma vilket ökar förutsägbarheten och förenklar korrekt konfiguration av den mjukvara vi använder. Vi kan även tillämpa detta i andra områden som berör bildbehandling, t.ex. video och fotografi. Keywords, realistic image processing, linear workflow, tone mapping, high dynamic range imaging
  • 4. Acknowledgements I want to say thank you to my family for supporting me throughout this project and special thanks to Karl Palmskog who dedicated a lot of his spare time into proofreading the document. I also want to thank my supervisor at Cadwalk, Magnus Fuxner, who offered me the opportunity to do this project and arranged the necessary resources. Last but not least I want to thank my supervisor Lars Kjelldahl at the Royal Institute of Technology for aiding me with critique and ideas.
  • 5. Content 1 Introduction 1.1 Workflow in 3D Graphics ..........1 1.2 Problem and Motivation.............1 1.3 Thesis Overview..........................1 2 Background 2.1 Lighting Terminology...................2 2.1.1 Brightness 2.1.2 Contrast 2.1.4 Luminance 2.1.5 Perceptually Uniform 2.1.6 Directional Light 2.1.7 Diffuse Lighte 2.1.8 Interreflection 2.1.9 Ground Reflection 2.1.10 Global Illumination 2.2 Gamma Correction......................3 2.3 HDRI .........................................4 2.4 Color Mapping............................5 2.5 Linear Workflow .........................5 3 Method and Tasks 3.1 Solution Approach........................7 3.2 Methodology.................................8 3.3 Tools.............................................8 3.4 Linear Workflow Set up.................8 3.4.1 Monitor Calibration 3.4.2 Gamma Setup 3.4.3 Render Setup 4 Result 4.1 Overview.......................................10 4.2 Gamma Correction Test................10 4.2.1 Scene Setup 4.2.2 Rendered Images 4.3 Texture Comparison .....................12 4.3.1 Scene Setup 4.3.2 Prediction 4.3.3 Rendered Images 4.4 Exterior Comparison....................14 4.4.1 Scene Setup 4.4.2 Prediction 4.4.3 Rendered Images 5 Conclusions & Discussions 5.1 Schematics....................................17 5.2 HDRI formats..............................18 5.3 Linear Workflow In Other Tools...18 5.4 Interior Environments..................19 5.5 Conclusions..................................19 5.6 Future Work.................................20 6 Bibliography 6.1 References....................................20 6.2 Online Resources.........................21 7 Appendix 7.1 Tables...........................................22 7.2 Texture Interview Form................23 7.3 Exterior Interview Form...............25 7.4 Interior Example .........................27 7.5 Glossary ......................................28
  • 6. 1 1 Introduction 1.1 Workflows in Computer Graphics Workflow is a very broad topic and it is sometimes referred to as pipeline in computer graphics. It it a definition of what tools, image formats and environment settings to use and in which order. To efficiently execute an artistic idea into reality a well defined workflow is necessary. In a production studio working with computer graphics, there is a need to set up a proper environment which includes tool settings and monitor calibration. Without a specific workflow, artists may experience a conflict between tools and environment settings. While one scene might appear and behave correct for one artist it might render incorrectly for another artist because of different environment settings. Image formats are another issue. For instance, if two artists use different image formats and tools these might not support each other if they need to share materials.Therefore,toolsworkingtogetherinaspecific workflow should ensure support for the specific format used between both applications. Differently calibrated monitors are an issue that will lead to misinterpretation of images. Thus, complicating the process of evaluation for a rendered image. Hence, monitor calibration is a required task for setting up an efficient workflow. Gamma, gamma correction or nonlinear encoding is an operation on images that adjust luminance for display on a computer monitor. Computer graphic applications can process images in either linear or nonlinear gamma. Incorrect use of gamma will result in poor image quality and reduce predictability of colors and textures [16]. Therefore, the correct use of gamma is very important for a computer graphics workflow. 1.2 Problem and Motivation The main reason why small media production companies do not bother with gamma is often due to a lack of knowledge or resources. Documentation on this topic can be rather confusing and difficult to find for a particular application environment. 
Gamma has also been a troublesome topic for any regular graphic designer to deal with and is often completely misunderstood or neglected because of its mathematical nature. Another interesting aspect regarding gamma and its related issues to graphic design is that most discussions can be found outside the scientific community. This is perhaps because it is thought of as trivial and scientifically unimportant that non-expert users do not understand the importance of gamma. However, the correct use of gamma in graphic workflows and pipelines are extremely important for anyone working with realistic image processing. Hence, an investigation of how to implement this was commissioned by Magnus Fuxner at Cadwalk. The task was to investigate how to integrate gamma correction in a three-dimensional graphics workflow working in linear gamma. What the common pitfalls and errors are, and how they can be eliminated. 1.3 Thesis Overview In Chapter 2, we explain some terms that are important for understanding the concept of gamma and three- dimensional computer graphics. A more detailed survey of gamma and its related issues are covered in Section 2.2. In Chapter 3, we discuss possible methods and tasks that will compare workflows with and without gamma correction and reveal any advantages and disadvantages for each one. The complete setup for the proposed workflow can be seen in Section 3.5. As can be seen in Chapter 4, colors and materials in images where gamma correction was used had a higher quality and increased realism (Section 4.2). In Section 4.3 we note that the physical workflow also enhances the realism of the rendered images (Section 4.4). In Chapter 5 a schematic overview is proposed for our workflow and different tools and environments are discussed. We conclude that setting up gamma correctly is highly beneficial and should be considered by all computer graphics artisans. 
Although there is still a lot of work that could be done to help fellow artisans to understand the concept of gamma. A colored version of this thesis with high resolution is available at http://stefan.svebeck.se
  • 7. 2 2 Background 2.1 Lighting Terminology There are a number of important lighting effects that are important for our investigation regarding gamma correction and realistic imagery. Indirect illuminance, or interreflections are important for adding realism to interior rendered images [6]. However, an exterior scene is no exception; both skylight, ground reflections and interreflections are very important for rendering realistic images. Without ground reflections and interreflections, shadows from direct sunlight will appear unnaturally dark and conceal geometry that would otherwise be visible in a realistic image [11]. Thus, a proper lighting setup where light behaves physically correct is very important for any realistic image processing. 2.1.1 Brightness According to the Central Bureau of the Commission Internationale de L’Éclairage (CIE), which is the accepted authority of lighting, color and vision, and image technology, brightness is defined to be an attribute of a visual sensation according to which an area appears to emit more or less light [2]. Brightness is therefore something highly subjective to the viewer and is very difficult to quantify. 2.1.2 Contrast In the perceptual sense contrast is defined by CIE as an assessment of the difference in appearance of two or more parts of a field seen simultaneously or successively. But contrast is also measurable by quantifying the difference in luminance using dL1 / L2 near the luminance threshold and L1 /L2 if the luminance values are high. 2.1.3 Luminance Because brightness is so difficult to quantify, CIE definedluminancewhichisamoremeasurablequantity. According to Poynton’s color FAQ [9] luminance is proportional to the intensity of a light source. But the spectral composition of luminance is related to the brightness sensitivity of the human vision. 2.1.4 Perceptual Uniformity A change in color difference (i.e. 
Euclidan distances) in the RGB color space should correspond to an equally perceived visual difference. A more trivial explanation can be found in Poynton’s color FAQ: The volume control on your radio is designed to be perceptually uniform: rotating the knob ten degrees produces approximately the same perceptual increment in volume anywhere across the range of the control. If the control were physically linear, the logarithmic nature of human loudness perception would place all of the perceptual ”action” of the control at the bottom of its range. 2.1.5 Directional Light In graphics processing, sunlight is referred to as directional light and comes from only one source in one direction. In three-dimensional visualization, shadows from directional light are sharp and black. 2.1.6 Diffuse Light In a physical world, sunlight is scattered by the atmosphere and give rise to skylight, or diffuse lighting. In contrast with sunlight, skylight is omnidirectional and radiates in every direction. 2.1.7 Interreflection When light bounces between objects it is called interreflection. Interreflected light (as seen in Figure 1.6) can brighten up areas that cannot be reached by directional light. This phenomena is usually referred to as ”bounce light”. Figure 1.6: Interreflection.
  • 8. 3 perceptually uniform domain and optimize perceptual performance, in as few bits as possible [7]. If an image could be composed of an infinite number of bits, the use of gamma correction would not be necessary. The luminance of the monitor is given by the power function (a). Luminance (L) is proportional to intensity and is measured in candela per square centimeter (cd/ cm2). The CRT transfer function has a black-offset variable (ε) that is affected directly by the monitor’s brightness. (a) L = (V’ + ε)γ As can be seen in the graph below in Figure 2.1, the relationship between the monitor’s input value and the 2.1.8 Ground Reflection Similar to interreflection, ground reflection (Figure 1.7) is indirect light which interacts with the ground. 2.1.9 Global illumination In computer graphics applications global illumination (GI) is a simulation of skylight (diffuse light) to achieve near physically correct lighting. 2.1.10 Material Categories Materials can be divided into four different categories depending on how light reacts to the material [5]. A diffuse reflection is uniform and is view independent without gloss or highlights whereas a specular reflection is view dependent and is often glossy and have edge highlights. A typical diffuse material is paper and a typical specular material is polished metal. Transmissive materials are transparent and a typical diffuse transmissive material is frosted glass, a typical specular transmissive material is the glass in windows. Material categories 1. Diffuse reflection 2. Specular reflection 3. Diffuse transmission 4. Specular transmission 2.2 Gamma Correction Gamma correction is a very hot topic in the computer graphics community [17]. However, there have been a lot of misconceptions about this topic as well. One particular misconception is that gamma correction’s main purpose is to compensate for the nonlinear gamma of the cathode ray tube (CRT) monitor [8]. Hence the name ”gamma correction”. 
However, the main purpose of gamma correction is to transform code tristimulus values (proportional to linear light) into a 0 0.2 0.4 0.6 0.8 0 0.2 0.4 0.6 0.8 1 Luminosity,L Input Signal, V’ Figure 2.1: The relationship between monitor input values and luminance normalized between 0 and 1. 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Luminosity,L Input Signal, V’ Figure 2.2: The relationship between input values and luminance for a gamma correction curve normalized between 0 and 1. Figure 1.7: Ground reflection.
  • 9. 4 output luminance is not linear. Thus, a pixel with an input RGB value of 128, or 50% input value, will be rendered with only 22% luminance. In CRT monitors, the nonlinear relation is due to the electron gun and not phosphor as commonly believed [8]. For simplicity, we will only cover gamma correction for a PC environment. To make the image appear perceptually correct, we have to apply a gamma correction of 2.2 to our image, as seen in Figure 2.2. The gamma correction function (b) is roughly the inverse of the monitor gamma function. (b) L = (V’) 1/γ Without gamma correction an image will appear very dark or washed out depending on the image current gamma space. It will not make efficient use of the number of bits available. Gamma correction can be applied either during the rendering process in a three- dimensional application or post-process in an image manipulation tool, as long as the source image is in linear gamma. However, even though we apply gamma correction, images will lose information and contrast if the monitor’s brightness (black-offset) is either set too low or too high. 2.3 High Dynamic Range Images The importance of a high dynamic range (HDR) format in image processing is widely recognized [3]. Additionally, high dynamic range imaging and its relevance for realistic color reproduction has been described earlier by [12]. Regular 8-bit, low dynamic range (LDR) images cannot reproduce colors outside the color gamut, or color range, and thus colors will be clamped and perhaps result in burn-out spots. Color values above 1 or below 0 are considered out of gamut. The images in Figures 2.3 - 2.5 are examples that show us that toggling between overexposure and underexposure without any loss of data is possible for HDR images. This is crucial for any kind of advanced image operation, and the image in Figure 2.6 show us Figure 2.4: Overexposed, 32-bit LogLuv TIFF. Figure 2.3: Normal exposure, 32-bit LogLuv TIFF. 
Figure 2.5: Underexposed, 32-bit LogLuv TIFF.
  • 10. 5 what happens if we try to reverse the overexposure to normal exposure with a LDR image. Information is lost in the overexposed region of the teapot, highlights in particular. Therefore, 16-bit or lower range is not always sufficient for post-processing. HDR images store their color values in float and not in traditional 8/16-bit values per RGB channel. This greatly increases the amount of information an image can keep and does not limit it to color values between 0 and 1, although the image can usually only be viewed on a LDR display. Also, float values would not be of any use if there was no applications that could process float values. Fortunately, float values are exactly what is used for computation in most graphic processing software. Another difference is that HDR images are encoded in linear gamma. This is very beneficial, since most gradient and exposure functions are adapted and optimal for linear gamma. However, LDR images with 16-bits per color channel can be considered sufficient in many cases, especially if there is no need for any advanced post-processing. LDR images with 8-bits per color channel can only be considered if there is no need for post-processing at all. An important aspect of using HDR images is their file size. A 32-bit version of the image in Figure 2.3 is roughly seven times larger than an 8-bit version. 2.4 Color Mapping Color mapping is a frequently used function by computer graphic artists and can also be referred to as tone reproduction or tone mapping. The main purpose of color mapping is to preserve a specific characteristic of a HDR image for output on a LDR display. However, color mapping is often misunderstood by computer artists and sometimes misused for correcting a poor lighting setup [19]. LDRdisplaysandprintcannotreproducehighdynamic range values correctly. Thus, there is a need to map values into low dynamic range. 
Today, almost every display is an LDR display, not able to reproduce all the information available in an HDR image. Even though there are displays with high dynamic range, they are so far mainly used in scientific research, and they are not even near the dynamic range of human vision [10]. There are multiple tone reproduction operators available, and they are all specific in what effect they have on a rendered image. From an artistic point of view, exponential color mapping is mainly used to reduce burned-out spots in the rendered image. In the rendering process, however, the tone mapping operator is something we want to avoid, since we want to preserve the image in linear gamma. An exponential tone mapping operator would also clamp the output colors to values between 0 and 1. Using a linear tone mapping operator will preserve the image's float values and linear gamma space. 2.5 Using a Linear Workflow Linear workflow is a term essentially used to describe a computer graphics pipeline that works in linear gamma space. A linear gamma space is linear to the intensity of light. A linear environment could be any graphics application that processes images in linear gamma, such as 3D Studio Max [13] or Shake [23]. There are a few tutorials that address linear workflow and gamma correction in specific environments [27], [21]. Input Textures can be divided into two different categories according to gamma space: nonlinear and linear. Images in formats such as JPEG have a nonlinear gamma space [18] and thus need to be corrected into linear gamma Figure 2.6: The left image is a close-up of the image in Figure 2.3. The right image was first overexposed as the image in Figure 2.4 and then saved as a 16-bit TIFF image. It was then reverted to normal exposure.
space before being imported into the linear rendering process. HDR images, however, already have a linear gamma space and hence do not need to be changed. Output There are several options to consider for the output process in a three-dimensional application. We can either bake the gamma correction permanently into our image while processing or apply it in a post-process step. Both ways have their own advantages, but if we are going to do any composition in another application it is better to leave the image in linear gamma. A short representation of a linear workflow can be divided into four stages, as shown below. Linear workflow in short 1. Identify gamma values for source images 2. Nullify gamma values 3. Process 4. Apply gamma values for correct output 2.6 A Physical Environment A very interesting topic among graphics artists using three-dimensional computer graphics software is working in a physical environment that is as close to real life as possible. A physical camera has a shutter speed, an iris and much more that enables tweaking the amount of light exposing the medium. A real-life sun has a very high intensity, which is correct from a physical point of view but not very convenient for the regular 3D artist, as it might lead to overexposed images if misunderstood, as can be seen in Lele's video tutorial [19]. However, a physically correct rendered image will be more realistic if executed properly. A render engine named V-Ray [25] supports a physical environment.
Issues related to overexposure and the physical sun in V-Ray are generally solved by darkening the color values on materials with an RGB multiplier. This can be done by adding a V-Ray color map in the diffuse channel, as seen in Figure 1.5. Textures can be treated in a similar fashion. Exact values and settings for the V-Ray color map are usually determined by the scene setup. Interior scenes, for example, demand brighter color maps so that light can spread nicely throughout the interior space. Another solution is of course to increase the exposure in our physical camera. This would also seem to be more physically correct, since material properties are the same regardless of whether the camera is placed in exterior or interior environments. A more detailed presentation of V-Ray and its physical environment can be found in The V-Ray Documentation [26]. Figure 1.5: V-Ray color map, RGB multiplier is set to 0.255.
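The four-stage linear workflow summarized above can be sketched as follows. Using a plain gamma of 2.2 for 8-bit sources is an assumption (real JPEGs use the slightly more complex sRGB curve), and the render step is only a placeholder for the engine:

```python
GAMMA = 2.2

def nullify_gamma(v):
    """Stage 2: bring a nonlinear (gamma-encoded) source value into linear gamma."""
    return v ** GAMMA

def render(v):
    """Stage 3: process in linear light (identity placeholder for the render engine)."""
    return v

def apply_gamma(v):
    """Stage 4: gamma correct the linear result for correct output."""
    return v ** (1.0 / GAMMA)

# Stage 1: a JPEG texel is identified as nonlinear; an HDR texel is already linear.
jpeg_texel = 0.5
out = apply_gamma(render(nullify_gamma(jpeg_texel)))  # round trips back to 0.5

hdr_texel = 0.5                                       # already linear: skip stage 2
hdr_out = apply_gamma(render(hdr_texel))
```

The round trip for the nonlinear source is lossless with float pixels, while the HDR texel goes straight into the process step, mirroring the rule that only nonlinear sources need their gamma nullified.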
3 Method and Tasks 3.1 Solution Approach In Section 2.2, we claimed that the use of gamma correction in a linear workflow is often misunderstood and thought of as unnecessary or overrated for realistic image processing. Therefore, the task was to investigate how and when gamma correction should be applied in our graphics pipeline, and to find out what the differences are between using a linear workflow and a nonlinear workflow. Since gamma correction is indeed known to be required for displaying an image perceptually correctly on a low dynamic range display, a gamma correction test is mainly a proof of theory, although we cannot be entirely confident that our workflow setup is correct without testing. To find the most efficient workflow, we also need to think about structure and organization. There might be parts in our workflow that seem unnecessarily complicated without yielding enough quality in return. A comparison with linear workflows in other environments would be one approach to find solutions. Task 1: Gamma Correction Test First of all, we need to investigate how a linear workflow affects a scene with simple geometry and diffuse colored materials compared to a nonlinear workflow. Although similar tests have been done previously [21], it is imperative to present the problem in a simple form. An analysis will be based upon the physical correctness of the lighting in the scene. Skylight, interreflections and ground reflection should also contribute correctly to the scene. Task 2: Texture Comparison After the initial task, a thorough investigation with textured materials is required to reveal any difference in texture quality between a linear and a nonlinear workflow. To define the quality of each render, a group of unbiased subjects will then express their opinion about texture, shadow and lighting quality. Task 3: Exterior Comparison To be confident that our workflow is viable for advanced scenery we have to test it in an exterior architectural scene.
This will reveal any difference in reflections, displacement and other advanced material properties. Task 4: Other Environments An additional task is to find out how the V-Ray workflow environment compares to other environments with respect to gamma and its related issues. 3.2 Methodology The quality of images is something that needs to be defined, and therefore we let a group of unbiased subjects answer a set of questions about the quality and realism of images. At first, we did not use a calibrated machine for this purpose, since it was very practical to let people do the test via a remote connection. However, there is a substantial risk that those results are inaccurate due to bad viewing conditions and uncalibrated monitors. Note that our aim was to define quality by feedback, not to obtain a statistically significant result. Hence, a small set of accurate data is more important than a lot of data of questionable integrity. As can be seen in the Appendix, Table 7.1 and Table 7.2, the results did not deviate much between calibrated and non-calibrated monitors, but this is not something we can determine with significance from so few subjects. The computer used for the test was calibrated with gamma 2.2 and the white point was set to 6500 Kelvin, just as the workstation computer for this project. To minimize any dependence on order, we arranged the question forms in a different order for the second half of our group. We also made certain nobody viewed the images at odd angles, because the output on a liquid crystal display (LCD) varies a lot depending on the angle it is viewed from. However, viewing conditions might still have been influenced by the time of day. The unbiased group consisted of both users familiar with graphics and those who were not. Though it was difficult to find any subjects completely ignorant of
graphics. 3.3 Tools Our toolset consisted mainly of computer graphics software. The tools can be split into four different categories: calibration tools, post-production tools, rendering engines and 3D graphics applications. Spyder 2 Pro Spyder 2 Pro is a hardware calibration tool for monitors. It was used before any other program. 3D Studio Max 3D Studio Max is a full-featured 3D graphics application, which was used for modeling and setting up the scene required for each task. V-Ray V-Ray is a rendering engine that enables a physical environment setup. It is ideal for producing realistic images. Blender and YafRay Blender is a 3D graphics application and YafRay is a rendering engine; they were used for comparison with a workflow that incorporated 3D Studio Max and V-Ray.
Photoshop The standard 2D graphics application Photoshop was used for post-production and compositing of images. 3.5 Linear Workflow Setup 3.5.1 Monitor Calibration Doing proper monitor calibration is an absolute requirement for doing any workflow-related tests. There are two ways of doing monitor calibration: by hardware or by software. Hardware calibration is more accurate, since only a small amount of user interaction is required. However, software calibration may be sufficient in most cases and there are free tools and calibration techniques available online [14, 15]. In any case, doing software calibration is better than not calibrating at all. A problem related to the calibration of output devices is that different devices have different color gamuts, or ranges of color space. If colors are out of gamut, a color mapping algorithm needs to handle the image before it can be rendered properly on an LDR display [4]. However, there are other factors that have a large impact on how images appear on an output device. The output device itself does not solely determine the luminosity of the viewed display; reflected light and ambient light also add to the total luminosity. Thus, we need to make sure we do not expose our screens to unnecessarily harsh conditions, such as bright sunlight, that might result in glare or specular reflections. 3.5.2 Gamma Setup For the tests, an LCD display was calibrated to a gamma of 2.2 and its white point was set to 6500 Kelvin with a hardware calibration tool. Hence, the workflow and the 3D Studio Max gamma setup need to be adjusted accordingly, see Figure 3.1. To do this, we need to Figure 3.1: Gamma setup in 3ds Max.
enable gamma correction and set gamma to 2.2. However, this is not enough if we want to make sure our workflow is truly linear; we need to make sure our input and output settings are correct as well. Input is set to gamma 2.2, since we need to be aware that 8-bit textures usually have a gamma correction applied to them that needs to be nullified before entering the rendering process. HDR images, on the other hand, have a linear gamma and do not need any changes. Thus, we need to override the input gamma with 1.0 every time we use an HDR image. We can assume this will be more convenient than overriding gamma for every 8-bit texture. We do not want our output and rendered image to have any gamma correction applied to them. Setting the output gamma to 1.0 will enable us to save our image in a linear format, which is essential if we want to post-process the image in composition software. Another very important aspect we need to consider is that textures and colors should be viewed in gamma 2.2. Otherwise, the final result will not be predictable and the selected linear color will look significantly different from the rendered gamma corrected color. To fix this, we enabled the ”Affect Material Editor” and ”Affect Color Selectors” options. As seen in Figure 3.2, colors viewed in gamma 2.2 correlate almost perfectly with the rendered colors. Colors in gamma 1.0 only correlate with black or white. This is no surprise, since a change in gamma does not affect the color values 0 or 1. 3.5.3 Render Setup In all renders we used an irradiance map for primary and a light cache for secondary bounces. Subdivision settings are always fixed at level 3. However, the most important thing is to leave color mapping in linear multiply, as seen in Figure 3.3. Linear multiply will simply multiply colors based on their brightness. This is essential, since we do not want to bother with a nonlinear color mapping and alter the linear workflow.
There are several nonlinear alternatives that have certain effects and are popular to use, but they are not viable for a linear workflow. There are a lot of parameters involved in the render process, and we will not go through all of them here since most have no significance for the linear workflow. The color mapping alternatives are further explained in The V-Ray Documentation. Figure 3.3: Setup for linear color mapping. Figure 3.2: Five spheres rendered with a gamma correction of 2.2 applied. For comparison, the uppermost row of color bars is in gamma 1.0 and the bottom row is in gamma 2.2.
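The difference between the linear multiply operator and a nonlinear alternative can be sketched as below. The exponential form 1 − e^(−c) is a common textbook operator used here purely for illustration; V-Ray's exact exponential mapping may differ:

```python
import math

def linear_multiply(c, mult=1.0):
    """Scales colors by brightness only; float values and linear gamma are preserved."""
    return c * mult

def exponential_map(c):
    """A common exponential operator: compresses any positive value into [0, 1)."""
    return 1.0 - math.exp(-c)

hot = 3.0                    # an out-of-gamut highlight
kept = linear_multiply(hot)  # 3.0: still available for post-processing
mapped = exponential_map(hot)  # ~0.95: clamped into displayable range
```

This is why linear multiply is the only viable choice for a linear workflow: the exponential operator discards the out-of-gamut information that post-production may still need.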
4 Results 4.1 Overview In Section 3.1 we presented three tasks that would yield enough image data for discussing the advantages and disadvantages of our workflow. The complexity of the tasks increases in the following order: 1. Gamma correction test 2. Texture comparison 3. Exterior comparison The results of the first test in Section 4.2 show that images without gamma correction do not have realistic lighting, and shadows are unnaturally dark even when we use exponential color mapping. The predictability of colors was also increased when we applied gamma correction. The second test in Section 4.3 compares the texture quality between images rendered in a nonlinear and a linear workflow. A group of unbiased subjects stated that images rendered in a linear workflow had better lighting and texture quality. Textures were also more predictable in the image rendered in a linear workflow. In Section 4.4 we increased complexity by rendering a complete exterior scene with advanced materials and geometry. The test was a comparison between a scene rendered with a linear workflow and a physical environment versus one rendered with a nonlinear workflow and a non-physical environment. Yet again, a group of unbiased subjects judged the realism and quality of the rendered images. They stated that the image rendered with a linear workflow in a physical environment had much better lighting quality, increased realism and an increased amount of visible details. A summary of the results shows that a workflow where gamma is used correctly together with a physical workflow is beneficial for processing realistic images. 4.2 Gamma Correction Test 4.2.1 The Scene Setup The first test is just a basic investigation that illustrates what could happen if you do not use a linear workflow and neglect the fact that the relationship between the input and the output of an LCD monitor is not linear. The test scene is composed of simple geometry, a V-Ray sun, a physical camera and sky.
The sun is set perpendicular to the plane and thus simulates daylight. There is one black sphere, four white toruses and one yellow sphere. All materials in this scene are diffuse to minimize any view-dependent interference. The spheres are control objects to keep track of whether we have correct exposure; if a sphere is underexposed, it will not be yellow on top (rgb: 255, 255, 0). Sun intensity was set to 1.0 and the exposure settings were set to the following: Exposure settings f-stop: 8 Shutter speed: 100 s-1 ISO: 72 4.2.2 Rendered Images The image in Figure 4.1 is rendered with the gamma settings previously mentioned in Section 3.3. As can be seen, the image looks unnaturally dark and underexposed. However, the image is not incorrectly rendered; it is in fact an image in linear gamma, which explains its dark appearance. Now, many artists do not actually know about gamma correction and start tweaking the lighting and materials to make the rendered image appear bright again [24]. With increased sun intensity, materials in the rendered image (Figure 4.2) are overexposed, but we notice that some parts previously in the dark are now fairly visible compared to the image in Figure 4.1. We also noticed an increase in render time from roughly 1m 32s to 1m 48s. This increase is due to the increased sun intensity, which causes more reflections and hence more calculations during the rendering process. Another inevitable fact is that the sky has become very bright due to the intensity of the sun. The increase of
sun intensity means an increase in the skylight intensity, making the shadows tinted blue. Of course, this is not a realistic image at all, and a common response is to apply a tone mapping operator to map values outside of gamut into a low dynamic range domain. Exponential tone mapping was applied to the image in Figure 4.3 during the rendering process, which indeed reduced and minimized overexposure. At first, this might seem like a good idea, since the rendered image looks a lot better and we can now see a lot more details in the shady areas. Note, however, that we have unnaturally dark shadows with very little ground reflection underneath the object, which is not a realistic condition in an exterior scene with daylight. Additionally, a strong bluish tint is still present because of the skylight intensity. Thus, an increase of sun intensity combined with exponential color mapping is not the best solution to render a realistic image. However, brighter materials might be considered a viable option, since they increase interreflection and thus brighten the unnaturally dark shadows without affecting the sky. However, as in the previous example, the render in Figure 4.4 was overexposed when it was rendered with linear color mapping. Additionally, the render time increased from 1m 36s to 1m 45s, which again is due to an increase in the number of reflections. By applying an exponential color mapping operator, we managed to reduce overexposure in the image (Figure 4.5) with a similar result as in the image (Figure 4.3) with increased sun intensity. However, increased interreflection was not enough, and the issue regarding unnaturally dark shadows still persisted. The increase in interreflections was in fact creating more trouble by making already bright materials appear to glow or even emit light. This is highly noticeable where the white toruses are in contact with each other. Another obvious fact is that the sky has become very dark and does not simulate daylight anymore.
The scene representation is irrelevant for exponential color mapping, since it is a mathematical operator. Hence, it will reduce the brightness of the sky whether it is physically correct or not. There are numerous methods that lead to unrealistic images if we do not use gamma correction. By applying gamma correction, the render in Figure 4.6 does not Figure 4.1: No gamma correction. Color mapping is set at linear multiply and sun intensity is set to 1.0. Figure 4.2: No gamma correction. Color mapping is set to linear multiply and sun intensity is set to 3.0. Figure 4.3: No gamma correction. Color mapping is set to exponential and sun intensity is set to 3.0.
only appear brighter. It also appears to be correctly exposed, and the shadows seem more natural than in the previous examples. Interreflection and ground reflection are clearly present, and the sky background also appears to be fairly exposed for daylight conditions. 4.3 Texture Comparison 4.3.1 The Scene Setup The scene setup is yet again very simple, and we use a white and a black sphere as control objects to validate that our exposure settings are not completely off. The wooden sphere and the ground are the objects of interest. The sun intensity is set at 1.0 and we used the same exposure settings as in the previous experiment: Exposure settings f-stop: 8 Shutter speed: 100 s-1 Film ISO: 72 4.3.2 Prediction The quality of textures needs a clarification. We focused our attention on how textures appear in the rendered images versus the original texture image. Our aim was to have little or no difference between the original texture image and its appearance in rendered images. One prediction is that the render in Figure 4.7, which is processed in a linear workflow, will be perceived to have a higher degree of texture quality. Another prediction is that this image should be perceived to have a higher degree of shadow quality due to its similarity with the previous gamma correction experiment in chapter 3. However, perceived brightness is more difficult to predict, since it is a psychophysical phenomenon and is very dependent on its surroundings. Colors and three-dimensional objects are just two factors among many [1]. But we could assume that exponential color mapping will produce a brighter image but with less contrast and dynamic range. Figure 4.4: No gamma correction. Color mapping is set to linear multiply. Textures are twice as bright. Figure 4.6: Gamma correction of 2.2 is applied. Color mapping is set to linear multiply. Figure 4.5: No gamma correction. Color mapping is set to exponential. Textures are twice as bright.
4.3.3 Rendered Images The results from the texture comparison show that the image rendered in a linear workflow was perceived to have better texture and shadow quality. However, whether any image had a higher degree of realism was unclear, because many subjects stated that the scene was unrealistic to begin with. Table 4.1 displays how many times each image was chosen for each question, and the following questions were asked: Questions 2. In which render does the wooden sphere best match the wood texture? 3. In which image does the ground best match the ground texture? 4. Does any image appear to be brighter than the other? 5. Does any of the images appear to have a higher texture quality? 6. Does any of the images appear to have a higher shadow quality? 7. Does any of the images appear to be more realistic? The image in Figure 4.8 was an attempt to achieve a realistic image in a nonlinear workflow. Exponential color mapping was applied to decrease overexposure, just as we did in Section 4.1. However, this time we tried to tweak both textures and lighting to achieve similar brightness between the two renders without lowering the white (rgb: 255, 255, 255) color of the sphere. The image in Figure 4.7 was perceived to have textures with more details, better quality and vivid colors. For example, one person stated: Figure 4.7: Linear workflow. Figure 4.8: Nonlinear workflow, with exponential color mapping. Figure 4.9: Ground texture. Figure 4.10: Wood texture. Question 2 3 4 5 6 7 First image, Figure 4.7 5 5 1 5 5 4 Second image, Figure 4.8 1 1 5 1 1 2 Table 4.1: Answers for the texture comparison test. See appendix for the complete interview form.
”The first image [Figure 4.7] has more details in the grooves” An evident correlation is that textures matching the original texture image were considered to have a higher degree of quality. According to our unbiased group, the image rendered in a nonlinear workflow with exponential color mapping was perceived to have dull and washed-out textures compared to the image processed in a linear workflow. Other relevant quotes: ”The first image [Figure 4.7] seems sharper and has higher texture quality” ”Reality is not sharp, thus the second image [Figure 4.8] is more realistic” Even though one could argue that the texture in the first image (Figure 4.7) might be too vivid and colorful to be natural, it is not an issue. The solution is simple: we should select a natural-looking texture in the first place. Thus, we can reduce any need for guessing the final appearance of a texture in a three-dimensional render. Several quotes refer to the unnaturally dark shadows, for example: ”The shadows in the second image [Figure 4.8] are too dark” Just as we predicted, many of our subjects perceived the image in Figure 4.7 to have a higher degree of shadow quality. Some commented that the shadows in the second image, Figure 4.8, were unnatural and too dark to be realistic, which confirms that physically correct lighting is required for realistic shadows. There were a few people, however, who appreciated the unnaturally dark shadows. Hence, the personal preference for contrast might differ slightly between individuals. ”The second image [Figure 4.8] has higher shadow quality as it is more compact” The second image (Figure 4.8) was perceived by many as brighter. However, some answered that even though the first image has slightly brighter highlights, the second image has a more uniform brightness.
4.4 Exterior Comparison 4.4.1 The Scene Setup This test is an experiment where we started off with an already finished exterior scene that was neither processed in a linear workflow nor with V-Ray's physical environment. The old scene had the color mapping operator set to Reinhard. This preserves slightly more saturation than a regular exponential color mapping. The scene, materials and textures were then processed in a linear workflow, and the scene setup was converted into V-Ray's physical environment. That will indeed give us a few extra variables to keep track of compared to the previous experiments; first of all, the V-Ray sun is much more intense than the standard sun and is actually as intense as the real sun. The sun intensity was set to 1.0, but because the sun was set lower in the sky we had to change our exposure settings to the following: Exposure settings f-stop: 8 Shutter speed: 75 s-1 Film ISO: 108 4.4.2 Prediction What we aimed for was to improve the quality of shadows, textures and materials to achieve an increase in realism. As before, a group of unbiased subjects expressed their opinions about several image quality aspects. However, this time we did not let them see any original textures related to the image, removing any possibility that this would have influenced the perceived image quality. We predicted that our results would be perceived differently compared to our previous experiment in Section 4.2, since this scene is much more complex. Materials and geometry might distract the viewer. Note, however, that we do not claim these renders to be exceptionally beautiful; we are merely interested in the differences between the end results of the two workflows.
Figure 4.13: Grass. Figure 4.14: Wall. Figure 4.15: Wood. Figure 4.16: Asphalt. Figure 4.12: Nonlinear workflow without V-Ray’s physical entities. Figure 4.11: Linear workflow with V-Ray’s physical entities.
4.4.3 Rendered Images The results from this test show us that a physically correct workflow is indeed very important for a realistic render. Table 4.2 shows how many times an image was chosen for each question, and the following questions have their answers displayed in the table: Questions 2. Can any image be considered to have more details? 3. Can any image be considered to have a higher degree of shadow quality? 4. Which image do you think has a higher degree of realism? The image in Figure 4.11 was rendered in a linear workflow and V-Ray's physical environment. The image in Figure 4.12 was rendered in a nonlinear workflow and did not use any of V-Ray's physical entities. The render in Figure 4.11 was perceived to be more detailed by all subjects in the group, and this is highly noticeable on the sidewalk and on reflective materials such as windows. Apparently, the effect of reflections and bump mapping is more evident in a physical workflow. ”Grass and bushes are more distinct in the first image [Figure 4.11]” The image rendered in a nonlinear workflow was perceived to have less depth than the image rendered in a linear workflow. It was also perceived to have unnaturally bright colors, almost as if they were emitting light. ”Colors in the second image [Figure 4.12] look like they are exaggerated and emit light.” If we observe the RGB histograms, the image rendered in a nonlinear workflow has very sharp spikes in the bright end of the spectrum. The histogram for the image rendered in a linear workflow is dominated by mid tones and does not have any spikes. Luminance is evenly distributed compared to the nonlinear image. This results in a higher degree of contrast and depth. ”The first image [Figure 4.11] has a greater difference between light and dark areas.” Many subjects mentioned that the image rendered in a linear workflow had much better lighting.
Just as in the previous experiment, it is apparent that the four original textures, Figures 4.13 - 4.16, are much more predictable in a linear workflow. Materials in the image rendered in a nonlinear workflow were perceived to be washed out, and it is evident that they are much brighter than the original textures. Another benefit of using a physical workflow is that the time of day is something we can mimic. As we can see by the shadow of the tree, the sun is set rather low in the sky. This would indicate either an early evening or an early morning. Several subjects answered that it appeared to be daylight in the image rendered without a physical environment, which does not coincide with the position of the sun. This was of course not the case with the image rendered in a physical environment. One respondent remarked: ”It is early evening in the first image [Figure 4.11] and midday in the second image [Figure 4.12]” Both the aspect of using a linear workflow and changing into V-Ray's physical environment certainly had an effect, and we can assume this setup will be our preferred choice for our workflow. Question 2 3 4 First image, Figure 4.11 5 4 5 Second image, Figure 4.12 - 2 1 Table 4.2: Answers for the exterior comparison test. See Appendix, Section 7.3, for the complete interview form.
Figure 5.1: A schematic overview of our proposed linear workflow in relation to gamma space. 5 Conclusions & Discussions 5.1 The Schematics Our schematic proposal for a linear workflow in relation to gamma is fairly simple. The aim was to focus our attention on where and when a particular process is in linear or nonlinear gamma space. As we can see in our schematics below, Figure 5.1, texture maps and colors (as seen in the color selectors) are in nonlinear gamma and need to be nullified into linear gamma before they are imported into the linear workflow. This is very important if our textures and colors are going to behave physically correctly in the rendering process. Another important advantage is that we can see the color selector and materials in nonlinear gamma. Of note is that HDRI maps already have linear gamma and do not need to be changed before they are imported into the linear workflow. If we want to simulate the physical world, we need to process (render) our scene in linear-light encoding (gamma 1.0), and that is exactly how V-Ray operates. Although processing in linear light is physically accurate, it is not suitable for computations involving human perception. Many graphics applications such as Photoshop or the GNU Image Manipulation Program (GIMP) process in nonlinear gamma, and thus images created from scratch in those applications will appear perceptually uniform. After processing, the rendered image can be viewed in the Frame Buffer (FB). By default, it is viewed in linear space, but if we apply gamma correction we can view it perceptually uniform. However, the source will still be in linear gamma, and thus we do not need to worry about disturbing the linear workflow. We can sum up three different options regarding how to deal with gamma correction in our workflow. 1. The first option is to override gamma through color mapping during the actual render process.
This rules out any possibility of storing the image in linear gamma and is not ideal for post-production. 2. The second option is to apply gamma correction in the frame buffer. This is generally better, as it leaves us the possibility to choose between linear and nonlinear gamma. 3. The third option is to apply gamma correction in a post-process tool. This might be the best option, as it preserves all the information. However, as mentioned earlier, this will increase the need for storage capacity. If we continue with post-processing, we will recognize that most applications with a physically correct
environment work with linear gamma. However, even though many applications process in linear gamma, we cannot assume that our output will be in linear gamma or high dynamic range. That is why a correct gamma setup and choice of image format are so essential. When we are finished processing, the image needs to be gamma corrected to be perceptually pleasing on a low dynamic range display.

5.2 HDRI Formats

There are a number of specific HDR image formats available for storing images in linear gamma. An independent validation of several HDR image formats was presented in [3]. They concluded that OpenEXR, XYZE, TIFF LogLuv and RGBE (Radiance HDR) are HDR image formats of very high quality. However, their results showed that OpenEXR had the best reproduction accuracy, albeit with a smaller dynamic range. OpenEXR is also supported by today's high-end graphics cards and most computer graphics software, which is an obvious benefit compared to formats such as XYZE and Pixar Log TIFF.

Below are some results where a typical architectural render, Figure 5.2, was stored in a number of different formats without compression. We did not include any extra channels.

HDRI format                   File size
OpenEXR 32-bit                54.962 KB
OpenEXR half-float 16-bit     27.511 KB
RGBE 32-bit                   11.561 KB
TIFF LogLuv 32-bit             7.967 KB

Even though OpenEXR has better quality than both TIFF LogLuv and RGBE, it is apparent that OpenEXR requires a lot more storage capacity. Even with lossy compression, it is unlikely that we could reduce the file size down to the level of TIFF LogLuv. One benefit of using OpenEXR is that it supports an arbitrary number of channels, such as Z-buffer, motion blur, etc.

However, we are a bit confused concerning the actual bit range of OpenEXR when saved from 3D Studio Max. The half-float 16-bit format of OpenEXR is recognized by Photoshop as a 32-bit format and thus enables exposure control without any loss of information.
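As a quick sanity check on the OpenEXR figures in the table above (our own arithmetic, not part of the thesis; the sizes appear to use "." as a thousands separator), the uncompressed sizes follow directly from pixel count, channel count and bytes per sample:

```python
def raw_size_kb(width, height, channels, bytes_per_sample):
    """Uncompressed pixel-data size in binary KB (1 KB = 1024 bytes)."""
    return width * height * channels * bytes_per_sample / 1024

# The 2500x1875 pixel render from Figure 5.2, three channels (RGB),
# with no extra channels included.
full_float = raw_size_kb(2500, 1875, 3, 4)  # 32-bit float samples
half_float = raw_size_kb(2500, 1875, 3, 2)  # 16-bit half-float samples

print(round(full_float))  # 54932, close to the measured 54.962 KB
print(round(half_float))  # 27466, close to the measured 27.511 KB
```

The measured files are slightly larger than the raw pixel data, which is consistent with format header and metadata overhead.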
We are not yet sure why Photoshop treats the file this way, although our guess is that it is recognized as a float format and thus exposure control is enabled. If the half-float 16-bit format of OpenEXR is near the quality of 32-bit, it would save us a lot of space and keep the possibility of arbitrary channels.

Figure 5.2: Size test example, image size 2500x1875 pixels.

5.3 Linear Workflow In Other Tools

Linear workflow is not just something related to V-Ray or certain software. It is relevant in all kinds of graphics processing, all the way from photography to three-dimensional visualization. Linear workflow in photography, or linear RAW workflow since it is often based upon the RAW image format, has been a popular topic among photographers for as long as digital cameras have been available. The principle behind the RAW workflow is to keep every image source in RAW format (linear gamma) until the processing and composition are final [20].

Between software applications there is not much difference regarding the linear workflow principles. The big difference is in how the software handles input and output for textures and colors. In the YafRay
rendering engine, textures are always assumed to be in linear gamma, so nonlinear textures need to have their gamma nullified through pre-processing [27]. This can be quite bothersome and often leads to misunderstandings for artists not familiar with gamma or linear workflow. It could be solved if there were a possibility to control gamma in shaders. There can also be issues concerning whether color selectors are presented in nonlinear or linear gamma. If color selectors are in linear gamma, it will be very difficult to predict color values in a gamma corrected render. This might be one of the reasons why gamma has a bad reputation among artists, and not just in computer science.

5.4 Interior Environments

Interior environments are much more complicated than exterior environments due to the amount of occlusion. It is difficult to expose the interior space brightly enough without overexposing the sky at the same time, short of rendering the scene twice. The common solution in photography is to take two pictures of the same scene with different exposure settings. A common opinion among architects is that white-walled interiors should be unnaturally bright. We therefore decided to create an architectural scene to show off the workflow's capabilities in interiors. The image was stored in 16-bit TIFF, as we did not do any post-production. The resulting image is visible in Figure 5.3, and a large version is attached in Appendix 7.4.

5.5 Conclusions

Even though linear workflow has been considered an advanced topic among artists, this is mainly because there have been so many misunderstandings. One misconception, mentioned in Section 2.2, is that gamma correction is required to compensate for the nonlinearity of a monitor. Another, common among artists, is that linear workflow is a technique for lighting experts only.
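Much of the misunderstanding comes down to a pair of small power functions. Below is a minimal sketch of our own (not from the thesis), assuming a simple pure-power gamma of 2.2 rather than the exact sRGB transfer function, of gamma nullification and correction, and of the color selector issue from Section 5.3:

```python
GAMMA = 2.2  # assumed pure-power display gamma, an approximation of sRGB

def nullify_gamma(value):
    """Decode a nonlinear (display-referred) value in [0, 1] to linear light."""
    return value ** GAMMA

def gamma_correct(value):
    """Encode a linear-light value in [0, 1] for display on a gamma-2.2 monitor."""
    return value ** (1.0 / GAMMA)

# A texture painted as 50% grey (128/255) is in nonlinear gamma;
# nullifying it shows it is only about 22% in linear light.
print(round(nullify_gamma(128 / 255), 3))  # 0.22

# If a color selector hands the renderer 128/255 as a *linear* value,
# the gamma corrected output is far brighter than the artist expected:
print(round(gamma_correct(128 / 255) * 255))  # 186
```

Predicting the on-screen result of a picked value would then require doing this conversion mentally, which is exactly the kind of guesswork a proper gamma setup removes.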
The misconception about lighting experts is partly valid in some cases, since certain applications might not support full control of gamma, which makes the whole process a lot more complicated; the YafRay example in Section 5.3 is one such case. The ideal application would be one where no artist would have to think about gamma correction, neither what it is for nor why it exists. But HDR displays do not appear to be available to the common artist or their clients in the near future, as mentioned in Section 2.4, and tools have no standard user interface for how gamma should be displayed or controlled. Therefore, knowledge about gamma is still going to be required in the future.

Based on the results from our experiments, we can conclude that rendering with a proper gamma setup is indeed very important for realistic imagery. Thus, the linear workflow is not something a computer graphics artist should neglect if the best possible quality is desired. Our schematics show the simplicity of a linear workflow in relation to gamma space, and a key aim of the schematics is to increase the predictability of the perceived colors and textures. A predictable scene will decrease the amount of time spent on "trial and error" renderings. We can also conclude that there is no increase in render time when processing in a linear workflow, even though the image quality is increased.

However, a drawback is that a linear workflow requires a lot of storage capacity if the rendered image is going to go through any kind of advanced post-production. HDR image formats with float values are indeed very large compared to LDR images, see Section 5.2. Without any need for advanced post-processing, a rendered image could easily be stored in a losslessly compressed LDR format such as Portable Network Graphics (PNG) [22]. If we had unlimited storage capacity, the preferred choice would be OpenEXR, as it has the best image quality.
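Storing in an HDR format presumes a later step that maps the unbounded linear values down to a low dynamic range display, i.e. a tone mapping operator. Purely as an illustration of the idea (our own sketch; this is not the specific operator in 3D Studio Max or any particular post-production tool), a simple exponential operator compresses linear luminance into [0, 1):

```python
import math

def exponential_tonemap(luminance, exposure=1.0):
    """Compress a linear HDR luminance in [0, inf) into [0, 1)."""
    return 1.0 - math.exp(-exposure * luminance)

# HDR values above 1.0 no longer clip; they approach white smoothly.
for lum in (0.5, 1.0, 4.0, 16.0):
    print(round(exponential_tonemap(lum), 3))
```

Note that the tone mapped result would still typically be gamma encoded before display.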
Figure 5.3: Interior render, image size 2500x1500. For a large version, see Appendix 7.4.
Hopefully, this thesis will lead to a better understanding of linear workflow and encourage its use in graphics processing.

5.6 Future Work

The schematic overview presented in Section 5.1 is intended as a foundation for explaining a workflow in relation to gamma in a simple manner. It could serve as a model for creating tutorials for different tools and environments. Even though we have proposed a schematic overview of our workflow regarding the three-dimensional rendering process, a more detailed investigation of post-processing and video editing tools would be needed. Although the exponential tone mapping operator featured in 3D Studio Max is not a viable option, there might be tone mapping operators in post-production tools that would yield a higher-quality image. Therefore, an investigation of how, and whether, specific tone reproduction operators might be viable for our workflow should be carried out, especially if we want to use them for advanced post-production and map certain aspects of an HDR image onto an LDR display. Another task would be to build an image library with examples that show off the possibilities of a linear workflow in different environments, settings and tools.

6 Bibliography

6.1 References

[1] Adelson, E. H. Perceptual organization and the judgment of brightness. Science 262, 2042–2044 (1993).
[2] CIE No 17.4, International Lighting Vocabulary (Vienna, Austria: Central Bureau of the Commission Internationale de L'Éclairage).
[3] Debevec, Reinhard, Ward, and Pattanaik. High Dynamic Range Imaging, SIGGRAPH 2004 Course #13.
[4] Hsien-Che Lee. Introduction to Color Imaging Science, 16, Cambridge University Press, 2005.
[5] Hunter, R. and Harold, R. The Measurement of Appearance. Wiley, 2nd ed., 5th printing, 1987.
[6] Nishita, T. and Nakamae, E. Continuous tone representation of three-dimensional objects taking account of shadows and interreflection. ACM SIGGRAPH Computer Graphics, vol. 19, no. 3, p. 23–30, Jul. 1985.
[7] Poynton, C.
The Rehabilitation of Gamma. In Rogowitz, B. E., and Pappas, T. N. (eds.), Human Vision and Electronic Imaging III, Proceedings of SPIE vol. 3299, p. 232–249 (Bellingham, Wash.: SPIE, 1998).
[8] Poynton, C. Digital Video and HDTV: Algorithms and Interfaces, 1st edition, vol. 23, p. 257–259, 2003.
[9] Poynton, C. Frequently Asked Questions about Colour, 2006-11-28.
[10] Seetzen, H., Whitehead, L., and Ward, G. A high dynamic range display system using low and high resolution modulators. In Proc. of the 2003 Society for Information Display Symposium.
[11] Takagi, A., Takaoka, H., Oshima, T., and Ogata, Y. Accurate rendering technique based on colorimetric conception. In Computer Graphics (SIGGRAPH '90 Proceedings) (Aug. 1990), F.
Baskett, Ed., vol. 24, p. 263–272.
[12] Ward, G. High Dynamic Range Imaging. Proc. Ninth Color Imaging Conference, November 2001.

6.2 Online Resources

[13] 3D Studio Max specifications, 31 May 2008. http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=8108755
[14] Black Point Calibration, 31 May 2008. http://www.aim-dtp.net/aim/calibration/blackpoint/crt_brightness_and_contrast.htm
[15] Color Calibration, 24 May 2008. http://www.brilliantprints.com.au/colour_calibration.html
[16] Gamma Correction, 24 May 2008. http://www.happy-digital.com/freebies/tip_gamma.html
[17] Gamma Correction in Computer Graphics, 31 May 2008. http://www.teamten.com/lawrence/graphics/gamma/
[18] Jpeg, 31 May 2008. http://en.wikipedia.org/Jpeg
[19] Lele's tutorial on V-Ray's physical workflow, 31 May 2008. http://www.chaosgroup.com/forums/vbulletin/showthread.php?t=36359&page=29
[20] Linear RAW workflow, 28 May 2008. http://www.aim-dtp.net/aim/techniques/linear_raw/index.htm
[21] Linear Workflow 'Reloaded', 15 May 2008. http://www.gijsdezwart.nl/tutorials.php
[21] OpenEXR, 1 June 2008. http://www.openexr.com
[22] Portable Network Graphics, 1 June 2008. http://en.wikipedia.org/Portable_Network_Graphics
[23] Shake specifications, 1 June 2008. http://www.apple.com/shake/specs.html
[24] Tone and Gamma Correction in 3D, 15 May 2008. http://www.ypoart.com/tutorials/tone/index.php
[25] V-Ray, 1 June 2008. http://www.chaosgroup.com
[26] V-Ray Documentation, 31 May 2008. http://www.V-Ray.us/V-Ray_documentation/
[27] YafRay Linear Workflow Tutorial, 28 May 2008. http://forums.cgsociety.org/showthread.php?t=305727
7 Appendix

A colored version of this thesis in high resolution is available at my website and project blog, http://stefan.svebeck.se

7.1 Tables

Table 7.1: Texture comparison test. With monitor calibration.

Question                    2  3  4  5  6  7
First image, Figure 4.7     5  5  1  5  5  4
Second image, Figure 4.8    1  1  5  1  1  2

Table 7.2: Texture comparison test. No monitor calibration.

Question                    2  3  4  5  6  7
First image, Figure 4.7     8  7  3  7  6  5
Second image, Figure 4.8    -  1  5  1  2  3

Table 7.3: Exterior comparison test. With monitor calibration.

Question                    2  3  4
First image, Figure 4.11    5  4  5
Second image, Figure 4.12   -  2  1

Table 7.4: Exterior comparison test. No monitor calibration.

Question                    2  3  4
First image, Figure 4.11    8  7  8
Second image, Figure 4.12   -  1  -
7.2 Texture Interview Form

Below are two images.

1. First
2. Second

Textures from left to right:

A. Wood
B. Ground

Questions

1. Do you have any experience or previous knowledge about graphics?
2. In which image does the wooden sphere best match the wood texture (A)?
3. In which image does the ground best match the ground texture (B)?
4. Does any image appear to be brighter than the other?
5. Does any of the images appear to have a higher texture quality?
6. Does any of the images appear to have a higher shadow quality?
7. Does any of the images appear to be more realistic?
8. Do you have any other comments about the differences in the rendered images? (optional)
7.3 Exterior Interview Form

Below are two images.

1. First
2. Second

Questions

1. Do you have any experience or previous knowledge about graphics?
2. Can any image be considered to have more details?
3. Can any image be considered to have a higher quality of shadows?
4. Which image do you think has a higher degree of realism?
5. What time of the day do you think it is in each image?
6. Other comments about the differences between the rendered images? (optional)
7.5 Glossary

Brightness (Section 2.1.1): The perceived amount of emitted light.
Color Mapping: See tone mapping.
Contrast (Section 2.1.2): The perceived difference between one or more fields.
Diffuse Light (Section 2.1.6): Omnidirectional light, the opposite of directional light.
Directional Light (Section 2.1.5): Light that comes from one source in one direction only.
Gamma Correction (Section 2.2): Transforms an image from linear gamma to nonlinear gamma to make it appear perceptually pleasing.
Global Illumination (Section 2.1.9): A computer graphics simulation of omnidirectional (diffuse) light.
Ground Reflection (Section 2.1.8): Light bouncing between an object and the ground.
HDRI Formats (Sections 2.3, 5.2): Image formats with high dynamic range.
Interreflection (Section 2.1.7): Light bouncing between objects.
Linear Workflow (Sections 2.5, 5.1): A workflow that operates in linear gamma.
Luminance (Section 2.1.3): A measurable quantity that is proportional to the intensity of light.
Perceptually Uniform (Section 2.1.4): A change in color values corresponds to an equal change in perceived color values.
Tone Mapping (Section 2.1.1): Transforms an image from high dynamic range into low dynamic range.
Tone Reproduction: See tone mapping.