Deferred rendering in WebGL requires techniques to work around limitations of the WebGL specification and uneven browser support. The first pass renders position, normal, and texture data into textures; the second pass calculates and accumulates lighting; the third pass applies materials. Support for multiple render targets, depth textures, and floating-point textures varies with WebGL and browser capabilities. Deferred rendering is practical in WebGL as long as implementations account for these browser and hardware limitations.
2. Main topics
Overview of the deferred rendering method I used.
Techniques needed to use deferred rendering in WebGL.
Practicality of using deferred rendering in WebGL.
3. Background of this research
I am engaged in the open-source project "jThree".
I take part in implementing its WebGL renderer.
This presentation is based on the experience I gained during that development.
4. Summary of deferred rendering: step 1/3
Geometry input: position, normal, and texture coordinate for each vertex.
Vertex shader: transforms the vertices by multiplying matrices.
Rasterizer: invisible fragments are culled here.
Fragment shader: encodes the data needed by later steps into colors.
Outputs: depth texture and normal texture.
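As a concrete illustration, the "encode into color" idea might look like the following simplified first-pass fragment shader (GLSL ES embedded in a TypeScript string). This is a minimal sketch, not jThree's actual shader; depth encoding is omitted and the varying name is hypothetical.

```ts
// Simplified first-pass fragment shader: encodes the view-space normal
// into the color output of the normal texture (illustrative only).
const gBufferFragmentSource = `
precision mediump float;
varying vec3 vViewNormal; // interpolated view-space normal (hypothetical name)

void main(void) {
  vec3 n = normalize(vViewNormal);
  gl_FragColor = vec4(n * 0.5 + 0.5, 1.0); // remap [-1, 1] into [0, 1] for storage
}`;
```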
5. Summary of deferred rendering: step 2/3
Geometry input: just a square geometry, used like a sprite.
Vertex shader: nothing special to do.
Rasterizer: there must be no invisible fragments in this step.
Fragment shader: calculates and sums the light color for each fragment.
Shader inputs:
• Depth texture
• Normal texture
• Light position
• Light direction
• Light color, and so on
Output: light accumulation texture.
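A simplified light-pass fragment shader might look like the following sketch (GLSL ES embedded in a TypeScript string). It is not jThree's shader: it assumes the normal was stored remapped into [0, 1], handles a single directional light, and omits the view-space position reconstruction described in the notes.

```ts
// Simplified light-pass fragment shader: decodes the stored normal and
// accumulates one directional light's diffuse contribution (illustrative only).
const lightPassFragmentSource = `
precision mediump float;
uniform sampler2D uNormalTexture;
uniform vec3 uLightDirection; // view space, normalized (hypothetical uniform)
uniform vec3 uLightColor;
varying vec2 vUv;

void main(void) {
  vec3 n = texture2D(uNormalTexture, vUv).xyz * 2.0 - 1.0; // decode [0,1] -> [-1,1]
  float diffuse = max(dot(n, -uLightDirection), 0.0);
  gl_FragColor = vec4(uLightColor * diffuse, 1.0); // summed via additive blending
}`;
```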
6. Summary of deferred rendering: step 3/3
Geometry input: position, normal, and texture coordinate for each vertex.
Vertex shader: transforms the vertices by multiplying matrices.
Rasterizer: invisible fragments are culled here.
Fragment shader: calculates the final color from the material colors and the light accumulation.
Shader inputs:
• Material colors (diffuse, specular, ambient)
• Light accumulation texture
• And so on
Output: the final rendered result.
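Taken together, the three passes could be driven from JavaScript roughly as follows. This is a minimal sketch, not jThree's actual code: the framebuffers, shader programs, and draw helpers (gBufferFbo, drawGeometry, drawFullScreenQuad, and so on) are hypothetical names standing in for real setup code.

```ts
// Hypothetical handles assumed to be created during setup (not jThree's API).
declare const gl: WebGLRenderingContext;
declare const gBufferFbo: WebGLFramebuffer;   // holds depth/normal textures
declare const lightFbo: WebGLFramebuffer;     // holds light accumulation texture
declare const lights: object[];
declare const geometryPassShader: WebGLProgram;
declare const lightPassShader: WebGLProgram;
declare const materialPassShader: WebGLProgram;
declare function drawGeometry(shader: WebGLProgram): void;
declare function drawFullScreenQuad(shader: WebGLProgram, light: object): void;

function renderFrame(): void {
  // Step 1: encode view-space depth and normals into the G-buffer textures.
  gl.bindFramebuffer(gl.FRAMEBUFFER, gBufferFbo);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  drawGeometry(geometryPassShader);

  // Step 2: draw one full-screen quad per light, additively blended,
  // summing each light's contribution into the accumulation texture.
  gl.bindFramebuffer(gl.FRAMEBUFFER, lightFbo);
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ONE);
  for (const light of lights) {
    drawFullScreenQuad(lightPassShader, light);
  }
  gl.disable(gl.BLEND);

  // Step 3: draw the geometry again, combining material colors with the
  // accumulated light, into the default framebuffer (the canvas).
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  drawGeometry(materialPassShader);
}
```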
8. In case of using WebGL (1/3)
Rendering into multiple textures is not guaranteed to be available.
The fallback is simply to render multiple times into single textures (in this case, the first step needs two passes).
If the browser supports the WebGL extension "WEBGL_draw_buffers", multiple texture outputs can be used in a single rendering step, as sketched below.
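The availability check might look like the following sketch. gl.getExtension is the standard WebGL call; the fallback helper renderSceneInto is a hypothetical stand-in for rendering the scene once per output texture.

```ts
declare const depthTexture: WebGLTexture;
declare const normalTexture: WebGLTexture;
declare function renderSceneInto(target: WebGLTexture): void; // hypothetical

function renderGBuffer(gl: WebGLRenderingContext): void {
  const ext = gl.getExtension("WEBGL_draw_buffers");
  if (ext) {
    // MRT path: one pass writes to two color attachments at once.
    ext.drawBuffersWEBGL([
      ext.COLOR_ATTACHMENT0_WEBGL, // e.g. encoded depth
      ext.COLOR_ATTACHMENT1_WEBGL, // e.g. encoded normal
    ]);
    // ...then draw the scene once with a shader writing gl_FragData[0] and [1].
  } else {
    // No MRT: render the scene once per target texture (two passes).
    renderSceneInto(depthTexture);
    renderSceneInto(normalTexture);
  }
}
```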
10. In case of using WebGL (2/3)
Rendering into a depth texture is not guaranteed to be available.
A color buffer can be used instead, but that requires a somewhat complex encoding.
If the browser supports the WebGL extension "WEBGL_depth_texture", the depth buffer can be rendered directly into a depth texture, as sketched below.
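A sketch of the check and the direct depth texture attachment (assuming the target framebuffer is already bound; texture parameter setup is omitted):

```ts
// Returns a depth texture attached to the bound FBO, or null if the
// WEBGL_depth_texture extension is missing and depth must instead be
// packed into a color texture by the fragment shader.
function tryCreateDepthTexture(gl: WebGLRenderingContext,
                               width: number, height: number): WebGLTexture | null {
  if (!gl.getExtension("WEBGL_depth_texture")) return null;
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0,
                gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                          gl.TEXTURE_2D, tex, 0);
  return tex;
}
```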
12. In case of using WebGL (3/3)
The basic texture format used for output has only 32 bits per pixel.
A normal buffer needs 3 float values per pixel, so each value gets only 10 bits or less (and since encoding a float into 10 bits is difficult, in practice it is 8 bits).
The normal can be reduced to 2 elements because a normal vector is guaranteed to be normalized (its length is 1). The direction of C encodes the direction of N_xy, and the length of C encodes N_z:
$$C_{xy} = \frac{N_{xy}}{\lVert N_{xy} \rVert}\left(\frac{N_z}{2} + \frac{1}{2}\right) \qquad \text{(Compose)}$$

$$N_z = 2\,\lVert C_{xy} \rVert - 1, \qquad N_{xy} = \frac{C_{xy}}{\lVert C_{xy} \rVert}\,\sqrt{1 - N_z^{\,2}} \qquad \text{(Decompose)}$$
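In code, the compose and decompose steps above might look like this plain TypeScript sketch (in practice this runs in the fragment shader; the function names are illustrative). Note that the formula degenerates when N_xy is near zero, so the sketch picks an arbitrary direction there.

```ts
type Vec2 = [number, number];
type Vec3 = [number, number, number];

// Compose: pack a unit normal into 2 components. The direction of C
// encodes the direction of N_xy; the length of C encodes N_z.
function composeNormal([nx, ny, nz]: Vec3): Vec2 {
  const len = Math.hypot(nx, ny);
  const mag = nz * 0.5 + 0.5;           // |C| lies in [0, 1]
  if (len < 1e-6) return [mag, 0];      // pole (0, 0, ±1): any direction works
  return [(nx / len) * mag, (ny / len) * mag];
}

// Decompose: recover the unit normal from the 2 stored components.
function decomposeNormal([cx, cy]: Vec2): Vec3 {
  const len = Math.hypot(cx, cy);
  const nz = len * 2 - 1;
  const s = len > 1e-6 ? Math.sqrt(Math.max(0, 1 - nz * nz)) / len : 0;
  return [cx * s, cy * s, nz];
}
```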
13. In case of using WebGL (3/3, continued)
If the browser supports the WebGL extension "OES_texture_float", 128 bits per pixel can be used (a 32-bit float for each of the 4 elements per pixel), as sketched below.
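A sketch of the float texture path (rendering into such a texture may additionally require checking framebuffer completeness, which is omitted here):

```ts
// Returns a 128-bit-per-pixel RGBA float texture, or null if the
// OES_texture_float extension is unavailable and the packing from the
// previous slide must be used instead.
function tryCreateFloatTexture(gl: WebGLRenderingContext,
                               width: number, height: number): WebGLTexture | null {
  if (!gl.getExtension("OES_texture_float")) return null;
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // Passing gl.FLOAT as the type gives four 32-bit float channels per pixel.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.FLOAT, null);
  return tex;
}
```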
15. Practicality of deferred rendering in WebGL (conclusion)
When we use OpenGL in a local environment, we should consider:
Graphics board performance
Which GL versions are supported
16. Practicality of deferred rendering in WebGL (conclusion)
When we use WebGL, we should consider:
Graphics board performance
Which GL versions are supported
Which WebGL extensions are supported by each browser (much like CSS or JavaScript features)
Editor's Notes
Hello. Today I will talk about "Practicality of Deferred Rendering in WebGL".
These days, not only personal computers but also smartphones are capable of using WebGL.
Thanks to this compatibility, WebGL can be a common way to achieve multi-platform rendering while still using graphics board features.
Actually, the theory of deferred rendering is not so difficult, but it needs many features that are not guaranteed to be available in WebGL.
Therefore, I will talk about how practical deferred rendering is in WebGL.
These are the topics of this presentation. There are three:
an overview of the deferred rendering method I used,
the techniques needed to use deferred rendering in WebGL,
and the practicality of using deferred rendering in WebGL.
First of all, I will talk about the background of this research.
I am hosting and engaged in an open-source project, "jThree". It is a library for 3DCG in web browsers.
jThree has a lot of good features that other libraries don't have.
But that is not today's topic, so I will focus on deferred rendering in WebGL, which is the biggest piece of work I did for this project.
Now I will describe the deferred rendering steps I used in this project. This is one variant of deferred rendering, generally called light pre-pass deferred rendering.
This variant has 3 steps for rendering to the display.
The first step needs 3 buffers as input: position vectors, normal vectors, and texture coordinate vectors.
These 3 parameters exist for each vertex in the geometry and are transformed with matrices in the vertex shader.
Then the rasterizer discards fragments that are not visible and passes the rest to the fragment shader.
After that, the parameters are written into textures as depth and normals in view space.
This figure shows the second step.
In this step, the input geometry is just a square, used like a sprite.
In the vertex shader there is nothing special to do, because this geometry is already in clip space.
The geometry then passes through the rasterizer, but there must be no invisible fragments at this point.
The parameters are split per fragment in the rasterizer.
After the rasterizer, the fragment shader calculates the light color and intensity from the depth texture and normal texture produced in the 1st step.
The position in view space is reconstructed from the depth texture and the inverted projection matrix.
The fragment shader can then calculate the position and normal for each pixel visible from the camera.
Finally, the fragment shader writes the light colors into the light accumulation texture.
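The reconstruction mentioned above might look like the following sketch, using the gl-matrix library. It assumes the stored depth has already been mapped back to a clip-space (NDC) z value; in practice this runs in the fragment shader.

```ts
import { mat4, vec4 } from "gl-matrix";

// Unproject an NDC-space point back to view space with the inverted
// projection matrix.
function reconstructViewPosition(ndcX: number, ndcY: number, ndcZ: number,
                                 invProjection: mat4): vec4 {
  const clip = vec4.fromValues(ndcX, ndcY, ndcZ, 1.0);
  const view = vec4.create();
  vec4.transformMat4(view, clip, invProjection);
  // The perspective divide restores the actual view-space position.
  return vec4.scale(view, view, 1.0 / view[3]);
}
```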
This is the final step of this type of deferred rendering.
The input is the same as in the first step: position, normal, and texture coordinates.
The vertices are transformed in the vertex shader, the same as in the 1st step.
The fragment shader receives the material colors and the light accumulation texture as variables.
The material color and light color are then combined to output the actual result.
That is the entire set of steps I used in this project.
I will show you an example I made.
This is my demo showing the buffers I described before.
Up to here, I have talked about general deferred rendering. Now I will talk about the problems we should be concerned about when we use deferred rendering with WebGL.
Both DirectX and OpenGL support rendering into multiple textures. With this feature, we can draw the normal buffer and the depth buffer in one pass.
But WebGL is not guaranteed to support this feature. In that case, the solution is quite easy: just draw multiple times.
This has a huge effect on performance. In this deferred rendering, we use only a depth texture and a normal texture in the 1st step.
But if we need global illumination, it also needs an albedo texture and a roughness texture.
However, we can use this feature in WebGL through the WebGL extension "WEBGL_draw_buffers".
It sounds good, but its compatibility is not so high. If this feature is not fully supported, deferred rendering is not such an efficient way to do lighting.
In OpenGL, we can get the depth texture from the render buffer directly. This means we don't need to draw it ourselves, because the depth buffer is already used internally for the Z depth test.
But this feature is also not guaranteed to be supported in WebGL.
When it is not supported, we can compute the depth in the fragment shader and convert it into a color.
The solution is simple, but the color conversion is a somewhat complex algorithm.
However, this feature can also be used when the WebGL extension "WEBGL_depth_texture" is supported.
But its compatibility is not so high.
An output texture in WebGL basically has 32 bits per pixel, i.e., four 8-bit color channels.
In the normal buffer we need 3 float values per pixel, so we can use at most 10 bits per value.
But that is still too small.
We can use the simple formula shown earlier to compress the normal vector into 2 components.
The normal vector is guaranteed to be normalized, so the formula can compress it into a 2-dimensional vector.
However, this can also be solved by the extension "OES_texture_float".
With that extension, we can use a 128-bit-per-pixel texture, i.e., 32 bits for each element.
Fortunately, its compatibility is high enough to rely on.
Let me conclude this presentation.
When we use OpenGL in a local environment, we should consider graphics board performance and the supported GL version.
However, when we use WebGL, we also need to consider WebGL extension compatibility, much like new CSS or JavaScript features.
WebGL has native power like OpenGL and is as multi-platform as web development.
But WebGL also has the downsides of the web.
It is a good technology, but I think it needs more time before it can be used commonly.