OpenGL 4.4 provides new features for accelerating scenes with many objects, as typically found in professional visualization markets. This talk provides details on how to use these features and their effect on real-life models. Furthermore, we will showcase how more of the work of rendering a scene, such as efficient occlusion culling or matrix calculations, can be off-loaded to the GPU.
Video presentation here: http://on-demand.gputechconf.com/gtc/2014/video/S4379-opengl-44-scene-rendering-techniques.mp4
OpenGL NVIDIA Command-List: Approaching Zero Driver Overhead (Tristan Lorach)
This presentation introduces a new NVIDIA extension called Command-list.
The purpose of this presentation is to explain the basic concepts of how to use it and to show its benefits.
The sample I used for the talk is here: https://github.com/nvpro-samples/gl_commandlist_bk3d_models
To try it out, use the pre-release driver 347.09:
http://www.nvidia.com/download/driverResults.aspx/80913/en-us
Ever wondered how to use modern OpenGL in a way that radically reduces driver overhead? Then this talk is for you.
John McDonald and Cass Everitt gave this talk at Steam Dev Days in Seattle on Jan 16, 2014.
Optimizing the Graphics Pipeline with Compute, GDC 2016 (Graham Wihlidal)
With further advancement in the current console cycle, new tricks are being learned to squeeze the maximum performance out of the hardware. This talk will present how the compute power of the console and PC GPUs can be used to improve the triangle throughput beyond the limits of the fixed function hardware. The discussed method shows a way to perform efficient "just-in-time" optimization of geometry, and opens the way for per-primitive filtering kernels and procedural geometry processing.
Takeaway:
Attendees will learn how to preprocess geometry on-the-fly per frame to improve rendering performance and efficiency.
Intended Audience:
This presentation targets seasoned graphics developers. Experience with DirectX 12 and GCN is recommended, but not required.
The goal of this session is to demonstrate techniques that improve GPU scalability when rendering complex scenes. This is achieved through a modular design that separates the scene graph representation from the rendering backend. We will explain how the modules in this pipeline are designed and give insights into implementation details, which leverage the GPU's compute capabilities for scene graph processing. Our modules cover topics such as shader generation for improved parameter management, synchronizing updates between the scene graph and the rendering backend, as well as efficient data structures inside the renderer.
Video here: http://on-demand.gputechconf.com/gtc/2013/video/S3032-Advanced-Scenegraph-Rendering-Pipeline.mp4
A technical deep dive into the DX11 rendering in Battlefield 3, the first title to use the new Frostbite 2 Engine. Topics covered include DX11 optimization techniques, efficient deferred shading, high-quality rendering and resource streaming for creating large and highly-detailed dynamic environments on modern PCs.
A description of the next-gen rendering technique called Triangle Visibility Buffer. It handles up to 10x-20x more geometry than deferred rendering, at much higher resolutions. It also generally aligns better with the memory access patterns of modern GPUs than deferred lighting variants such as clustered deferred lighting.
presented at SIGGRAPH 2014 in Vancouver during NVIDIA's "Best of GTC" sponsored sessions
http://www.nvidia.com/object/siggraph2014-best-gtc.html
Watch the replay that includes a demo of GPU-accelerated Illustrator and several OpenGL 4 demos running on NVIDIA's Tegra Shield tablet.
http://www.ustream.tv/recorded/51255959
Find out more about the OpenGL examples for GameWorks:
https://developer.nvidia.com/gameworks-opengl-samples
SIGGRAPH 2016 - The Devil is in the Details: idTech 666 (Tiago Sousa)
A behind-the-scenes look at the latest renderer technology powering the critically acclaimed DOOM. The lecture covers how the technology was designed to balance visual quality against performance. Numerous topics are covered, among them details of the lighting solution, techniques for decoupling costs and frequency, and GCN-specific approaches.
NVIDIA OpenGL and Vulkan Support for 2017 (Mark Kilgard)
Learn how NVIDIA continues improving both Vulkan and OpenGL for cross-platform graphics and compute development. This high-level talk is intended for anyone wanting to understand the state of Vulkan and OpenGL in 2017 on NVIDIA GPUs. For OpenGL, the latest standard update maintains the compatibility and feature-richness you expect. For Vulkan, NVIDIA has enabled the latest NVIDIA GPU hardware features and now provides explicit support for multiple GPUs. And for either API, NVIDIA's SDKs and Nsight tools help you develop and debug your application faster.
NVIDIA booth theater presentation at SIGGRAPH in Los Angeles, August 1, 2017.
http://www.nvidia.com/object/siggraph2017-schedule.html?id=sig1732
Get your SIGGRAPH driver release with OpenGL 4.6 and the latest Vulkan functionality from
https://developer.nvidia.com/opengl-driver
Bindless Deferred Decals in The Surge 2 (Philip Hammer)
These are the slides for my talk at Digital Dragons 2019 in Krakow.
Update: the recording is now online on YouTube:
https://www.youtube.com/watch?v=e2wPMqWETj8
Talk by Yuriy O’Donnell at GDC 2017.
This talk describes how Frostbite handles rendering architecture challenges that come with having to support a wide variety of games on a single engine. Yuriy describes their new rendering abstraction design, which is based on a graph of all render passes and resources. This approach allows implementation of rendering features in a decoupled and modular way, while still maintaining efficiency.
A graph of all rendering operations for the entire frame is a useful abstraction. The industry can move away from “immediate mode” DX11 style APIs to a higher level system that allows simpler code and efficient GPU utilization. Attendees will learn how it worked out for Frostbite.
In this AMD technology presentation from the 2014 Game Developers Conference in San Francisco (March 17-21), Bill explains some of the ways the vertex shader can be used to improve performance by taking a fast path through it rather than generating vertices with other parts of the pipeline. Check out more technical presentations at http://developer.amd.com/resources/documentation-articles/conference-presentations/
Secrets of CryENGINE 3 Graphics Technology (Tiago Sousa)
In this talk, the authors give an overview of the deferred lighting approach used in CryENGINE 3, along with an in-depth description of the many techniques used. Original file and videos at http://crytek.com/cryengine/presentations
In this technical presentation, Johan Andersson shows how the Frostbite 3 game engine uses the low-level graphics API Mantle to deliver significantly improved performance in Battlefield 4 on PC and in future games from Electronic Arts. He goes through the work of bringing an advanced existing engine over to an entirely new graphics API, the benefits and concrete details of doing low-level rendering on PC, and how Mantle fits into the architecture and rendering systems of Frostbite. Advanced optimization topics such as parallel dispatch, GPU memory management, multi-GPU rendering, and async compute and async DMA are covered, along with experiences of working with Mantle in general.
Game engines have long been at the forefront of taking advantage of the ever-increasing parallel compute power of both CPUs and GPUs. This talk covers how parallel compute is utilized in practice on multiple platforms in the Frostbite game engine today, and how we think the industry's parallel programming models, hardware, and software should evolve over the next 5 years to help us make the best games possible.
This session presents a detailed, programmer-oriented overview of our SPU-based shading system implemented in DICE's Frostbite 2 engine and how it enables more visually rich environments in BATTLEFIELD 3 and better performance than traditional GPU-only renderers. We explain in detail how our SPU tile-based deferred shading system is implemented, and how it supports rich material variety, high-dynamic-range lighting, and large numbers of light sources of different types through an extensive set of culling, occlusion, and optimization techniques.
Graphics Gems from CryENGINE 3 (SIGGRAPH 2013) (Tiago Sousa)
This lecture covers rendering topics related to Crytek's latest engine iteration, the technology which powers titles such as Ryse, Warface, and Crysis 3. Among the covered topics, Sousa presented SMAA 1TX, an update featuring a robust and simple temporal antialiasing component, as well as performant and physically-plausible camera-related post-processing techniques such as motion blur and depth of field.
The Titanium OpenGL Module (Ti.OpenGL) opens the door to sophisticated graphics development for the Titanium programmer by exposing the entire OpenGL ES 1 and ES 2 graphics API to the Ti Javascript environment. The Ti.OpenGL view extends Ti.UI.View with a graphics rendering canvas that is easily managed within the Titanium view hierarchy. In addition, the module provides a databuffer object to hold large datasets and mitigate any inefficiency that arises from modeling datasets in Javascript.
This talk demonstrates the pragmatics of building sophisticated graphics displays using Ti.OpenGL in both ES 1 and ES 2. It will reveal several reusable design abstractions that take advantage of features of the Javascript environment. Among the topics to be covered are:
- OpenGL basic setup and animation
- Use of databuffers for attribute and index arrays
- Connecting databuffers and vertex buffer objects (VBOs)
- Using external resources (textures, shaders, etc.)
A set of mobile game optimization best practices. This presentation extensively covers PowerVR series of GPUs from Imagination Technologies and iOS, however the majority of recommendations can be applied to other GPUs and mobile operating systems.
OpenGL 4.4 - Scene Rendering Techniques
1. OpenGL 4.4
Scene Rendering Techniques
Christoph Kubisch - ckubisch@nvidia.com
Markus Tavenrath - matavenrath@nvidia.com
2. 2
Scene complexity increases
– Deep hierarchies, traversal expensive
– Large objects split up into a lot of little
pieces, increased draw call count
– Unsorted rendering, lot of state changes
CPU becomes bottleneck when
rendering those scenes
Removing SceneGraph traversal:
– http://on-demand.gputechconf.com/gtc/2013/presentations/S3032-Advanced-Scenegraph-Rendering-Pipeline.pdf
Scene Rendering
models courtesy of PTC
3. 3
Harder to render "Graphicscard" efficiently than "Racecar"
Challenge not necessarily obvious
650 000 Triangles
68 000 Parts
~ 10 Triangles per part
3 700 000 Triangles
98 000 Parts
~ 37 Triangles per part
(diagram: frame timeline where CPU-side App/GL work dominates while the GPU sits mostly idle)
4. 4
Avoid data redundancy
– Data stored once, referenced multiple times
– Update only once (less host to gpu transfers)
Increase GPU workload per job (batching)
– Further cuts API calls
– Less driver CPU work
Minimize CPU/GPU interaction
– Allow GPU to update its own data
– Low API usage when scene is changed little
– E.g. GPU-based culling
Enabling GPU Scalability
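A minimal sketch of that last idea (GPU-based culling feeding the draw stream), assuming GL 4.3 multi-draw indirect: the record layout below is the one the GL specification defines for glMultiDrawElementsIndirect, while build_commands and its field choices are illustrative, not the framework's actual code.

```c
#include <stddef.h>
#include <stdint.h>

/* The GL 4.3 record consumed by glMultiDrawElementsIndirect. A GPU
   culling pass can write these records itself, so per-object
   visibility never has to round-trip through the CPU. */
typedef struct {
    uint32_t count;         /* index count for this draw             */
    uint32_t instanceCount; /* a culling pass writes 0 (culled) or 1 */
    uint32_t firstIndex;    /* start offset in the index buffer      */
    uint32_t baseVertex;
    uint32_t baseInstance;  /* can index per-draw matrix/material    */
} DrawElementsIndirectCommand;

/* CPU-side fill for illustration; a compute shader would produce the
   same records on the GPU. */
size_t build_commands(DrawElementsIndirectCommand *cmds,
                      const uint32_t *counts, const uint32_t *firsts,
                      size_t n) {
    for (size_t i = 0; i < n; ++i) {
        cmds[i].count         = counts[i];
        cmds[i].instanceCount = 1; /* assume visible */
        cmds[i].firstIndex    = firsts[i];
        cmds[i].baseVertex    = 0;
        cmds[i].baseInstance  = (uint32_t)i;
    }
    /* With the buffer bound to GL_DRAW_INDIRECT_BUFFER, one call then
       submits all n draws:
       glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                   0, (GLsizei)n, 0); */
    return n;
}
```

Using baseInstance as a per-draw lookup index (e.g. via ARB_shader_draw_parameters in the shader) is a common way to fetch matrices and materials without per-draw state changes.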
5. 5
Avoids classic
SceneGraph design
Geometry
– Vertex & Index-Buffer (VBO & IBO)
– Parts (CAD features)
Material
Matrix Hierarchy
Object
References Geometry,
Matrix, Materials
Rendering Research Framework
Same geometry
multiple objects
Same geometry
(fan) multiple parts
6. 6
Benchmark System
– Core i7 860 2.8Ghz
– Kepler Quadro K5000
– Driver upcoming this summer
Showing evolution of techniques
– Render time basic technique 32ms (31fps), CPU limited
– Render time best technique 1.3ms (769fps)
– Total speedup of 24.6x
Performance baseline
110 geometries, 66 materials
2500 objects
7. 7
foreach (obj in scene) {
setMatrix (obj.matrix);
// iterate over different materials used
foreach (part in obj.geometry.parts) {
setupGeometryBuffer (part.geometry); // sets vertex and index buffer
setMaterial_if_changed (part.material);
drawPart (part);
}
}
Basic technique 1: 32ms CPU-bound
Classic uniforms for parameters
VBO bind per part, drawcall per part, 68k binds/frame
8. 8
Basic technique 2: 17 ms CPU-bound
Classic uniforms for parameters
VBO bind per geometry, drawcall per part, 2.5k binds/frame
foreach (obj in scene) {
setupGeometryBuffer (obj.geometry); // sets vertex and index buffer
setMatrix (obj.matrix);
// iterate over parts
foreach (part in obj.geometry.parts) {
setMaterial_if_changed (part.material);
drawPart (part);
}
}
9. 9
Combine parts with same state
– Object's part cache must be rebuilt
based on material/enabled state
Drawcall Grouping
Parts with different materials in geometry: a b c d e f
Grouped and "grown" drawcalls: a b+c d e f
foreach (obj in scene) {
// sets vertex and index buffer
setupGeometryBuffer (obj.geometry);
setMatrix (obj.matrix);
// iterate over material batches: 6.8 ms -> 2.5x
foreach (batch in obj.materialCache) {
setMaterial (batch.material);
drawBatch (batch.data);
}
}
10. 10
drawBatch (batch) { // 6.8 ms
foreach range in batch.ranges {
glDrawElements (GL_.., range.count, .., range.offset);
}
}
drawBatch (batch) { // 6.1 ms -> 1.1x
glMultiDrawElements (GL_.., batch.counts[], .., batch.offsets[],
batch.numRanges);
}
glMultiDrawElements (GL 1.4)
glMultiDrawElements supports multiple index buffer ranges
offsets[] and counts[] per batch reference subranges of the Index Buffer Object
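The offsets[]/counts[] arrays can be built once when the material cache is rebuilt. A minimal C sketch with hypothetical Range/Batch types (not from the framework) that also merges contiguous ranges into "grown" drawcalls:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical types: a batch groups index-buffer ranges that share a
   material, so one glMultiDrawElements call can draw them all. */
typedef struct { uint32_t first; uint32_t count; } Range; /* in indices */

#define MAX_RANGES 64

typedef struct {
    int32_t     counts[MAX_RANGES];
    const void *offsets[MAX_RANGES]; /* byte offsets into the IBO */
    int32_t     numRanges;
} Batch;

/* Merge adjacent ranges ("grow" drawcalls) and fill the MDE arrays. */
void build_batch(const Range *ranges, size_t n, Batch *b)
{
    b->numRanges = 0;
    for (size_t i = 0; i < n; ++i) {
        uint32_t first = ranges[i].first, count = ranges[i].count;
        /* grow: merge with following ranges that are contiguous */
        while (i + 1 < n && ranges[i + 1].first == first + count) {
            count += ranges[++i].count;
        }
        b->counts[b->numRanges]  = (int32_t)count;
        b->offsets[b->numRanges] =
            (const void *)(uintptr_t)(first * sizeof(uint32_t));
        b->numRanges++;
    }
    /* then: glMultiDrawElements(GL_TRIANGLES, b->counts, GL_UNSIGNED_INT,
                                 b->offsets, b->numRanges); */
}
```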
11. 11
foreach (obj in scene) {
setupGeometryBuffer (obj.geometry);
setMatrix (obj.matrix);
// iterate over different materials used
foreach (batch in obj.materialCache) {
setMaterial (batch.material);
drawBatch (batch.geometry);
}
}
Vertex Setup
12. 12
Vertex Format Description
Index  Name      Type    Offset  Stride  Stream
0      position  float3  0       24      0
1      normal    float3  12      24      0
2      texcoord  float2  0       8       1
Buffer = Stream (attribute)
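The two-stream layout above can be mirrored in C structs; offsetof/sizeof then reproduce the table's Offset and Stride columns (float3/float2 are illustrative typedefs, not from the framework):

```c
#include <stddef.h>

/* The vertex format table, expressed as C structs: stream 0 interleaves
   position and normal (stride 24), stream 1 holds the texcoord
   (stride 8). */
typedef struct { float x, y, z; } float3;
typedef struct { float u, v; }    float2;

typedef struct {           /* stream 0, stride 24 */
    float3 position;       /* offset 0  */
    float3 normal;         /* offset 12 */
} VertexStream0;

typedef struct {           /* stream 1, stride 8 */
    float2 texcoord;       /* offset 0  */
} VertexStream1;
```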
13. 13
One call required for each attribute and stream
Format is passed when updating 'streams'
Each attribute can be considered as one stream
Vertex Setup VBO (GL 2.1)
void setupVertexBuffer (obj) {
glBindBuffer (GL_ARRAY_BUFFER, obj.positionNormal);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 24, 0); // pos
glVertexAttribPointer (1, 3, GL_FLOAT, GL_FALSE, 24, 12); // normal
glBindBuffer (GL_ARRAY_BUFFER, obj.texcoord);
glVertexAttribPointer (2, 2, GL_FLOAT, GL_FALSE, 8, 0); // texcoord
}
17. 17
foreach (obj in scene) {
setupGeometryBuffer (obj.geometry);
setMatrix (obj.matrix); // once per object
// iterate over different materials used
foreach (batch in obj.materialCaches) {
setMaterial (batch.material); // once per batch
drawBatch (batch.geometry);
}
}
Parameter Setup
18. 18
Group parameters by frequency of change
Generate GLSL shader parameters
Parameter Setup
Effect "Phong" {
Group "material" {
vec4 "ambient"
vec4 "diffuse"
vec4 "specular"
}
Group "object" {
mat4 "world"
mat4 "worldIT"
}
Group "view" {
mat4 "viewProjTM"
}
... Code ...
}
OpenGL 2 uniforms
OpenGL 3.x, 4.x buffers
19. 19
glUniform (2.x)
– one glUniform per parameter
(simple)
– one glUniform array call for all
parameters (ugly)
Uniform
// matrices
uniform mat4 matrix_world;
uniform mat4 matrix_worldIT;
// material
uniform vec4 material_diffuse;
uniform vec4 material_emissive;
...
// material fast but "ugly"
uniform vec4 material_data[8];
#define material_diffuse material_data[0]
...
20. 20
foreach (obj in scene) {
...
glUniform (matrixLoc, obj.matrix);
glUniform (matrixITLoc, obj.matrixIT);
// iterate over different materials used
foreach ( batch in obj.materialCaches) {
glUniform (frontDiffuseLoc, batch.material.frontDiffuse);
glUniform (frontAmbientLoc, batch.material.frontAmbient);
glUniform (...)
...
glMultiDrawElements (...);
}
}
Uniform
21. 21
glBindBufferBase (GL_UNIFORM_BUFFER, 0, uboMatrix);
glBindBufferBase (GL_UNIFORM_BUFFER, 1, uboMaterial);
foreach (obj in scene) {
...
glNamedBufferSubDataEXT (uboMatrix, 0, maSize, obj.matrix);
// iterate over different materials used
foreach ( batch in obj.materialCaches) {
glNamedBufferSubDataEXT (uboMaterial, 0, mtlSize, batch.material);
glMultiDrawElements (...);
}
}
BufferSubData
22. 22
Changes to existing shaders are minimal
– Surround block of parameters with uniform block
– Actual shader code remains unchanged
Group parameters by frequency
Uniform to UBO transition
layout(std140,binding=0) uniform matrixBuffer {
mat4 matrix_world;
mat4 matrix_worldIT;
};
layout(std140,binding=1) uniform materialBuffer {
vec4 material_diffuse;
vec4 material_emissive;
...
};
// matrices
uniform mat4 matrix_world;
uniform mat4 matrix_worldIT;
// material
uniform vec4 material_diffuse;
uniform vec4 material_emissive;
...
23. 23
Good speedup over multiple glUniform calls
Efficiency still dependent on size of material
Performance
Technique Draw time
Uniform 5.2 ms
BufferSubData 2.7 ms 1.9x
24. 24
Use glBufferSubData for dynamic parameters
Restrictions to get the efficient path
– Buffer only used as GL_UNIFORM_BUFFER
– Buffer is <= 64kb
– Buffer bound offset == 0 (glBindBufferRange)
– Offset and size passed to glBufferSubData must be a multiple of 4
BufferSubData
[Bar chart: glBufferSubData speedup on drivers 314.07, 332.21, and a future driver]
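One way to honor the multiple-of-4 restriction is to widen each update range, assuming the padded bytes are rewritten with their current contents; a small hypothetical helper:

```c
#include <stdint.h>

/* Pad a glBufferSubData update so offset and size stay multiples of 4,
   as the fast path requires. Widening the range is harmless as long as
   the extra bytes are written with the values they already hold. */
static uint32_t align_down4(uint32_t v) { return v & ~3u; }
static uint32_t align_up4(uint32_t v)   { return (v + 3u) & ~3u; }

void pad_update(uint32_t offset, uint32_t size,
                uint32_t *paddedOffset, uint32_t *paddedSize)
{
    *paddedOffset = align_down4(offset);
    *paddedSize   = align_up4(offset + size) - *paddedOffset;
}
```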
25. 25
UpdateMatrixAndMaterialBuffer();
foreach (obj in scene) {
...
glBindBufferRange (UBO, 0, uboMatrix, obj.matrixOffset, maSize);
// iterate over different materials used
foreach ( batch in obj.materialCaches) {
glBindBufferRange (UBO, 1, uboMaterial, batch.materialOffset, mtlSize);
glMultiDrawElements (...);
}
}
BindBufferRange
26. 26
glBindBufferRange speed independent of data size
– Material used in framework is small (128 bytes)
– glBufferSubData will suffer more with increasing data size
Performance
Technique Draw time
glUniforms 5.2 ms
glBufferSubData 2.7 ms 1.9x
glBindBufferRange 2.0 ms 2.6x
27. 27
Avoid expensive CPU -> GPU copies for static data
Upload static data once and bind subrange of buffer
– glBindBufferRange (target, index, buffer, offset, size);
– Offset aligned to GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT
– Fastest path: One buffer per binding index
BindRange
[Bar chart: glBindBufferRange speedup on drivers 314.07, 332.21, and a future driver]
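A sketch of computing aligned element offsets when packing many equally sized items into one buffer; aligned_element_offset is a hypothetical helper, and the alignment value would come from glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, ...), queried once at startup:

```c
#include <stdint.h>

/* Each element packed into a UBO for glBindBufferRange must start at a
   multiple of the driver-reported alignment (commonly 256). Round the
   element size up to the alignment to get the packing stride. */
uint32_t aligned_element_offset(uint32_t index, uint32_t elemSize,
                                uint32_t uboAlignment)
{
    uint32_t stride = (elemSize + uboAlignment - 1) / uboAlignment
                      * uboAlignment;     /* size rounded up to alignment */
    return index * stride;                /* byte offset of element #index */
}
```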
28. 28
Buffer may be large and sparse
– Full update could be 'slow' because of unused/padded data
– Too many small glBufferSubData calls
Use Shader to write into Buffer (via SSBO)
– Provides compact CPU -> GPU transfer
Incremental Buffer Updates
Target Buffer:     0 1 2 3 4 5 6 7
Update Data:       x x x
Update Locations:  0 3 5
Shader scatters data
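The scatter pass can be sketched as follows; this CPU version mirrors what each compute-shader invocation would do with the update and location arrays (names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* The SSBO scatter pass from the slide, simulated on the CPU: the
   update stream holds only the changed values plus their target
   locations, so the CPU->GPU transfer stays compact. In GLSL, each
   loop iteration would be one shader invocation. */
void scatter_updates(float *target,
                     const float    *updateData,
                     const uint32_t *updateLocations,
                     size_t numUpdates)
{
    for (size_t i = 0; i < numUpdates; ++i)
        target[updateLocations[i]] = updateData[i]; /* scattered write */
}
```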
29. 29
All matrices stored on GPU
– Use ARB_compute_shader for hierarchy updates
– Send only local matrix changes, evaluate tree
Transform Tree Updates
model courtesy of PTC
30. 30
Update hierarchy on GPU
– Level- and Leaf-wise processing depending on workload
– world = parent.world * object
Transform Tree Updates
Hierarchy levels
Level-wise: waits for the previous level's results; risk of little work per level
Leaf-wise: runs to the top, then concatenates the path downwards per thread; trades redundant calculations for more parallel work
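The leaf-wise variant might look like this on the CPU, with scalar factors standing in for matrix multiplies so the redundant walk to the root is easy to see (Node is a hypothetical type, not from the framework):

```c
#include <stddef.h>

/* Minimal leaf-wise transform update sketch: each "thread" (loop
   iteration) walks from its node up to the root, concatenating local
   transforms. Scalar factors stand in for mat4 multiplies to keep the
   example testable; no thread ever waits on another level's result. */
typedef struct {
    int   parent;  /* -1 for root */
    float local;   /* local transform (stand-in for a matrix) */
    float world;   /* output: concatenated world transform */
} Node;

void update_leafwise(Node *nodes, size_t count)
{
    for (size_t i = 0; i < count; ++i) {      /* one GPU thread per node */
        float world = nodes[i].local;
        for (int p = nodes[i].parent; p >= 0; p = nodes[p].parent)
            world = nodes[p].local * world;   /* world = parent.world * object */
        nodes[i].world = world;               /* redundant work, no sync */
    }
}
```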
31. 31
– TextureBufferObject
(TBO) for matrices
– UniformBufferObject
(UBO) with array data
to save binds
– Assignment indices
passed as vertex
attribute or uniform
– Caveat: costs for
indexed fetch
Indexed
in vec4 oPos;
uniform samplerBuffer matrixBuffer;
uniform materialBuffer {
Material materials[512];
};
in ivec2 vAssigns;
flat out ivec2 fAssigns;
// in vertex shader
fAssigns = vAssigns;
worldTM = getMatrix (matrixBuffer,
vAssigns.x);
wPos = worldTM * oPos;
...
// in fragment shader
color = materials[fAssigns.y].color;
...
32. 32
setupSceneMatrixAndMaterialBuffer (scene);
foreach (obj in scene) {
setupVertexBuffer (obj.geometry);
// iterate over different materials used
foreach ( batch in obj.materialCache ) {
glVertexAttribI2i (indexAttr, obj.matrixIndex, batch.materialIndex);
glMultiDrawElements (GL_TRIANGLES, batch.counts, GL_UNSIGNED_INT,
batch.offsets, batch.numUsed);
}
}
Indexed
33. 33
Indexed
Scene- and hardware-dependent benefit

                      Graphicscard (avg 55 tris/drawcall)   Racecar (avg 1500 tris/drawcall)
Timer                 K5000          K2000                  K5000          K2000
BindBufferRange GPU   2.0 ms         3.3 ms                 2.4 ms         7.4 ms
Indexed GPU           1.6 ms 1.25x   3.6 ms 0.9x            2.5 ms 0.96x   7.7 ms 0.96x
BindBufferRange CPU   2.0 ms                                0.5 ms
Indexed CPU           1.1 ms 1.8x                           0.3 ms 1.6x
34. 34
Recap
glUniform
– Only for tiny data (<= vec4)
glBufferSubData
– Dynamic data
glBindBufferRange
– Static or GPU generated data
Indexed
– TBO/SSBO for large/random access data
– UBO for frequent changes (bad for divergent
access)
[Bar chart: speed relative to glUniform for glUniform, glBufferSubData, and glBindBufferRange across drivers 314.07, 332.21, and a future driver]
35. 35
Combine even further
– Use MultiDrawIndirect for single
drawcall
– Can store array of drawcalls on GPU
Multi Draw Indirect
Grouped and "grown" drawcalls become a single drawcall with material/matrix changes
DrawElementsIndirect
{
GLuint count;
GLuint instanceCount;
GLuint firstIndex;
GLint baseVertex;
GLuint baseInstance;
}
DrawElementsIndirect object.drawCalls[ N ];
encodes material/matrix assignment
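Filling the indirect buffer might look like this; make_cmd is a hypothetical helper, and the baseInstance packing (matrix index in the high 16 bits, material index in the low 16) matches the decode shown on the next slide:

```c
#include <stdint.h>
#include <stddef.h>

/* GL's DrawElementsIndirect command layout (tightly packed, 20 bytes). */
typedef struct {
    uint32_t count;
    uint32_t instanceCount;
    uint32_t firstIndex;
    int32_t  baseVertex;
    uint32_t baseInstance;
} DrawElementsIndirectCommand;

/* One command per batch; baseInstance carries the packed matrix/material
   assignment. Parameter names are illustrative. */
static DrawElementsIndirectCommand make_cmd(uint32_t indexCount,
                                            uint32_t firstIndex,
                                            uint16_t matrixIndex,
                                            uint16_t materialIndex)
{
    DrawElementsIndirectCommand cmd;
    cmd.count         = indexCount;
    cmd.instanceCount = 1;               /* culling can later zero this */
    cmd.firstIndex    = firstIndex;
    cmd.baseVertex    = 0;
    cmd.baseInstance  = ((uint32_t)matrixIndex << 16) | materialIndex;
    return cmd;
}
```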
36. 36
Parameters:
– TBO and UBO as before
– ARB_shader_draw_parameters
for gl_BaseInstanceARB access
– Caveat: currently slower than the vertex-divisor technique shown at GTC 2013 for very low primitive counts (improvement being investigated)
Multi Draw Indirect
uniform samplerBuffer matrixBuffer;
uniform materialBuffer {
Material materials[256];
};
// encoded assignments in 32-bit
ivec2 vAssigns =
ivec2 (gl_BaseInstanceARB >> 16,
gl_BaseInstanceARB & 0xFFFF);
flat out ivec2 fAssigns;
// in vertex shader
fAssigns = vAssigns;
worldTM = getMatrix (matrixBuffer,
vAssigns.x);
...
// in fragment shader
color = materials[fAssigns.y].diffuse...
37. 37
Multi Draw Indirect
setupSceneMatrixAndMaterialBuffer (scene);
glBindBuffer (GL_DRAW_INDIRECT_BUFFER, scene.indirectBuffer);
foreach ( obj in scene.objects ) {
...
// draw everything in one go
glMultiDrawElementsIndirect ( GL_TRIANGLES, GL_UNSIGNED_INT,
obj->indirectOffset, obj->numIndirects, 0 );
}
38. 38
Multi Draw Indirect (MDI)
is primitive dependent
Performance
                               Graphicscard (avg 55 tris/drawcall)   Racecar (avg 1500 tris/drawcall)
Indexed GPU                    1.6 ms                                2.5 ms
MDI w. gl_BaseInstanceARB GPU  2.0 ms 0.8x                           2.5 ms
MDI w. vertex divisor GPU      1.3 ms 1.5x                           2.5 ms
Indexed CPU                    1.1 ms                                0.3 ms
MDI CPU                        0.5 ms 2.2x                           0.3 ms
39. 39
Multi Draw Indirect (MDI)
is great for non-material-
batched case
Performance
                                          Graphicscard (68,000 drawcommands)   Racecar (98,000 drawcommands)
Indexed (not batched) GPU                 6.3 ms                               8.7 ms
MDI w. vertex divisor (not batched) GPU   2.5 ms 2.5x                          3.6 ms 2.4x
Indexed (not batched) CPU                 6.4 ms                               8.8 ms
MDI w. vertex divisor (not batched) CPU   0.5 ms 12.8x                         0.3 ms 29.3x
40. 40
DrawIndirect combined with VBUM
NV_bindless_multidraw_indirect
DrawElementsIndirect
{
GLuint count;
GLuint instanceCount;
GLuint firstIndex;
GLint baseVertex;
GLuint baseInstance;
}
BindlessPtr
{
GLuint index;
GLuint reserved;
GLuint64 address;
GLuint64 length;
}
MyDrawIndirectNV
{
DrawElementsIndirect cmd;
GLuint reserved;
BindlessPtr index;
BindlessPtr vertex; // for position, normal...
}
Caveat:
– more costly than regular MultiDrawIndirect
– Should have > 500 triangles worth of work
per drawcall
41. 41
NV_bindless_multidraw_indirect
// enable VBUM vertexformat
...
glBindBuffer (GL_DRAW_INDIRECT_BUFFER, scene.indirectBuffer);
// draw the entire scene in one go
// one call per shader
glMultiDrawElementsIndirectBindlessNV
(GL_TRIANGLES, GL_UNSIGNED_INT,
scene->indirectOffset, scene->numIndirects,
sizeof(MyDrawIndirectNV),
1); // 1 vertex attribute binding
42. 42
NV_bindless_multi... is
primitive dependent
Performance
                               Graphicscard (avg 55 tris/drawcall)   Racecar (avg 1500 tris/drawcall)
MDI w. gl_BaseInstanceARB GPU  2.0 ms                                2.5 ms
NV_bindless.. GPU              2.3 ms 0.86x                          2.5 ms
MDI w. gl_BaseInstanceARB CPU  0.5 ms                                0.3 ms
NV_bindless.. CPU              0.04 ms 12.5x                         0.04 ms 7.5x
43. 43
Scalar data batching is "easy",
how about textures?
– Test adds 4 unique textures per
material
– Tri-planar texturing, no
additional vertex attributes
Textured Materials
45. 45
NV/ARB_bindless_texture
– Manage residency
uint64 glGetTextureHandle (tex)
glMakeTextureHandleResident (hdl)
– Faster binds
glUniformHandleui64ARB (loc, hdl)
– store texture handles as 64bit
values inside buffers
Textured Materials
// NEW: ARB_bindless_texture handles stored inside a buffer!
struct MaterialTex {
sampler2D tex0; // can be in struct
sampler2D tex1;
...
};
uniform materialTextures {
MaterialTex texs[128];
};
// in fragment shader
flat in ivec2 fAssigns;
...
color = texture (texs[fAssigns.y].tex0, uv);
46. 46
CPU Performance
– Raw test, VBUM+VAB,
batched by material
Textured Materials

                                        Graphicscard                  Racecar
                                        ~11,000 x 4 texture binds     ~2,400 x 4 texture binds
                                        66 x 4 unique textures        138 x 4 unique textures
glBindTextures                          6.7 ms (CPU-bound)            1.2 ms
glUniformHandleui64 (BINDLESS)          4.3 ms 1.5x (CPU-bound)       1.0 ms 1.2x
Indexed handles inside UBO (BINDLESS)   1.1 ms 6.0x                   0.3 ms 4.0x
47. 47
Share geometry buffers for batching
Group parameters for fast updating
MultiDraw/Indirect to keep objects
independent or remove additional loops
– BaseInstance provides unique
index/assignments per drawcall
Bindless to reduce validation
overhead and add flexibility
Recap
48. 48
Try to create less total workload
Many occluded parts in the car model (lots of vertices)
Occlusion Culling
49. 49
GPU friendly processing
– Matrix and bbox buffer, object buffer
– XFB/Compute or "invisible" rendering
– Vs. old techniques: single GPU job for ALL objects!
Results
– "Readback" GPU to Host
Can use GPU to pack into bit stream
– "Indirect" GPU to GPU
Set DrawIndirect's instanceCount to 0 or 1
GPU Culling Basics
0,1,0,1,1,1,0,0,0
buffer cmdBuffer{
Command cmds[];
};
...
cmds[obj].instanceCount = visible;
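The "pack into bit stream" step of the readback path can be sketched as a CPU function; on the GPU this would be a small compute pass over the visibility buffer (names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Pack per-object visibility ints (0/1, as written by the culling
   fragment shader) into a bit stream before reading back to the host,
   shrinking the readback by 32x. */
void pack_visibility(const int32_t *visibility, size_t numObjects,
                     uint32_t *bits /* (numObjects+31)/32 words */)
{
    for (size_t w = 0; w < (numObjects + 31) / 32; ++w)
        bits[w] = 0;
    for (size_t i = 0; i < numObjects; ++i)
        if (visibility[i])
            bits[i / 32] |= 1u << (i % 32); /* set this object's bit */
}
```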
50. 50
OpenGL 4.2+
– Depth-Pass
– Raster "invisible" bounding boxes
Disable Color/Depth writes
Geometry Shader to create the three
visible box sides
Depth buffer discards occluded
fragments (earlyZ...)
Fragment Shader writes output:
visible[objindex] = 1
Occlusion Culling
// GLSL fragment shader
// from ARB_shader_image_load_store
layout(early_fragment_tests) in;
buffer visibilityBuffer{
int visibility[]; // cleared to 0
};
flat in int objectID; // unique per box
void main()
{
visibility[objectID] = 1;
// no atomics required (32-bit write)
}
[Diagram: bbox fragments that pass the depth buffer test enable the object]
Algorithm by Evgeny Makarov, NVIDIA
51. 51
Few changes relative to camera
Draw each object only once
– Render last visible, fully shaded
(last)
– Test all against current depth:
(visible)
– Render newly added visible:
none, if no spatial changes made
(~last) & (visible)
– (last) = (visible)
Temporal Coherence
[Diagram: frame f-1 renders the last visible set; the camera moves; in frame f, bboxes are tested against depth — some occluded, some pass (visible) — and only the newly visible objects are rendered]
Algorithm by
Markus Tavenrath, NVIDIA
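On small scenes the set logic above reduces to bitmask operations; a sketch with one bit per object (CullState is a hypothetical type, not from the framework):

```c
#include <stdint.h>

/* Temporal-coherence set logic from the slide, on 32-object bitmasks:
   render (last), test all -> (visible), then render only the objects
   that just became visible: (~last) & (visible); finally last = visible. */
typedef struct { uint32_t last; } CullState;

uint32_t new_visible(CullState *s, uint32_t visible)
{
    uint32_t newly = ~s->last & visible; /* newly added visible objects */
    s->last = visible;                   /* (last) = (visible) */
    return newly;
}
```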
52. 52
Culling Readback vs Indirect
[Bar chart: culling cost in microseconds for readback, indirect, and NV indirect, split into CPU and GPU time, with reference lines for GPU time without culling and the GPU-time optimum with culling; lower is better]
Indirect not yet as efficient at processing "invisible" commands
For readback results, the CPU has to wait for the GPU to idle, and the GPU may remain idle until new work arrives
53. 53
10 x the car: 45 fps
– everything but materials duplicated in
memory, NO instancing
– 1m parts, 16k objects, 36m tris, 34m verts
Readback culling: 145 fps 3.2x
– 6 ms CPU time, wait for sync takes 5 ms
Stall-free culling: 115 fps 2.5x
– 1 ms CPU time using
NV_bindless_multidraw_indirect
Will it scale?
54. 54
Temporal culling
– very useful for object/vertex-bound scenes
Readback vs Indirect
– Readback should be delayed so the GPU doesn't starve of work
– Indirect benefit depends on the scene (#VBOs and #primitives)
– "Graphicscard" model didn't benefit from either culling technique on K5000
Working towards a GPU-autonomous system
– (NV_bindless)/ARB_multidraw_indirect, ARB_indirect_parameters
as mechanisms for the GPU to create its own work
Culling Results
56. 56
VBO: vertex buffer object, stores vertex data on the GPU (GL server); favor
bigger buffers to have fewer binds, or go bindless
IBO: index buffer object (GL_ELEMENT_ARRAY_BUFFER), stores vertex indices
on the GPU
VAB: vertex attribute binding, splits the vertex attribute format from the vertex
buffer
VBUM: vertex buffer unified memory, allows working with raw GPU address
values and avoids binding objects completely
UBO: uniform buffer object, data you want to access uniformly inside shaders
TBO: texture buffer object, for random access data in shaders
SSBO: shader storage buffer object, read & write arbitrary data structures
stored in buffers
MDI: Multi Draw Indirect, store draw commands in buffers
Glossary