Deferred shading is a technique where scene geometry is rendered to multiple render targets storing position, normal, and material properties, and lighting is then applied as a post-process. This improves batching and ensures each pixel is lit only once. Challenges include large framebuffer sizes and the handling of alpha blending and multisampling, but the approach makes it easy to add new light types and post-processing effects.
2. Overview
• Don’t bother with any lighting while drawing scene geometry
• Render to a “fat” framebuffer format, using multiple rendertargets to store data such as the position and normal of each pixel
• Apply lighting as a 2D postprocess, using these buffers as input
3. Comparison: Single Pass Lighting
For each object:
  Render mesh, applying all lights in one shader
• Good for scenes with small numbers of lights (e.g. outdoor sunlight)
• Difficult to organize if there are many lights
• Easy to overflow shader length limitations
4. Comparison: Multipass Lighting
For each light:
  For each object affected by the light:
    framebuffer += object * light
• Worst case complexity is num_objects * num_lights
• Sorting by light or by object are mutually exclusive: hard to maintain good batching
• Ideally the scene should be split exactly along light boundaries, but getting this right for dynamic lights can be a lot of CPU work
5. Deferred Shading
For each object:
  Render to multiple targets
For each light:
  Apply light as a 2D postprocess
• Worst case complexity is num_objects + num_lights
• Perfect batching
• Many small lights are just as cheap as a few big ones
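The complexity contrast above can be made concrete with a toy pass counter. This is an illustrative sketch, not engine code: objects and lights are bare counts, and the multipass case assumes the worst, where every light touches every object.

```python
# Toy pass-count comparison: multipass forward vs. deferred shading.

def multipass_forward_passes(num_objects, num_lights):
    # Worst case: one lighting pass per (object, light) pair.
    return num_objects * num_lights

def deferred_passes(num_objects, num_lights):
    # One geometry pass per object, then one 2D pass per light.
    return num_objects + num_lights

objects, lights = 500, 100
print("multipass:", multipass_forward_passes(objects, lights))  # 50000
print("deferred: ", deferred_passes(objects, lights))           # 600
```

With 500 objects and 100 lights, the multiplicative cost is nearly two orders of magnitude higher than the additive one, which is why many small lights become affordable.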
6. Multiple Render Targets
• Required outputs from the geometry rendering are:
  – Position
  – Normal
  – Material parameters (diffuse color, emissive, specular amount, specular power, etc.)
• This is not good if your lighting needs many input values! (spherical harmonics would be difficult)
7. Fat Framebuffers
• The obvious rendertarget layout goes something like:
  – Position: A32B32G32R32F
  – Normal: A16B16G16R16F
  – Diffuse color: A8R8G8B8
  – Material parameters: A8R8G8B8
• This adds up to 256 bits per pixel. At 1024x768, that is 24 meg, even without antialiasing!
• Also, current hardware doesn’t support mixing different bit depths for multiple rendertargets
8. Framebuffer Size Optimizations
• Store normals in A2R10G10B10 format
• Material attributes could be palettized, then looked up
in shader constants or a texture
• No need to store position as a vector3:
– Each 2D screen pixel corresponds to a ray from the eyepoint
into the 3D world (think raytracing)
– You inherently know the 2D position of each screen pixel,
and you know the camera settings
– So if you write out the distance along this ray, you can
reconstruct the original worldspace position
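A minimal sketch of that depth-based reconstruction, assuming a simple pinhole camera and a stored linear view-space depth (all names and parameters here are illustrative, not from the original):

```python
def view_ray(px, py, width, height, tan_half_fov, aspect):
    """View-space direction from the eye through pixel (px, py), with z = 1."""
    ndc_x = (px + 0.5) / width * 2.0 - 1.0
    ndc_y = 1.0 - (py + 0.5) / height * 2.0
    return (ndc_x * tan_half_fov * aspect, ndc_y * tan_half_fov, 1.0)

def reconstruct_position(px, py, depth, width, height, tan_half_fov, aspect):
    """Recover the view-space position: scale the pixel's ray by the depth.
    Because the ray has z == 1, scaling by view-space z lands on the point."""
    rx, ry, rz = view_ray(px, py, width, height, tan_half_fov, aspect)
    return (rx * depth, ry * depth, rz * depth)
```

Round-tripping a known view-space point through projection and back recovers it exactly, which is why a single depth channel can replace a full position vector.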
9. My Framebuffer Choices
• 128 bits per pixel = 12 meg @ 1024x768:
– Depth R32F
– Normal + scattering A2R10G10B10
– Diffuse color + emissive A8R8G8B8
– Other material parameters A8R8G8B8
• My material parameters are specular intensity,
specular power, occlusion factor, and shadow
amount. I store a 2-bit subsurface scattering control
in the alpha channel of the normal buffer
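The A2R10G10B10 normal packing with the 2-bit scattering code might look like this (a hedged sketch; the exact channel ordering is an assumption):

```python
def pack_normal(n, scatter):
    """Pack a normal (components in [-1, 1]) into three 10-bit channels,
    with a 2-bit subsurface scattering code in the 2-bit alpha."""
    q = [round((c * 0.5 + 0.5) * 1023) for c in n]  # [-1, 1] -> [0, 1023]
    return (scatter << 30) | (q[0] << 20) | (q[1] << 10) | q[2]

def unpack_normal(packed):
    """Inverse of pack_normal: returns (normal, scatter_code)."""
    scatter = (packed >> 30) & 0x3
    q = [(packed >> s) & 0x3FF for s in (20, 10, 0)]
    return [c / 1023 * 2.0 - 1.0 for c in q], scatter
```

Ten bits per component keeps the quantization error below about 0.001 per axis, which is plenty for lighting.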
14. How Wasteful…
• One of my 32 bit buffers holds depth values
• But the hardware is already doing this for me!
• It would be nice if there was an efficient way to get
access to the existing contents of the Z buffer
• Pretty please? ☺
15. Global Lights
• Things like sunlight and fog affect the entire scene
• Draw them as a fullscreen quad
• Position, normal, color, and material settings are read
from texture inputs
• Lighting calculation is evaluated in the pixel shader
• Output goes to an intermediate lighting buffer
16. Local Lights
• These only affect part of the scene
• We only want to render to the affected pixels
• This requires projecting the light volume into
screenspace. The GPU is very good at that sort of
thing…
• At author time, build a simple mesh that bounds the
area affected by the light. At runtime, draw this in 3D
space, running your lighting shader over each pixel
that it covers
17. Convex Light Hulls
• The light bounding mesh will have two sides, but it is
important that each pixel only gets lit once
• As long as the mesh is convex, backface culling can
take care of this
19. Convex Light Hulls #2
• If the camera is inside the light volume, draw backfaces
20. Convex Light Hulls #3
• If the light volume intersects the far clip plane, draw
frontfaces
• If the light volume intersects both near and far clip
planes, your light is too big!
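The face-selection rules from the last two slides condense into one hypothetical helper ("camera inside" stands in for the hull clipping the near plane; the helper name and return strings are made up):

```python
def light_hull_faces(camera_inside, clips_far):
    """Pick which side of a convex light hull to rasterize."""
    if camera_inside and clips_far:
        raise ValueError("volume spans both clip planes: light is too big")
    if camera_inside:
        return "backfaces"   # frontfaces would be cut off by the near plane
    if clips_far:
        return "frontfaces"  # backfaces would be cut off by the far plane
    return "either"          # both work; pick using a Z-test heuristic
```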
21. Volume Optimization
• We only want to shade the area where the light volume
intersects scene geometry
• There is no need to shade where the volume is
suspended in midair
• Or where it is buried beneath the ground
22. Stencil Light Volumes
• This is exactly the same problem as finding the
intersection between scene geometry and an
extruded shadow volume
• The standard stencil buffer solution applies
• But using stencil requires changing renderstate for
each light, which prevents batching up multiple lights
into a single draw call
• Using stencil may or may not be a performance win,
depending on context
23. Light Volume Z Tests
• A simple Z test can get things half-right, without the
overhead of using stencil
• When drawing light volume backfaces, use
D3DCMP_GREATER to reject “floating in the air”
portions of the light
24. Light Volume Z Tests #2
• If drawing frontfaces, use D3DCMP_LESS to reject
“buried underground” light regions
• Which is faster depends on the ratio of aboveground
vs. underground pixels: use a heuristic to make a
guess
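That guess might be sketched like this (the 0.5 threshold and the fraction estimate are arbitrary assumptions, not from the original):

```python
def choose_z_test(frac_in_midair):
    """Heuristic: pick whichever depth test rejects more dead pixels.
    frac_in_midair is a rough estimate of how much of the light volume
    floats in the air versus being buried underground."""
    if frac_in_midair > 0.5:
        # Mostly midair: backfaces + GREATER cull the floating portion.
        return ("backfaces", "D3DCMP_GREATER")
    # Mostly buried: frontfaces + LESS cull the underground portion.
    return ("frontfaces", "D3DCMP_LESS")
```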
25. Alpha Blending
• This is a big problem!
• You could simply not do any lighting on alpha
blended things, and draw them after the deferred
shading is complete. The emissive and occlusion
material controls are useful for crossfading between
lit and non-lit geometry
• The return of stippling?
• Depth peeling is the ultimate solution, but is
prohibitively expensive at least for the time being
26. High Dynamic Range
• You can do deferred shading without HDR, but that
wouldn’t be so much fun
• Render your scene to multiple 32 bit buffers, then use
a 64 bit accumulation buffer during the lighting phase
• Unfortunately, current hardware doesn’t support
additive blending into 64 bit rendertargets
• Workaround: use a pair of HDR buffers, and simulate
alpha blending in the pixel shader
27. DIY Alpha Blending
• Initialize buffers A and B to the same contents
• Set buffer A as a shader input, while rendering to B,
and roll your own blend at the end of the shader
• This only works as long as your geometry doesn’t
overlap in screenspace
• When things do overlap, you have to copy all
modified pixels from B back into A, to make sure the
input and output buffers stay in sync
• This is slow, but not totally impractical as long as you
sort your lights to minimize screenspace overlaps
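A toy model of this ping-pong scheme (plain Python lists standing in for the pair of HDR rendertargets; every name here is illustrative):

```python
import copy

def accumulate_lights(base, lights):
    """Simulate additive blending with a pair of buffers (DIY blend).
    base: 2D grid of [r, g, b] lists; lights: list of (pixel_set, color)."""
    a = copy.deepcopy(base)   # buffer A: shader input
    b = copy.deepcopy(base)   # buffer B: render target
    dirty = set()             # pixels written to B but not yet copied to A
    for pixels, color in lights:
        if dirty & pixels:    # screenspace overlap: resync A from B first
            for (x, y) in dirty:
                a[y][x] = list(b[y][x])
            dirty = set()
        for (x, y) in pixels:
            # "Roll your own blend": read A in the shader, write A + light to B.
            b[y][x] = [a[y][x][c] + color[c] for c in range(3)]
        dirty |= pixels
    return b
```

The resync step is the slow part the slide warns about; sorting lights so overlapping ones are far apart in the list keeps it rare.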
29. Volumetric Depth Tests
• Neat side effect of having scene depth available as a
shader input
• Gradually fade out alpha as a polygon approaches
the Z fail threshold
• No more hard edges where alpha blended particles
intersect world geometry!
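The fade itself is nearly a one-liner (a sketch; the linear falloff and the fade_range parameter are my assumptions, and both depths are taken as linear view-space distances):

```python
def soft_particle_alpha(particle_depth, scene_depth, fade_range):
    """Fade a particle's alpha to zero as it nears the Z-fail threshold.
    fade_range is the distance over which the particle dissolves into
    the geometry behind it."""
    gap = scene_depth - particle_depth  # how far in front of the scene we are
    return max(0.0, min(1.0, gap / fade_range))
```

A particle halfway through the fade band gets alpha 0.5; one behind the scene depth gets 0, which is what removes the hard intersection edge.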
30. Deferred Shading Advantages
• Excellent batching
• Render each triangle exactly once
• Shade each visible pixel exactly once
• Easy to add new types of lighting shader
• Other kinds of postprocessing (blur, heat haze) are
just special lights, and fit neatly into the existing
framework
31. The Dark Side
• Alpha blending is a nightmare!
• Framebuffer bandwidth can easily get out of hand
• Can’t take advantage of hardware multisampling
• Forces a single lighting model across the entire
scene (everything has to be 100% per-pixel)
• Not a good approach for older hardware
• But will be increasingly attractive in the future…