This talk covers changes in CryENGINE 3 technology during 2012, including DX11-related topics such as moving to deferred rendering while maintaining backward compatibility in a multiplatform engine, massive vegetation rendering, and MSAA support and how to deal with its common visual artifacts, among other topics.
Talk by Fabien Christin from DICE at GDC 2016.
Designing a big city that players can explore by day and by night while improving on the unique visual from the first Mirror's Edge game isn't an easy task.
In this talk, the tools and technology used to render Mirror's Edge: Catalyst will be discussed. From the physical sky to the reflection tech, the speakers will show how they tamed the new Frostbite 3 PBR engine to deliver realistic images with stylized visuals.
They will talk about the artistic and technical challenges they faced and how they tried to overcome them, from the simple light settings and Enlighten workflow to character shading and color grading.
Takeaway
Attendees will gain insight into the technical and artistic techniques used to create a dynamic time-of-day system with updating radiosity and reflections.
Intended Audience
This session is targeted at game artists, technical artists, and graphics programmers who want to know more about Mirror's Edge: Catalyst's rendering technology, lighting tools, and shading tricks.
Taking Killzone Shadow Fall Image Quality Into The Next Generation – Guerrilla
This talk focuses on the technical side of Killzone Shadow Fall, the platform exclusive launch title for PlayStation 4.
We present the details of several new techniques that were developed in the quest for next-generation image quality, and the talk uses key locations from the game as examples. We discuss interesting aspects of the new content pipeline, the next-gen lighting engine, the use of indirect lighting, and various shadow rendering optimizations. We also describe the details of volumetric lighting, the real-time reflections system, and the new anti-aliasing solution, and include some details about the image-quality-driven streaming system. A common and very important theme of the talk is temporal coherency and how it was utilized to reduce aliasing and improve rendering quality and image stability above the baseline 1080p resolution seen in other games.
Talk by Graham Wihlidal (Frostbite Labs) at GDC 2017.
Checkerboard rendering is a relatively new technique, popularized recently by the introduction of the PlayStation 4 Pro. Many modern game engines are adding support for it right now, and in this talk, Graham will present an in-depth look at the new implementation in Frostbite, which is used in shipping titles like 'Battlefield 1' and 'Mass Effect Andromeda'. Despite being conceptually simple, checkerboard rendering requires a deep integration into the post-processing chain, in particular temporal anti-aliasing, dynamic resolution scaling, and poses various challenges to existing effects. This presentation will cover the basics of checkerboard rendering, explain the impact on a game engine that powers a wide range of titles, and provide a detailed look at how the current implementation in Frostbite works, including topics like object id, alpha unrolling, gradient adjust, and a highly efficient depth resolve.
The presentation describes the physically based lighting pipeline of Killzone: Shadow Fall, a PlayStation 4 launch title. The talk covers the studio's transition to a new asset creation pipeline based on physical properties. It also describes the light rendering systems used in the new 3D engine, built from the ground up for the upcoming PlayStation 4 hardware. A novel real-time lighting model simulating physically accurate area lights will be introduced, as well as a hybrid ray-traced/image-based reflection system.
We believe that physically based rendering is a viable way to optimize the efficiency and quality of the asset creation pipeline. It also enables rendering quality to reach a new level while remaining highly flexible to art direction requirements.
Secrets of CryENGINE 3 Graphics Technology – Tiago Sousa
In this talk, the authors will describe an overview of a different method for deferred lighting approach used in CryENGINE 3, along with an in-depth description of the many techniques used. Original file and videos at http://crytek.com/cryengine/presentations
Graphics Gems from CryENGINE 3 (Siggraph 2013) – Tiago Sousa
This lecture covers rendering topics related to Crytek's latest engine iteration, the technology that powers titles such as Ryse, Warface, and Crysis 3. Among the covered topics, Sousa presented SMAA 1TX, an update featuring a robust and simple temporal anti-aliasing component; performant and physically plausible camera-related post-processing techniques, such as motion blur and depth of field, were also covered.
Course presentation at SIGGRAPH 2014 by Charles de Rousiers and Sébastien Lagarde of Electronic Arts about transitioning the Frostbite game engine to physically based rendering.
Make sure to check out the 118 page course notes on: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/
During the last few months, we have revisited the concept of image quality in Frostbite. The core of our approach was to be as close as possible to a cinematic look. We used the concept of reference to evaluate the accuracy of produced images. Physically based rendering (PBR) was the natural way to achieve this. This talk covers all the different steps needed to switch a production engine to PBR, including the small details often bypassed in the literature.
The state of the art of real-time PBR techniques allowed us to achieve good overall results, but not without production issues. We present some techniques for improving convolution time for image-based reflections, proper ambient occlusion handling, and coherent lighting units, which are mandatory for level editing.
Moreover, we have managed to reduce the quality gap, highlighted by our systematic reference comparison, in particular related to rough material handling, glossy screen space reflection, and area lighting.
The technical part of PBR is crucial for achieving good results, but represents only the tip of the iceberg. Frostbite has become the de facto high-end game engine within Electronic Arts and is now used by a large number of game teams. Moving all these game teams from "old-fashioned" lighting to PBR has required a lot of education, which was done in parallel with the technical development. We have provided editing and validation tools to help the transition of art production. In addition, we have built a flexible material parametrisation framework to adapt to the various authoring tools and game teams' requirements.
Siggraph 2016 – The Devil is in the Details: idTech 666 – Tiago Sousa
A behind-the-scenes look into the latest renderer technology powering the critically acclaimed DOOM. The lecture will cover how the technology was designed to balance visual quality against performance. Numerous topics will be covered, among them details about the lighting solution, techniques for decoupling the frequency of shading costs, and GCN-specific approaches.
Screen Space Decals in Warhammer 40,000: Space Marine – Pope Kim
My Siggraph 2012 presentation slides on Screen Space Decals in Warhammer 40,000: Space Marine.
SSD is similar to deferred decals, so I focused more on the problems we had and how we solved (or avoided) them.
This session presents a detailed programmer-oriented overview of our SPU-based shading system implemented in DICE's Frostbite 2 engine and how it enables more visually rich environments in BATTLEFIELD 3 and better performance than traditional GPU-only renderers. We explain in detail how our SPU tile-based deferred shading system is implemented, and how it supports rich material variety, high-dynamic-range lighting, and large numbers of light sources of different types through an extensive set of culling, occlusion, and optimization techniques.
For this year's keynote at High Performance Graphics 2018, Colin Barré-Brisebois from SEED discussed the state of the art in real-time game ray tracing. He explored some of the connections between offline and real-time game ray tracing, and presented some of the open problems. Colin exposed a few potential solutions to those problems, and also proposed a call-to-arms on topics where the ray tracing research community and the games industry should unite in order to solve such open problems.
A technical deep dive into the DX11 rendering in Battlefield 3, the first title to use the new Frostbite 2 Engine. Topics covered include DX11 optimization techniques, efficient deferred shading, high-quality rendering and resource streaming for creating large and highly-detailed dynamic environments on modern PCs.
FlameWorks presentation from NVIDIA GTC 2014.
Learn how to add volumetric effects to your game engine - smoke, fire and explosions that are interactive, more realistic, and can actually render faster than traditional sprite-based techniques. Volumetrics remain one of the last big differences between real-time and offline visual effects. In this talk we will show how volumetric effects are now practical on current GPU hardware. We will describe several new simulation and rendering techniques, including new solvers, combustion models, optimized ray marching and shadows, which together can make volumetric effects a practical alternative to particle-based methods for game effects.
Optimizing the Graphics Pipeline with Compute, GDC 2016 – Graham Wihlidal
With further advancement in the current console cycle, new tricks are being learned to squeeze the maximum performance out of the hardware. This talk will present how the compute power of the console and PC GPUs can be used to improve the triangle throughput beyond the limits of the fixed function hardware. The discussed method shows a way to perform efficient "just-in-time" optimization of geometry, and opens the way for per-primitive filtering kernels and procedural geometry processing.
Takeaway:
Attendees will learn how to preprocess geometry on-the-fly per frame to improve rendering performance and efficiency.
Intended Audience:
This presentation is targeting seasoned graphics developers. Experience with DirectX 12 and GCN is recommended, but not required.
Efficient occlusion culling in dynamic scenes is a very important topic to the game and real-time graphics community in order to accelerate rendering. We present a novel algorithm inspired by recent advances in depth culling for graphics hardware, but adapted and optimized for SIMD-capable CPUs. Our algorithm has very low memory overhead and is three times faster than previous work, while culling 98% of the triangles that a full-resolution depth-buffer approach would cull. It supports interleaving occluder rasterization and occlusion queries without penalty, making it easy to integrate into existing rendering pipelines.
Rendering Technologies from Crysis 3 (GDC 2013)
1. The Rendering Technologies of Crysis 3
Tiago Sousa – R&D Principal Graphics Engineer
Carsten Wenzel – R&D Lead Software Engineer
Chris Raine – R&D Senior Software Engineer
Crytek
2. Thin G-Buffer 2.0
● For Crysis 3, wanted:
● Minimize redundant drawcalls
● Alpha-blended (AB) details on G-Buffer with proper glossiness
● Tons of vegetation => Deferred translucency
● Multiplatform friendly
3. Thin G-Buffer 2.0
Channels                                                  Format
Depth | AmbID, Decals                                     D24S8
N.x | N.y | Gloss, Zsign | Translucency                   A8B8G8R8
Albedo Y | Albedo Cb,Cr | Specular Y | Per-Project        A8B8G8R8
12. G-Buffer Packing
World space normal packed into 2 components (WIKI00)
Stereographic projection worked ok in practice (also cheap)
Glossiness + Normal Z sign packed together
Encode (stereographic projection, using |z|; the sign of z is stored with the gloss channel):

    (X, Y) = ( x / (1 + |z|),  y / (1 + |z|) )

Decode:

    (x, y, z) = ( 2X / (1 + X² + Y²),  2Y / (1 + X² + Y²),  Zsign · (1 − X² − Y²) / (1 + X² + Y²) )

Glossiness + normal Z sign packed into one channel:

    GlossZsign = (Gloss · Zsign) · 0.5 + 0.5
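As a sketch, the normal packing described on this slide can be written out in scalar code. Function and parameter names here are illustrative, not taken from the talk, and the actual shader implementation is not shown in the slides.

```python
import math

def encode_normal(x, y, z):
    """Stereographic projection of a unit normal onto 2 components.
    |z| is used so the denominator never approaches zero; the sign
    of z is packed together with glossiness (see below)."""
    d = 1.0 + abs(z)
    return x / d, y / d

def decode_normal(X, Y, z_sign):
    """Inverse stereographic projection; z_sign restores the hemisphere."""
    d = 1.0 + X * X + Y * Y
    z = (1.0 - X * X - Y * Y) / d
    return 2.0 * X / d, 2.0 * Y / d, math.copysign(z, z_sign)

def pack_gloss_zsign(gloss, z_sign):
    """Signed gloss remapped from [-1, 1] to [0, 1] for storage."""
    return (gloss * z_sign) * 0.5 + 0.5

def unpack_gloss_zsign(packed):
    """Recover glossiness and the normal's z sign from one channel."""
    signed = packed * 2.0 - 1.0
    return abs(signed), (1.0 if signed >= 0.0 else -1.0)
```

A round trip through encode/decode reproduces the input normal exactly up to floating-point precision, which is why the slide notes the projection "worked ok in practice".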
13. G-Buffer Packing (2)
Albedo in Y’CbCr color space (WIKI01)
Stored in 2 channels via Chrominance Subsampling (WIKI02)
RGB → Y'CbCr:

    Y' = 0.299·R + 0.587·G + 0.114·B
    Cb = 0.5 − 0.168·R − 0.331·G + 0.5·B
    Cr = 0.5 + 0.5·R − 0.418·G − 0.081·B

Y'CbCr → RGB:

    R = Y' + 1.402·(Cr − 0.5)
    G = Y' − 0.344·(Cb − 0.5) − 0.714·(Cr − 0.5)
    B = Y' + 1.772·(Cb − 0.5)
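A sketch of the BT.601-style Y'CbCr conversion and the checkerboard chroma storage it enables. The function names and the exact subsampling pattern are assumptions for illustration; the slides do not show the shader code.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601-style RGB -> Y'CbCr; chroma is biased by 0.5 into [0, 1]."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168 * r - 0.331 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418 * g - 0.081 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform back to RGB."""
    r = y + 1.402 * (cr - 0.5)
    g = y - 0.344 * (cb - 0.5) - 0.714 * (cr - 0.5)
    b = y + 1.772 * (cb - 0.5)
    return r, g, b

def stored_chroma(px, py, cb, cr):
    """Chrominance subsampling: each pixel stores luma plus only ONE
    chroma component, alternating in a checkerboard; the missing
    component is reconstructed from neighbours when the G-Buffer is
    read back.  This is how albedo fits in 2 channels."""
    return cb if (px + py) % 2 == 0 else cr
```

Storing luma at full rate and chroma at half rate exploits the eye's lower sensitivity to chrominance detail, which is what makes the 2-channel albedo viable.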
14. Hybrid Deferred Rendering
Deferred lighting still processed as usual (SOUSA11)
L-Buffers now using BW friendlier R11G11B10F formats
Precision was sufficient, since material properties not applied yet
Deferred shading composited via fullscreen pass
For more complex shading such as Hair or Skin, process forward passes
Allowed us to drop almost all opaque forward passes
Less Drawcalls, but G-Buffer passes now with higher cost
Fast Double-Z Prepass for some of the closest geometry helps slightly
Overall was nice win, on all platforms*
16. Thin G-Buffer Benefits
Unified solution across all platforms
Deferred Rendering for less BW/Memory than vanilla
Good for MSAA + avoiding tiled rendering on Xbox360
Tackle glossiness for transparent geometry on G-Buffer
Alpha blended cases, e.g. Decals, Deferred Decals, Terrain Layers
Can composite all such cases directly into G-Buffer
Avoid need for multipass
Deferred sub-surface scattering
Visual + performance win, in particular for vegetation rendering
17. Thin G-Buffer Hindsights
Why not pack G-Buffer directly?
Because we need to be able to blend details into G-Buffer
Would need to decode –> blend –> encode
Or could blend such cases into separate targets (bad for MSAA/Consoles)
Programmable blending would have been nice
Transparent cases can’t use alpha channel for store*
sRGB output applies either to all channels or to none; per-channel control is missing
Would allow for more interesting and optimal packing schemes
While at it, stencil write from fragment shader would also be handy
18. Volumetric Fog Updates
Density calculation based on fog model established for
Crysis 1 (WENZEL06)
Deferred pass for opaque geometry
Per-Vertex approximation for transparent geometry
19. Volumetric Fog Updates
Little tuning: Artist controllable gradients (via ToD tool)
Height based: Density and color for specified top and bottom height
Radial based: Size, color and lobe around sun position
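The height-based gradient can be sketched as a simple clamped lerp between the two artist-set heights (parameter names are mine, hypothetical; the real ToD tool exposes more controls):

```python
def height_fog_gradient(h, bottom, top, density_bottom, density_top):
    """Linear density gradient between artist-set bottom and top heights,
    clamped outside the range."""
    t = max(0.0, min(1.0, (h - bottom) / (top - bottom)))
    return density_bottom + (density_top - density_bottom) * t
```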
20. Volumetric Fog Shadows
Based on TÓTH09: Don’t accumulate in-scattered light but
shadow contribution along view ray instead
21. Volumetric fog shadows
Interleave pass distributes 1024 shadow samples on an 8x8
grid shared by neighboring pixels
Half resolution destination target
Gather pass computes final shadow value
Bilateral filtering was used to minimize ghosting and halos
Shadow stored in alpha, 8 bit depth in red channel
Used 8 taps to compare against center full resolution depth
Max sample distance configurable (~150-200m in C3 levels)
Cloud shadow texture baked into final result
Final result modifies fog height and radial color
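A sketch of the interleave step above (my own indexing scheme, not the shipped shader): each half-res pixel marches only 16 of the 1024 samples, taking every 64th depth along the view ray offset by its cell index, so the 8x8 neighbourhood jointly covers all of them.

```python
def interleaved_sample_depths(px, py, max_dist=200.0, grid=8, total=1024):
    """Pixel (px, py) takes every 64th sample along the view ray, offset by
    its cell index in the repeating 8x8 grid; neighbours fill in the gaps."""
    cell = (py % grid) * grid + (px % grid)   # 0..63
    step = grid * grid                        # 64
    return [(cell + j * step + 0.5) / total * max_dist
            for j in range(total // step)]    # 16 depths per pixel
```

The gather pass then combines the neighbourhood with a bilateral filter so the per-pixel undersampling doesn't show as banding.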
25. Silhouette POM
Alternative to tessellation based displacement mapping
Looked into various approaches, most weren’t practical for production
Current implementation is based on principle of barycentric
correspondence (JESCHKE07)
26. Silhouette POM: Steps
Transform vertices and extrude - VS
Generate prisms (do not split into tetrahedra) and setup clip planes - GS
Generally prism sides are bilinear patches, we approximate by a
conservative plane
Note to IHVs: Emitting per-triangle constants would be nice!
In theory, on DX11.1, we could emit via UAV output?
Ray marching - PS
Compute intersection of view ray with prism in WS, translate to texture
space via (Jeschke07) barycentric correspondence
Use resulting texture uv and height for entry and exit to trace height field
Compute final uv and selectively discard pixel (viewer below height map; view
ray leaving prism before hitting terrain)
Lots of pressure on PS, yet GS is the bottleneck (prism gen)
30. Massive Grass: Simulation
Grass blade instance:
A chain of points held together by constraints
Distance + bending constraints to try to maintain the local space rest pose
angle per-particle
Physics collision geometry converted into small sphere set
Collisions handled as plane constraints
No stable collision handling, overdamp the instance
Applied to vegetation meshes via software-skinning
Exposed parameters per group:
Stiffness, damping, wind force factor, random variance
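A minimal 2D sketch of the distance-constraint core of such a chain (my own simplification; the real system also has bending constraints, sphere collisions, wind forcing and damping):

```python
import math

def solve_distance_constraints(points, rest_len, iterations=20):
    """Iteratively pull each pair of neighbouring particles back to their
    rest distance (position-based, Gauss-Seidel style)."""
    for _ in range(iterations):
        for i in range(len(points) - 1):
            (ax, ay), (bx, by) = points[i], points[i + 1]
            dx, dy = bx - ax, by - ay
            d = math.hypot(dx, dy) or 1e-9
            corr = 0.5 * (d - rest_len) / d   # split the correction equally
            points[i]     = (ax + dx * corr, ay + dy * corr)
            points[i + 1] = (bx - dx * corr, by - dy * corr)
    return points
```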
35. Massive Grass: Mesh Merging
One patch results in N-Meshes
N is number of materials used
Instances grouped into 16x16x16 meter patches (yes, volumetric)
Typical Numbers:
50k – 70k visible instances on consoles. PC > 100k
Instances have 18 to 3.6k vertices depending on mesh complexity
Closest instances simulated every frame
Based on distance: simulation and time sliced skinning
Instances removed further away
37. Massive Grass: Update Loop
Culling process (for each visible patch):
Mark visible instances
Compute LOD
Check if instance should be skipped in distance
After culling:
Allocate (from pool) dynamic VB/IB memory for each patch
Sample force fields into per-patch buffer (coarse discretization 4x4x4)
Sample physics for potential colliders, extract collider geometry
Dispatch sim & skin jobs for each patch
38. Massive Grass: Challenges
Efficient buffer management
Resulting meshes can vary in size per frame
Naive implementation (C2) resulted in bad perf on PC and out of vram
on consoles due to fragmentation
Current implementation inspired by “Don’t Throw it all Away” (McDONALD12)
Large pools for dynamic IB/VB
Each maintains two free lists (usable and pending)
Each item in pending list is moved to main free list as soon as GPU
query guarantees GPU done with pool
1.3 MB consoles main memory and PC 16 MB
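The two-free-list scheme above can be sketched as follows (class and method names are mine; the engine tracks GPU progress via occlusion/event queries rather than a single fence value):

```python
class DynamicBufferPool:
    """Pool of fixed-size blocks with two free lists: blocks freed by the CPU
    go to 'pending' and only become 'usable' once a GPU fence confirms the
    GPU has finished reading them."""
    def __init__(self, num_blocks):
        self.usable = list(range(num_blocks))
        self.pending = []                     # (block_id, fence_value)

    def alloc(self):
        return self.usable.pop() if self.usable else None

    def free(self, block_id, fence_value):
        self.pending.append((block_id, fence_value))

    def retire(self, completed_fence):
        # Move anything the GPU is provably done with back to the main list
        done = [b for b, f in self.pending if f <= completed_fence]
        self.pending = [(b, f) for b, f in self.pending if f > completed_fence]
        self.usable.extend(done)
```

This avoids both stalling on in-flight buffers and the fragmentation of naive per-frame allocation.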
39. Massive Grass: Challenges (2)
Efficient scheduling:
Patch instances are divided into small groups
Sim job kicked off for each group in main thread
DP in render thread has blocking wait for sim job
Job considered low-priority
Important:
Avoid unnecessary copies, skin directly to final destination
Reduce throughput and memory requirements (used half & fixed point
precision everywhere)
PC: ~15 ms, 300 to 600 jobs on worst case scenarios
Xbox360 ~16ms, 800 jobs; PS3 ~10ms, 100-400 jobs
40. Massive Grass: Challenges (3)
Alpha tested geometry, literally everywhere
Massive overdraw, also troublesome for MSAA
Literally the worst case scenario for RSX due to poor z-cull
Prototyped alternatives (e.g. geometry based)
Art was not happy with these unfortunately
End solution: keep it simple
G-Buffer stage minimalistic
Consoles: Mostly outputting vertex data
Art side surface coverage minimization
42. DX11 Deferred MSAA: 101
The problem:
Multiple passes and reading/writing from Multisampled Render Targets
SV_SampleIndex / SV_Coverage system value semantics allow to solve
via multipass for pixel/sample frequency passes (Thibieroz08)
SV_SampleIndex
Forces pixel shader execution for each sub-sample
SV_SampleIndex provides index of the sub-sample currently executed
Index can be used to fetch sub-sample from your Multisampled RT
E.g. FooMS.Load( UnnormScreenCoord, nCurrSample)
SV_Coverage
Indicates to pixel shader which sub-samples covered during raster stage
Can also modify sub-sample coverage for custom coverage mask
43. DX11 Deferred MSAA
Foundation for almost all our supported AA techniques
Simple theory => troublesome practice
At least with fairly complex and deferred based engines
Disclaimer:
Non-MSAA friendly code accumulates fast
Breaks regularly as new techniques added with no care for MSAA
Pinpoint non-msaa friendly techniques, and update them one by one.
Rinse and repeat and you’ll get there eventually.
Will be enforced by default on our future engine versions
44. Custom Resolve & Per-Sample Mask
Post G-Buffer, perform a custom msaa resolve:
Outputs sample 0 for lighting/other msaa dependent passes
Creates sub-sample mask on same pass, rejecting similar samples
Tag stencil with sub-sample mask
How to combine with existing complex techniques that
might be using Stencil Buffer already?
Reserve 1 bit from stencil buffer
Update it with sub-sample mask
Make usage of stencil read/write bitmask to avoid bit override
Restore whenever a stencil clear occurs
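The reserved-bit bookkeeping relies on the masked stencil write rule, sketched here (an illustration of the fixed-function behaviour, not engine code):

```python
def stencil_masked_write(stencil, value, write_mask):
    """Only bits enabled in the write mask are updated; the reserved
    per-sample bit (e.g. 0x80) survives ordinary stencil writes."""
    return (stencil & ~write_mask & 0xFF) | (value & write_mask)
```

A pass writing with StencilWriteMask = 0x7F can therefore never clobber the 0x80 sub-sample bit.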
48. Pixel/Sample Frequency Passes
Ensure disabling sample bit override via stencil write mask
StencilWriteMask = 0x7F
Pixel Frequency Passes
Set stencil read mask to reserved bits for per-pixel regions (~0x80)
Bind pre-resolved (non-multisampled) targets SRVs
Render pass as usual
Sample Frequency Passes
Set stencil read mask to reserved bit for per-sample regions (0x80)
Bind multisampled targets SRVs
Index current sub-sample via SV_SAMPLEINDEX
Render pass as usual
49. Alpha Test Super-Sampling
● Alpha testing is a special case
● Default SV_Coverage only applies to triangle edges
● Create your own sub-sample coverage mask
● E.g. check whether the current sub-sample passes the alpha test and set its bit
// 2 thumbs up for standardized MSAA offsets on DX11 (and even documented!)
static const float2 vMSAAOffsets[2] = {float2(0.25, 0.25),float2(-0.25,-0.25)};
const float2 vDDX = ddx(vTexCoord.xy);
const float2 vDDY = ddy(vTexCoord.xy);
[unroll] for(int s = 0; s < nSampleCount; ++s)
{
float2 vTexOffset = vMSAAOffsets[s].x * vDDX + vMSAAOffsets[s].y * vDDY;
float fAlpha = tex2D(DiffuseSmp, vTexCoord + vTexOffset).w;
uCoverageMask |= ((fAlpha - fAlphaRef) >= 0) ? (uint(0x1) << s) : 0;
}
52. Corner Cases
Cascades sun shadow maps:
Doing it “by the book” gets expensive quickly
Render shadows as usual at pixel frequency
Bilateral upscale during deferred shading
composite pass
53. Corner Cases
Soft particles (or similar techniques accessing depth):
The recommendation to tackle via per-sample frequency is quite slow in
real world scenarios
Max Depth instead works quite OK for most cases and is N-times faster
Bad Good
54. MSAA Friendliness
MSAA unfriendly techniques, the usual suspects:
No AA at all or noticeable bright/dark silhouettes
Bad Good
55. MSAA Friendliness
MSAA unfriendly techniques, the usual suspects:
No AA at all or noticeable bright/dark silhouettes
Bad Good
56. MSAA Friendliness
Rules of thumb:
Accessing and/or rendering to Multisampled Render Targets?
Then you’ll need to care about accessing/outputting correct sub-sample
Obviously, always minimize BW – avoid fat formats
The latter is always valid, but even more so for MSAA cases
57. MSAA Correctness vs Performance
Our goal was correctness and quality over performance
You can always cut some corners as most games do:
Alpha to Coverage instead of Alpha Test Super-Sampling
Or even no Alpha Test AA
Render only opaque with MSAA
Then render alpha blended passes without MSAA
Assuming HDR rendering: note that tone mapping is implicitly done post-
resolve, resulting in loss of detail in high contrast regions
Note to IHVs: Having explicit access to HW capabilities
such as EQAA/CSAA would be nice
Smarter AA combos
58. Conclusion
● What’s next for CryENGINE ?
● A Big Next Generation leap is finally upon us
● In 2 years' time, GPUs will be at ~16 TFLOPS with ridiculous amounts
of available memory.
● Extrapolate results from there, without >8 year old consoles slowing progress
● 4k resolution will bring some interesting challenges/opportunities
● Call to arms - still a lot of problems to solve
● IHVs/Microsoft: PC GPU profilers have a lot to evolve! How about a
unified GPU Profiler, working great for all IHVs?
● Microsoft: Sup with DX11 (lack of) documentation? Where’s DX12?
● You: No great realtime GI / realtime reflections solution yet!
59. Special Thanks
● Nicolas Thibieroz
● Chris Auty, Carsten Wenzel, Chris Raine, Chris Bolte,
Baldur Karlsson, Andrew Khan, Michael Kopietz, Ivo Zoltan
Frey, Desmond Gayle, Marco Corbetta, Jake Turner, Pierre-
Yves Donzallaz, Magnus Larbrant, Nicolas Schulz, Nick
Kasyan, Vladimir Kajalin..
Uff… let’s just make it shorter:
Thanks to the entire Crytek Team ^_^
64. Massive Grass: Challenges
Trick: Updating allocation done with Copy-On-Write in case
GPU still using original location
Consoles: incrementally defragment pools with GPU memory
copies
Also possible on PC, but more expensive due to CopySubResource
limitations (need scratchpad memory, since CSR won’t allow copies
where Dst/Src are same resource)
Note to IHVs: Being able to copy from same Dst/Src resource, if non-
overlapping memory regions, would be handy
Ended up using allocation & usage scheme for static
geometry as well
Editor's Notes
Hi everyone! Welcome to “The Rendering Technologies of Crysis 3” – our latest game, which, I’m sure you’ve heard, has a lot of GRAPHICS! My name is Tiago Sousa, I’m Crytek’s R&D Principal Graphics Engineer. Unfortunately Carsten and Chris couldn’t be on stage with me today, but I’ll do my best to present some of their great work. During the past year we’ve made quite some multiplatform and DX11 related updates to our CryENGINE 3. I’ve picked 5 topics for you today from these updates that I hope you’ll like: Deferred Rendering, Volumetric Fog, Silhouette POM, Massive Grass, and Anti-Aliasing. Each of the topics would deserve a separate and meticulous lecture of its own, but I’ll try to clearly share the foundations/concepts of the work we did. Before we start, a heads up that I’m assuming most here are familiar with CryENGINE 3 rendering; if not, please check out our previous GDC/Siggraph/Gamefest talks after this lecture. So, without further ado, let’s quickly start – we have a lot of ground to cover!
Thin G-Buffer 2.0. The first topic we’ll cover is deferred rendering and what changed here. For Crysis 3 there were 4 areas we wanted to improve: Minimize redundant drawcalls – one big flaw of deferred lighting is the requirement for the additional shading drawcall; we wanted to get rid of this, particularly important for MSAA support. Alpha blended details on the G-Buffer (decals, deferred decals and similar) with proper glossiness – on Crysis 2 (in case you didn’t notice) most decals had a fixed glossiness factor; we wanted art to be able to use nice gloss maps and such. Tons of vegetation on screen – this means we needed to somehow tackle translucency for all deferred light types, including the sun. Multiplatform friendly – last but not least, Crysis 3 had the smallest fulltime tech development team ever (2 rendering guys in Frankfurt), so we aimed at generalized solutions that either work on all platforms or just on DX11, to minimize QA efforts.
This was our final G-Buffer layout. Essentially a 64 bit MRT setup + 32 bits for zbuffer & stencil.
Let’s break it down into bits for easier visualization.We start with our final target image, essentially everything is done (shadows, shading, tone mapping, etc)
Depth & Stencil. The usual. The only thing is that for stencil we do some magic: 1 bit is reserved to tag dynamic geometry (for masking out deferred decals – a real fix for deferred decals is tricky/expensive), and 7 bits for tagging ambient areas, so that art can specify different ambient for some geometry (while avoiding leaking; we have a couple of different techniques for art convenience).
2 channels for world space normals storage
For the second target, we have additional material properties. On the red channel, albedo luminance is stored.
On green channel, albedo chrominance is stored, packed via chrominance subsampling – more details soon
Blue channel stores specular intensity. As you know color for specular intensity is mostly needed just for certain metals – for us was an acceptable compromise
G-Buffer packing. As mentioned: Normals are stored in 2 channels; stereographic projection worked OK in practice for us. We packed the Z-sign together with 7 bits of glossiness. Important: these little tricks are what allowed us to have glossiness support for alpha blended cases and free 1 channel for storing translucency.
Albedo is stored using the Y´CbCr color space. It might look like quite some instructions, but it is actually fairly cheap in practice, a couple of ALUs. This is stored in 2 channels, via chrominance subsampling. Important: the concept here is that the Human Visual System has much lower acuity for color differences; we are actually much better at detecting luminance differences. This means in practice we can store chrominance at a lower frequency. Several packing schemes exist.
Hybrid Deferred Rendering. This is an old idea from the beginning of Crysis 2 times (way back in 2008), but back then we didn’t notice much benefit, likely due to much simpler levels. Important: the concept here is to use deferred rendering for everything that is “deferred compatible”; the rest is still processed using forward rendering. Step by step: Deferred lighting accumulation still processed as usual (SOUSA11 – Sousa, T. “CryENGINE 3 Rendering Techniques”, 2011). L-Buffers now use BW friendly R11G11B10F formats; consoles still use the same formats as before. Precision was sufficient, since material properties are not applied yet – you need the precision mostly when applying material properties. Deferred shading is composited via a fullscreen pass. This is where material properties are applied; it still uses the R16G16B16A16F format. In theory we could use lower precision + range scaling as we do on consoles (didn’t try). For more complex shading such as Hair or Skin, we still process forward. This allowed us to drop almost all opaque forward passes. Fewer drawcalls, but G-Buffer passes with higher cost. Z-Prepass for the few nearest pieces of geometry. Important: *Up to 10 ms on consoles on fairly heavy scenes, and also a fairly nice win for MSAA (regular deferred lighting + MSAA work fairly poorly together).
Here we can see the behaviour: red is for all pixels processed via deferred, green for all pixels still forward rendered.
To recap what was said: Unified solution for all platforms. Deferred rendering using 25% less BW than vanilla deferred. Good for MSAA / avoiding tiled rendering on Xbox360. Allows tackling glossiness for transparent geometry on the G-Buffer, and also sub-surface scattering for all deferred lights.
Thin G-Buffer Hindsights: Why not pack the G-Buffer directly into a 64 bit target? Because we need to be able to blend details into the G-Buffer. We would need to decode –> blend –> encode, or blend such cases into separate targets (bad for MSAA/Consoles). Programmable blending would have been nice. Alpha blended cases can’t use the alpha channel for storage (for all MRTs!)* – without resorting to multipass. It would allow for more interesting and optimal packing schemes. sRGB output for only a couple of channels, or all. While at it, stencil write from the fragment shader would also be handy.
Volumetric Fog Updates: Mostly the same since Crysis 1 times, with a couple of updates. The fog density calculation is still the same model that Carsten introduced in his “Real Time Atmospheric Effects in Games”, in 2006. Still rendered in a deferred fashion as a fullscreen pass for opaque geometry. One little optimization here was computing the distance at which fog contributes or not at all and setting minZ accordingly for depth bounds checking (you could also achieve the same by rendering a quad at such depth + depth test). For transparents, we still do a per-vertex approximation, unless it is some visually important/low tessellation case such as water, for which we compute it per-pixel.
One update we made was exposing artist controllable gradients. Height based gradients allow controlling color and density for the top and minimum heights. The radial gradient allows art to control the color/size/lobe around the sun position. Not super physically based, but it was one of those things art kept requesting for artistic control.
Volumetric Fog Shadows. Something new we introduced for Crysis 3. Our work is based on “Real Time Volumetric Lighting in Participating Media”, by TOTH et al. in 2009. Important: the concept here is to not accumulate in-scattered light; we only accumulate the shadow contribution along the view ray. Fairly simple: imagine you have a volume, discretize it, say divide it into 16 points, and for each point sample the shadow map to check whether that location is in shadow or not.
The technique is fairly simple: We interleave 1k samples on an 8x8 grid, so for each pixel we use 16 taps. This is done, of course, at half resolution. Then a fullscreen composite pass computes the final shadow value. Bilateral filtering was used to minimize artifacts. In our case, we used 8 taps from a low resolution depth buffer to compare with full resolution depth. All data for the composite step is stored in the same target; 8 bit precision for depth sufficed to tackle the most obvious artifacts. Extra: Max sample distance configurable (~150-200m in C3 levels). Cloud shadow texture baked into the final result. The final result modifies the height and radial color components of the fog.
Alternative to tessellation based displacement mappingLooked into various approaches, most weren’t practical for productionCurrent implementation is based on principle of barycentric correspondence introduced (afawk) by JESCHKE07 - Jeschke, S. et al. “Interactive Smooth and Curved Shell Mapping”, 2007
JESCHKE07 - Jeschke, S. et al. “Interactive Smooth and Curved Shell Mapping”, 2007Alternative to tessellation based displacement mappingLooked into various approaches, most weren’t practical for productione.g. needed obj space normal maps, separate shader for fins and shells, very expensive ray prism intersection costs, etcCurrent implementation is based on principle of barycentric correspondence (JES07) Allows tracing ray in obj space and map it back into texture space
Transform vertices and extrude – VS: Output the current vertex + an extruded version (position, view vector). Generate prisms (do not split into tetrahedra) and setup clip planes – GS: Generally prism sides are bilinear patches; we approximate them by a conservative plane. Note to IHVs: Emitting per-triangle constants would be nice! Ray marching – PS: Compute the intersection of the view ray with the prism in WS, translate to texture space via barycentric correspondence. Use the resulting texture uv and height for entry and exit to trace the height field. Compute the final uv and selectively discard the pixel (viewer below height map; view ray leaving prism before hitting terrain). Lots of pressure on the PS, yet the GS is the bottleneck (prism gen).
Currently don’t fix up depth buffer for correct intersectionsDo fix up depth in separate target though which is used for deferred passes (shadows, fog, deferred decals, screen space occlusion, etc)Uses same self shadow algorithm that also runs atop of OBM and POMNext projects will make better usage of such tech
Initial goals: Everything moving on the screen: eg: grass, vegetation, cloth
Red simulated everyframe/ highest detail. Green time sliced update/lower detail (no shadows and such)
MCD12 – McDonald, J. “Don’t Throw it all Away”, 2012. Efficient buffer management: Resulting meshes can vary in size per frame, e.g. the player walking/looking in different directions can result in more/less vegetation being visible. Large pools for dynamic IB/VB. Each maintains two free lists (usable and pending). Each item in the pending list is moved to the main free list as soon as a GPU query guarantees the GPU is done with the pool* (done with rendering).
Efficient scheduling: Patch instances are divided into small groups. A sim job is kicked off for each group in the main thread. The DP in the render thread has a blocking wait for the sim job (this gives a full frame of time). The job is considered low-priority (= higher priority jobs run before it in the work queue). *No copies at all, store directly. Important: Avoid unnecessary copies, skin directly to the final destination. Reduce throughput and memory requirements (we used half & fixed point precision everywhere), e.g. velocity for sim.
Alpha tested geometry, literally everywhere. The worst case scenario for RSX due to fairly poor z-cull; the Xbox 360 outperformed the PS3 here by 2x. Also troublesome for MSAA. We prototyped alternatives (e.g. geometry based) but art hated them. End solution: keep it simple. G-Buffer stage minimalistic. Consoles: mostly outputting vertex data. Surface coverage minimized on the art side. 1 cycle fragment program on RSX + an extra cycle due to the clip requirement.
Just gave a combo of options; let gamers pick their favorite
*alpha tested geometry included*custom coverage mask allows for nifty tricks: e.g. Selective alpha test Super-Sampling, custom ATOC, fancier lod dissolves
*If nothing else works due to already crazy stencil usage – you’ll have to use the poor man version via clip
Custom Per-Sample Mask rejecting similar samples, via a depth/normal threshold. One additional little trick we also do: tag the entire quad instead of just the pixel; from our profiling this helps stencil culling efficiency (due to better spatial coherency => entire quad rejected/accepted) – on average about a 1ms saving.
(Tip from Thibieroz) EvaluateAttributeAtSample vs DDX/DDY – DDX/Y are TEX instructions; using EvaluateAttribute will likely perform better.
Motion blur and Depth of FieldBoth done at pixel frequencyComposited into MSAA buffer after