A Certain Slant of Light - Past, Present and Future Challenges of Global Illu... - Electronic Arts / DICE
Global illumination (GI) has been an ongoing quest in games. The perpetual tug-of-war between visual quality and performance often forces developers to take the latest and greatest from academia and tailor it to push the boundaries of what has been realized in a game product. Many elements need to align for success, including image quality, performance, scalability, interactivity, ease of use, as well as game-specific and production challenges.
First we will paint a picture of the current state of global illumination in games, addressing how the state of the union compares to the latest and greatest research. We will then explore various GI challenges that game teams face from the art, engineering, pipelines and production perspective. The games industry lacks an ideal solution, so the goal here is to raise awareness by being transparent about the real problems in the field. Finally, we will talk about the future. This will be a call to arms, with the objective of uniting game developers and researchers on the same quest to evolve global illumination in games from being mostly static, or sometimes perceptually real-time, to fully real-time.
This presentation was given at SIGGRAPH 2017 by Colin Barré-Brisebois (EA SEED) as part of the Open Problems in Real-Time Rendering course.
Talk by Fabien Christin from DICE at GDC 2016.
Designing a big city that players can explore by day and by night, while improving on the unique visual style of the first Mirror's Edge game, isn't an easy task.
In this talk, the tools and technology used to render Mirror's Edge: Catalyst will be discussed. From the physical sky to the reflection tech, the speakers will show how they tamed the new Frostbite 3 PBR engine to deliver realistic images with stylized visuals.
They will talk about the artistic and technical challenges they faced and how they tried to overcome them, from the simple light settings and Enlighten workflow to character shading and color grading.
Takeaway
Attendees will gain insight into the technical and artistic techniques used to create a dynamic time-of-day system with updating radiosity and reflections.
Intended Audience
This session is targeted to game artists, technical artists and graphics programmers who want to know more about Mirror's Edge: Catalyst rendering technology, lighting tools and shading tricks.
Optimizing the Graphics Pipeline with Compute, GDC 2016 - Graham Wihlidal
With further advancement in the current console cycle, new tricks are being learned to squeeze the maximum performance out of the hardware. This talk will present how the compute power of the console and PC GPUs can be used to improve the triangle throughput beyond the limits of the fixed function hardware. The discussed method shows a way to perform efficient "just-in-time" optimization of geometry, and opens the way for per-primitive filtering kernels and procedural geometry processing.
Takeaway:
Attendees will learn how to preprocess geometry on-the-fly per frame to improve rendering performance and efficiency.
Intended Audience:
This presentation is targeting seasoned graphics developers. Experience with DirectX 12 and GCN is recommended, but not required.
Rendering Technologies from Crysis 3 (GDC 2013) - Tiago Sousa
This talk covers changes in CryENGINE 3 technology during 2012, with DX11-related topics such as moving to deferred rendering while maintaining backward compatibility on a multiplatform engine, massive vegetation rendering, and MSAA support and how to deal with its common visual artifacts, among other topics.
Taking Killzone Shadow Fall Image Quality Into The Next Generation - Guerrilla
This talk focuses on the technical side of Killzone Shadow Fall, the platform-exclusive launch title for PlayStation 4.
We present the details of several new techniques that were developed in the quest for next-generation image quality, and the talk uses key locations from the game as examples. We discuss interesting aspects of the new content pipeline, the next-gen lighting engine, the usage of indirect lighting, and various shadow rendering optimizations. We also describe the details of volumetric lighting, the real-time reflections system, and the new anti-aliasing solution, and include some details about the image-quality-driven streaming system. A common and very important theme of the talk is temporal coherency and how it was utilized to reduce aliasing and to improve rendering quality and image stability above the baseline 1080p resolution seen in other games.
Talk by Yuriy O’Donnell at GDC 2017.
This talk describes how Frostbite handles rendering architecture challenges that come with having to support a wide variety of games on a single engine. Yuriy describes their new rendering abstraction design, which is based on a graph of all render passes and resources. This approach allows implementation of rendering features in a decoupled and modular way, while still maintaining efficiency.
A graph of all rendering operations for the entire frame is a useful abstraction. The industry can move away from "immediate mode" DX11-style APIs to a higher-level system that allows simpler code and efficient GPU utilization. Attendees will learn how it worked out for Frostbite.
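As a rough illustration of the render-graph idea (not Frostbite's actual FrameGraph API; all names below are made up), passes can declare the virtual resources they read and write, and the graph can then cull any pass that never contributes to the final output:

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical minimal frame graph: each pass declares the virtual resources
// it reads and writes. Passes that do not contribute, directly or
// transitively, to the final output are culled before execution.
struct Pass {
    std::string name;
    std::vector<std::string> reads;
    std::vector<std::string> writes;
};

// Returns the names of the passes needed to produce `target`, walking the
// dependency graph backwards from the final resource.
std::set<std::string> cullPasses(const std::vector<Pass>& passes,
                                 const std::string& target) {
    std::set<std::string> neededResources = {target};
    std::set<std::string> neededPasses;
    // Iterate until no new passes are added (handles any declaration order).
    for (bool changed = true; changed;) {
        changed = false;
        for (const Pass& p : passes) {
            if (neededPasses.count(p.name)) continue;
            for (const std::string& w : p.writes) {
                if (neededResources.count(w)) {
                    neededPasses.insert(p.name);
                    neededResources.insert(p.reads.begin(), p.reads.end());
                    changed = true;
                    break;
                }
            }
        }
    }
    return neededPasses;
}
```

Because features only describe their inputs and outputs, they stay decoupled from each other while the graph retains a global view of the frame for scheduling and resource aliasing.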
The presentation describes the physically based lighting pipeline of Killzone: Shadow Fall, a PlayStation 4 launch title. The talk covers the studio's transition to a new asset creation pipeline based on physical properties. It also describes the light rendering systems used in the new 3D engine, built from the ground up for the upcoming PlayStation 4 hardware. A novel real-time lighting model simulating physically accurate area lights will be introduced, as well as a hybrid ray-traced/image-based reflection system.
We believe that physically based rendering is a viable way to optimize asset creation pipeline efficiency and quality. It also enables the rendering quality to reach a new level that is highly flexible depending on art direction requirements.
Course presentation at SIGGRAPH 2014 by Charles de Rousiers and Sébastien Lagarde of Electronic Arts about transitioning the Frostbite game engine to physically based rendering.
Make sure to check out the 118-page course notes at: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/
During the last few months, we have revisited the concept of image quality in Frostbite. The core of our approach was to be as close as possible to a cinematic look. We used the concept of reference to evaluate the accuracy of produced images. Physically based rendering (PBR) was the natural way to achieve this. This talk covers all the different steps needed to switch a production engine to PBR, including the small details often bypassed in the literature.
The state of the art in real-time PBR techniques allowed us to achieve good overall results, but not without production issues. We present some techniques for improving convolution time for image-based reflections, proper ambient occlusion handling, and coherent lighting units, which are mandatory for level editing.
Moreover, we have managed to reduce the quality gap, highlighted by our systematic reference comparison, in particular related to rough material handling, glossy screen space reflection, and area lighting.
The technical part of PBR is crucial for achieving good results, but represents only the tip of the iceberg. Frostbite has become the de facto high-end game engine within Electronic Arts and is now used by a large number of game teams. Moving all these game teams from old-fashioned lighting to PBR has required a lot of education, which was done in parallel with the technical development. We have provided editing and validation tools to help the transition of art production. In addition, we have built a flexible material parametrization framework to adapt to the various authoring tools and game teams' requirements.
A technical deep dive into the DX11 rendering in Battlefield 3, the first title to use the new Frostbite 2 Engine. Topics covered include DX11 optimization techniques, efficient deferred shading, high-quality rendering and resource streaming for creating large and highly-detailed dynamic environments on modern PCs.
Secrets of CryENGINE 3 Graphics Technology - Tiago Sousa
In this talk, the authors give an overview of the deferred lighting approach used in CryENGINE 3, along with an in-depth description of the many techniques involved. Original file and videos at http://crytek.com/cryengine/presentations
In this technical presentation Johan Andersson shows how the Frostbite 3 game engine is using the low-level graphics API Mantle to deliver significantly improved performance in Battlefield 4 on PC and future games from Electronic Arts. He will go through the work of bringing over an advanced existing engine to an entirely new graphics API, the benefits and concrete details of doing low-level rendering on PC and how it fits into the architecture and rendering systems of Frostbite. Advanced optimization techniques and topics such as parallel dispatch, GPU memory management, multi-GPU rendering, async compute & async DMA will be covered as well as sharing experiences of working with Mantle in general.
This session presents a detailed programmer oriented overview of our SPU based shading system implemented in DICE's Frostbite 2 engine and how it enables more visually rich environments in BATTLEFIELD 3 and better performance over traditional GPU-only based renderers. We explain in detail how our SPU Tile-based deferred shading system is implemented, and how it supports rich material variety, High Dynamic Range Lighting, and large amounts of light sources of different types through an extensive set of culling, occlusion and optimization techniques.
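The core of the tile-based light culling described above can be sketched on the CPU. This is a simplified 2D screen-space overlap test only; the shipped system also tests per-tile depth bounds, handles many light types, and runs the work on SPUs:

```cpp
#include <vector>

// Illustrative tile-based light culling in screen space: for each fixed-size
// tile, collect the point lights whose projected bounding circle overlaps the
// tile rectangle. Shading then only evaluates each tile's short light list.
struct Light { float x, y, radius; };  // screen-space position and radius

std::vector<std::vector<int>> cullLightsPerTile(
        const std::vector<Light>& lights,
        int screenW, int screenH, int tileSize) {
    int tilesX = (screenW + tileSize - 1) / tileSize;
    int tilesY = (screenH + tileSize - 1) / tileSize;
    std::vector<std::vector<int>> perTile(tilesX * tilesY);
    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            float minX = float(tx * tileSize), maxX = minX + float(tileSize);
            float minY = float(ty * tileSize), maxY = minY + float(tileSize);
            for (int i = 0; i < int(lights.size()); ++i) {
                // Closest point on the tile rectangle to the light centre.
                float cx = lights[i].x < minX ? minX
                         : (lights[i].x > maxX ? maxX : lights[i].x);
                float cy = lights[i].y < minY ? minY
                         : (lights[i].y > maxY ? maxY : lights[i].y);
                float dx = lights[i].x - cx, dy = lights[i].y - cy;
                if (dx * dx + dy * dy <= lights[i].radius * lights[i].radius)
                    perTile[ty * tilesX + tx].push_back(i);
            }
        }
    }
    return perTile;
}
```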
Graphics Gems from CryENGINE 3 (SIGGRAPH 2013) - Tiago Sousa
This lecture covers rendering topics related to Crytek's latest engine iteration, the technology which powers titles such as Ryse, Warface, and Crysis 3. Among the covered topics, Sousa presented SMAA 1TX, an update featuring a robust and simple temporal anti-aliasing component; performant and physically plausible camera-related post-processing techniques, such as motion blur and depth of field, were also covered.
SIGGRAPH 2016 - The Devil is in the Details: idTech 666 - Tiago Sousa
A behind-the-scenes look into the latest renderer technology powering the critically acclaimed DOOM. The lecture will cover how the technology was designed to balance visual quality and performance. Numerous topics will be covered, among them details about the lighting solution, techniques for decoupling shading costs by frequency, and GCN-specific approaches.
This talk provides additional details around the hybrid real-time rendering pipeline we developed at SEED for Project PICA PICA.
At Digital Dragons 2018, we presented how leveraging Microsoft's DirectX Raytracing enables intuitive implementations of advanced lighting effects, including soft shadows, reflections, refractions, and global illumination. We also dove into the unique challenges posed by each of those domains, discussed the tradeoffs, and evaluated where raytracing fits in the spectrum of solutions.
In this AMD technology presentation from the 2014 Game Developers Conference in San Francisco (March 17-21), Bill explains some of the ways the vertex shader can be used to improve performance by taking a fast path through it rather than generating vertices with other parts of the pipeline. Check out more technical presentations at http://developer.amd.com/resources/documentation-articles/conference-presentations/
Talk by Graham Wihlidal (Frostbite Labs) at GDC 2017.
Checkerboard rendering is a relatively new technique, popularized recently by the introduction of the PlayStation 4 Pro. Many modern game engines are adding support for it right now, and in this talk, Graham will present an in-depth look at the new implementation in Frostbite, which is used in shipping titles like 'Battlefield 1' and 'Mass Effect Andromeda'. Despite being conceptually simple, checkerboard rendering requires a deep integration into the post-processing chain, in particular temporal anti-aliasing, dynamic resolution scaling, and poses various challenges to existing effects. This presentation will cover the basics of checkerboard rendering, explain the impact on a game engine that powers a wide range of titles, and provide a detailed look at how the current implementation in Frostbite works, including topics like object id, alpha unrolling, gradient adjust, and a highly efficient depth resolve.
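At its simplest, the checkerboard idea amounts to freshly shading only half the pixels each frame, alternating with frame parity, and carrying the other half over from history. A toy sketch on a greyscale buffer (real implementations, including the one in the talk, work on 2x2 quads with sample-position tricks, motion-vector reprojection and a temporal resolve, none of which is shown here):

```cpp
#include <vector>

// Toy illustration of checkerboard rendering on a WxH greyscale buffer: each
// frame only one colour of the checkerboard (selected by frame parity) is
// freshly shaded; the complementary half is carried over from last frame.
std::vector<float> checkerboardFrame(const std::vector<float>& history,
                                     int w, int h, int frameIndex,
                                     float (*shade)(int x, int y)) {
    std::vector<float> out = history;  // start from last frame's result
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (((x + y) & 1) == (frameIndex & 1))  // this frame's cells
                out[y * w + x] = shade(x, y);
    return out;
}
```

Over two consecutive frames every pixel receives a fresh shading result, which is why the technique leans so heavily on temporal anti-aliasing for stability.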
This talk presents the approach Frostbite took to add support for HDR displays. It will summarize Frostbite's previous post-processing pipeline and what the issues were. Attendees will learn the decisions made to fix these issues, improve the color grading workflow, and support high-quality HDR and SDR output. This session will detail the display mapping used to implement the "grade once, output many" approach to targeting any display, and why an ad-hoc approach was chosen over filmic tone mapping. Frostbite retained 3D-LUT-based grading flexibility, and the accuracy differences of computing these in decorrelated color spaces will be shown. This session will also cover the main issues found in early adopter games, differences between HDR standards, optimizations to achieve performance parity with the legacy path, and why supporting HDR can also improve the SDR version.
Takeaway
Attendees will learn how and why Frostbite chose to support High Dynamic Range (HDR) displays. They will understand the issues faced and how these were resolved. This talk will be useful for those yet to support HDR, and will provide discussion points for those who already do.
Intended Audience
The intended audience is primarily rendering engineers, technical artists and artists; specifically those who focus on grading and lighting and those interested in HDR displays. Ideally attendees will be familiar with color grading and tonemapping.
Presentation by Andrew Hamilton and Ken Brown from DICE at GDC 2016.
Photogrammetry has started to gain steam within the games industry in recent years. At DICE, the technique was first used on Battlefield, and the technology and workflow were fully embraced for Star Wars: Battlefront. This talk will cover their research and development, planning and production, techniques, key takeaways and plans for the future. The speakers will cover photogrammetry as a technology, but more than that, show that it's not a magic bullet but instead a tool like any other that can be used to help achieve your artistic vision and craft.
Takeaway
Come and learn how (and why) photogrammetry was used to create the world of Star Wars. This talk will cover Battlefront's use of the technology from pre-production to launch, as well as some of their philosophies around photogrammetry as a tool. Many visuals will be included!
Intended Audience
A content-creator-friendly talk intended for pretty much any developer, especially those involved in 3D content creation. It is not a technical talk focused on the code or engineering of photogrammetry. The speakers will quickly cover all the basics, so no prerequisite knowledge is required.
Talk by Johan Andersson (DICE/EA) in the Beyond Programmable Shading Course at SIGGRAPH 2012.
The other talks in the course can be found here: http://bps12.idav.ucdavis.edu/
Game engines have long been at the forefront of taking advantage of the ever-increasing parallel compute power of both CPUs and GPUs. This talk is about how parallel compute is utilized in practice on multiple platforms today in the Frostbite game engine, and what we think the parallel programming models, hardware and software in the industry should look like in the next 5 years to help us make the best games possible.
Technical talk from the AMD GPU14 Tech Day by Johan Andersson of the Frostbite team at DICE/EA about Battlefield 4 on PC, the first title to use 'Mantle' - a very high-performance, low-level graphics API developed in close collaboration between AMD and DICE/EA to get the absolute best performance and experience in Frostbite games on PC.
Talk from SIGGRAPH 2010 and the Beyond Programmable Shading course.
Also see publications.dice.se for more material and other DICE talks.
With the highest-quality video options, Battlefield 3 renders its Screen-Space Ambient Occlusion (SSAO) using the Horizon-Based Ambient Occlusion (HBAO) algorithm. For performance reasons, the HBAO is rendered in half resolution using half-resolution input depths. The HBAO is then blurred in full resolution using a depth-aware blur. The main issue with such low-resolution SSAO rendering is that it produces objectionable flickering for thin objects (such as alpha-tested foliage) when the camera and/or the geometry are moving. After a brief recap of the original HBAO pipeline, this talk describes a novel temporal filtering algorithm that fixed the HBAO flickering problem in Battlefield 3 with a 1-2% performance hit in 1920x1200 on PC (DX10 or DX11). The talk includes algorithm and implementation details on the temporal filtering part, as well as generic optimizations for SSAO blur pixel shaders. This is a joint work between Louis Bavoil (NVIDIA) and Johan Andersson (DICE).
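A per-pixel temporal filter in the spirit of the talk might blend the current AO estimate with reprojected history and reject the history when depths no longer match (a disocclusion). The blend weight and depth tolerance below are made-up values for illustration, not the shipped ones:

```cpp
#include <cmath>

// Temporal SSAO filter sketch: exponential accumulation of the AO signal,
// with history rejection based on relative depth difference so that newly
// revealed geometry restarts accumulation instead of ghosting.
float temporalAO(float currentAO, float historyAO,
                 float currentDepth, float historyDepth) {
    const float kDepthTolerance = 0.01f;  // relative depth difference allowed
    const float kHistoryWeight = 0.9f;    // how much of the history to keep
    float relDiff = std::fabs(currentDepth - historyDepth) /
                    (currentDepth > 1e-6f ? currentDepth : 1e-6f);
    if (relDiff > kDepthTolerance)
        return currentAO;  // history invalid: restart accumulation
    return kHistoryWeight * historyAO + (1.0f - kHistoryWeight) * currentAO;
}
```

Accumulating over many frames is what suppresses the frame-to-frame flicker of a half-resolution AO signal on thin geometry such as foliage.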
Audio for Multiplayer & Beyond - Mixing Case Studies From Battlefield: Bad Co...Electronic Arts / DICE
Learnings from creating soundscapes for online multiplayer games, with experiences from the Battlefield series and an emphasis on Battlefield: Bad Company.
Presentation from DICE Coder's Day (November 2010) by Andreas Fredriksson in the Frostbite team.
Goes into detail about scope stacks, a systems programming tool for memory layout that provides:
• Deterministic memory map behavior
• Single-cycle allocation speed
• Regular C++ object life cycle for objects that need it
This makes it very suitable for games.
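To make the idea concrete, here is a minimal sketch of a scope stack on top of a linear allocator, written under the assumptions described above (the destructor/finalizer chain a full implementation would carry is omitted for brevity; all names are illustrative, not the talk's actual code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <new>

// Minimal linear allocator: one pointer bump per allocation.
class LinearAllocator {
public:
    LinearAllocator(void* memory, size_t size)
        : base_(static_cast<uint8_t*>(memory)), ptr_(base_), end_(base_ + size) {}
    void* allocate(size_t size, size_t align = 16) {
        uint8_t* p = reinterpret_cast<uint8_t*>(
            (reinterpret_cast<uintptr_t>(ptr_) + (align - 1)) & ~uintptr_t(align - 1));
        if (p + size > end_) return nullptr;  // out of memory
        ptr_ = p + size;
        return p;
    }
    uint8_t* mark() const { return ptr_; }
    void rewind(uint8_t* mark) { ptr_ = mark; }  // frees everything allocated after 'mark'
private:
    uint8_t* base_;
    uint8_t* ptr_;
    uint8_t* end_;
};

// A scope records the allocator position on entry and rewinds on exit,
// giving deterministic, LIFO lifetimes for everything allocated inside it.
class ScopeStack {
public:
    explicit ScopeStack(LinearAllocator& a) : alloc_(a), mark_(a.mark()) {}
    ~ScopeStack() { alloc_.rewind(mark_); }
    template <typename T>
    T* newObject() {
        void* mem = alloc_.allocate(sizeof(T), alignof(T));
        return mem ? new (mem) T() : nullptr;  // placement-new into the scope's memory
    }
private:
    LinearAllocator& alloc_;
    uint8_t* mark_;
};
```

This gives the deterministic memory map (allocation order fixes the layout) and single-pointer-bump allocation cost the bullet points describe.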
For this year's keynote at High Performance Graphics 2018, Colin Barré-Brisebois from SEED discussed the state of the art in real-time game ray tracing. He explored some of the connections between offline and real-time game ray tracing, and presented some of the open problems. Colin exposed a few potential solutions to those problems, and also proposed a call-to-arms on topics where the ray tracing research community and the games industry should unite in order to solve such open problems.
2. Intro
What does an advanced game engine real-time rendering
pipeline look like?
What are some of the key challenges & open problems?
What are some of the next steps to improve on?
From both software & hardware perspectives
6. Improvements since 2010 & 2012
Image quality & authoring: massive transition to PBR
Reflections: SSR and perspective-correct IBLs
Antialiasing: TAA instead of MSAA
Gen4 consoles (PS4 & XB1) as new minspec
Compute shader use prevalent – create your own pipelines!
7. Improvements since 2010 & 2012 (cont.)
New explicit control APIs
Mantle, Metal, DX12, Vulkan
Much-needed change & major step forward
Not many improvements to compute & shaders
Programmability
Conservative raster, min/max texture filter
"Need a virtual data-parallel ISA" -> SPIR-V!
"Render target read/modify/write" -> Raster Ordered Views
Sparse resources
8. Pipeline of today – key themes
Non-orthogonality gets in the way – can we get to a more
unified pipeline?
Complexity is continuing to increase
Increasing quality in a scalable way
10. Transparencies – sorting
Can’t mix different transparent surfaces & volumes
Particles, meshes, participating media, raymarching
Can’t render strict front to back to get correct sorting
Most particles can be sorted, have to use uber shaders
Constrains game environments
Restricts games from using more volumetric
rendering
(diagram: particles, meshes, volumetric)
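The sorting constraint exists because the standard "over" blend is order-dependent; a minimal single-channel sketch (illustrative, not engine code) makes this visible:

```cpp
#include <cassert>
#include <cmath>

struct Rgba { float r, a; };  // one color channel + alpha is enough to show the point

// Standard back-to-front "over" blend: src is composited over dst.
Rgba over(Rgba src, Rgba dst) {
    Rgba out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}
```

Swapping the operands changes the result, which is why transparent surfaces must be depth-sorted (or rendered with OIT) before blending.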
11. Transparencies – sorting solution
Render everything with Order-Independent Transparency (OIT)
Use Raster Ordered Views (DX12 FL: Haswell & Maxwell)
Not available on consoles = most games stuck with no OIT
Scalable to mix all types of transparencies with high quality?
Transparent meshes (windows, foliage): 1-50x overdraw
Particles: 10-200x overdraw
Volume rendering (ray-marched)
Able to combine with variable resolution rendering?
Most particles & participating media do not need to be shaded at full resolution
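Where ROVs are unavailable, one widely used order-independent approximation is weighted blended OIT [McGuire & Bavoil 2013]; a CPU-side, single-channel sketch of the resolve (the depth weight is a simplified version of one of the paper's suggested heuristics):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Fragment { float color, alpha, depth; };  // single channel for clarity

// Weighted blended OIT resolve: fragments may arrive in ANY order,
// yet the result is identical because only commutative sums/products are used.
float resolveWBOIT(const std::vector<Fragment>& frags, float background) {
    float accumColor = 0.0f, accumWeight = 0.0f, transmittance = 1.0f;
    for (const Fragment& f : frags) {
        // Depth-based weight: nearer fragments count more (heuristic, tunable).
        float w = f.alpha * std::max(0.01f, 1.0f / (1e-5f + f.depth * f.depth));
        accumColor += f.color * w;
        accumWeight += w;
        transmittance *= (1.0f - f.alpha);  // order-independent product
    }
    float avg = (accumWeight > 0.0f) ? accumColor / accumWeight : 0.0f;
    return avg * (1.0f - transmittance) + background * transmittance;
}
```

The tradeoff matches the slide's theme: it scales to heavy particle overdraw without sorting, but is an approximation rather than the exact per-pixel sort ROVs enable.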
12. Defocus & motion blur - opaque
Works okay in-game on opaque surfaces
Render out velocity vectors
Calc CoC from z
Apply post-process
But not correct or ideal
Leakage
Disocclusion
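The "Calc CoC from z" step is typically a thin-lens formula; a sketch (parameter names and units are illustrative assumptions, not a specific engine's code):

```cpp
#include <cassert>
#include <cmath>

// Thin-lens circle of confusion diameter for an object at view depth z.
// aperture and focalLength are in the same units as z (e.g. meters).
// CoC is zero at the focus distance and grows as objects move away from it.
float circleOfConfusion(float z, float focusDist, float focalLength, float aperture) {
    return aperture * focalLength * std::fabs(focusDist - z)
         / (z * (focusDist - focalLength));
}
```

The post-process then gathers/scatters each pixel over a kernel sized by its CoC, which is exactly where the leakage and disocclusion artifacts above come from: the blur happens after visibility has already been resolved.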
13. Defocus & motion blur - transparencies
Transparent surfaces are even more problematic
Esp. motion blur
Typically simulated with stretched geometry (works mostly for sparks)
Can only skip or smear everything with standard post-processes
Fast moving particles should also have internal motion blur
14. Defocus & motion blur - transparencies
Blend velocity vectors & CoC for transparencies?
Feed into the post-process passes
Post-processes should also be depth-aware - use OIT approx.
transmittance function
Still not correct, but could prevent the biggest artifacts
Ideal: directly sample defocus & motion blur in rendering
But how? Stochastic raster? Raytracing?
Pre-filtered volumetric representations?
15. Forward vs deferred
Most high-end games & engines use deferred shading for opaque geometry
Better quad utilization
Separation of material property laydown & lighting shaders
Would like to render more as transparent, which has to use forward
Thin geometry: hair & fur
Proxy geometry (foliage) with alpha-blending for antialiasing
But forward rendering is much more limiting in compositing
No SSAO
No screen-space reflections
No decal blending of individual channels (e.g. albedo)
No screen-space sub-surface scattering
16. Forward vs deferred (cont.)
Can we extend either forward or deferred to be more orthogonal?
Use world-space data structures instead of screen-space
Be able to query & calc AO, reflections, decals while forward shading
Texture shader to convolve SSS lighting
Massive forward uber shaders that can do everything
Render opaque & transparent with deep deferred shading?
Store all layers of a pixel, including transparents, in a deep gbuffer
Unbounded memory
Be able to query neighbors (AO)
Be able to render into with blending (decals)
18. Rendering pipeline complexity
Recent improvements that reduce complexity
New APIs are more explicit – less of a black box
DX11 hardware & compute shaders now minspec
Hardware trend towards DX12 feature level
19. Rendering pipeline complexity
Challenges:
Sheer amount of rendering systems & passes
Making architectural choices of what techniques & pipeline to use
Shader permutations & uber shaders
Compute shaders still very limiting – no nested dynamic parallelism & pipes
Mobile: TBDR vs immediate mode
22. Architectural decisions
Selecting which techniques to develop & invest in is a challenge
Critical to create visual look of a game
Non-orthogonal choices and tradeoffs
Difficult to predict the moving future of hardware, games and authoring
Can be paralyzing with a big advanced engine rendering pipeline
Exponential scaling with amount of systems & techniques interacting
Difficult to redesign and move large passes
Can result in a lot of refactoring & cascading effects to the overall pipeline
Backwards compatibility with existing content
Easier if passes & systems can be made more decoupled
23. What can we do to reduce complexity?
A more unified pipeline would certainly help!
Such as with OIT
Or in the long term: native handling of defocus & motion blur
Improve GPU performance – simplify rendering systems
Much of the complexity comes from optimizations for performance
Could sacrifice a bit of performance for increased orthogonality, but not much
We have a real-time constraint = get the most out of our 16 ms/f (VR: 4 ms/f!)
Raytrace & raymarch more
Easier to express complex rendering
Warning: moves the complexity to data structures and GPU execution instead
Not practical overall replacement / unification
Use as complement – more & more common (SSR, volume rendering, shadows?)
24. What can we do to reduce complexity?
Make it easier to drive the graphics & compute
CPU/GPU communication – C++ on both sides (and more languages)
Device enqueue & nested data parallelism
Increase flexibility, expressiveness & modularity of building pipelines
Build a specialized renderer
Focus in on very specific rendering techniques & look
Typically tied to a single game
E.g. The Tomorrow Children, Dreams
Build engines, tools & infrastructure to build general renderers
Handle wide set of environments, content and techniques
Modular layers to easily have all the passes & techniques interoperate
Shader authoring is also key
25. Uber shaders
Example cases:
Forward shaders (lights, fog, skinning, etc)
Particles (to be able to sort without OIT) – want to use individual shaders instead
Terrain layers [Andersson07] – want to use massive uber shaders
Why they can be a problem:
Authoring: Massive shaders with all possible paths in it, no separate shader linker
Performance: Large GPR pressure affects entire shader
Performance: Flow control overhead
Classic approach: break out into separate shader permutations
Static CPU selection of shader/PSO to use – limited flexibility
Can end up creating a huge number of permutations = long compile/load times.
Worse with new APIs! PSO explosion
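The explosion is simple combinatorics: n independent on/off features yield 2^n compiled variants. A sketch of the classic static-key scheme (all names illustrative):

```cpp
#include <cassert>
#include <cstdint>

// n independent boolean shader features -> 2^n possible compiled permutations.
uint64_t permutationCount(unsigned booleanFeatures) {
    return 1ull << booleanFeatures;
}

// Classic CPU-side selection: pack the feature bits into a key and compile
// (or look up) one shader/PSO per distinct key that content actually uses.
uint64_t permutationKey(bool skinning, bool fog, bool alphaTest) {
    return (uint64_t(skinning)  << 0)
         | (uint64_t(fog)       << 1)
         | (uint64_t(alphaTest) << 2);
}
```

With PSOs, each key also multiplies against render state (blend, depth, formats), which is why the new APIs make the explosion worse rather than better.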
26. Uber shaders – potential improvements
Shader function pointers
Define individual functions as own kernels
Select pointers to use per draw call
Part of ExecuteIndirect params
Ideal: Select pointers inside shader – not possible today
Optimization: VS selects pointers PS will use?
What would the consequences be for the GPU?
I$ stalls, register allocation, coherency, more?
More efficient GPU execution of uber shaders?
Shaders with highly divergent flow & sections with very different GPR usage
Hardware & execution model that enables resorting & building coherency?
29. Real-time rendering has gotten quite far!
In order to get further, want to:
1. Get that last 5-10% quality in our environments
to reach photorealism
NFS photo reference
30. Real-time rendering has gotten quite far!
In order to get further, want to:
1. Get that last 5-10% quality in our environments
to reach photorealism
2. Be able to build & render new environments
that we haven’t been able to before
Glass houses!
31. Real-time rendering has gotten quite far!
In order to get further, want to:
1. Get that last 5-10% quality in our environments
to reach photorealism
2. Be able to build & render new environments
that we haven’t been able to before
Dreams (MediaMolecule)
32. Difficult areas
Hair & fur
OIT, overdraw, LOD, quad overshading, deep shadows
Foliage
OIT, overdraw, LOD, geometry throughput, lighting, translucency, AO
Fluids
LOD & scalability, simulation, overall rendering
VFX
Need volumetric representation & lighting
Related to [Hillaire15]
Pompeii movie
36. Difficult areas (cont.)
Correct shadows on everything
Including area lights & shadows!
Extra important with PBR to prevent leakage
Geometry throughput, CPU overhead, filtering, LOD
Reflections
Hodgepodge of techniques today
Occlusion of specular critical
See Mirror’s Edge talk [Johansson15]
Antialiasing
See [Salvi15] next
Mirror’s Edge: Catalyst concept
37. Quality challenges
Getting the last 5-10% quality can be very expensive
While covering a relatively small portion of the screen
Example: hair & fur rendering
Improving GPUs in some of these areas may not benefit “ordinary”
rendering
How to build truly scalable solutions
Example: Rendering, lighting and shadowing a full forest
Level-of-detail is a key challenge for most techniques to make them
practical
38. Scalable solutions – screen-space
Sub-surface scattering went from texture- to screen-space
Orders of magnitude faster
Implicitly scalable + no per-object tracking
Not perfect, but made it practical & mainstream
Volumetric rendering to view frustum 3d texture
Froxels! See [Wronski14] and [Hillaire15]
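Froxel grids typically distribute their depth slices exponentially so resolution is spent near the camera; a sketch of the depth-to-slice mapping (this exact parameterization is an illustrative assumption; see [Wronski14] and [Hillaire15] for the distributions actually used):

```cpp
#include <cassert>
#include <cmath>

// Map view-space depth z to a froxel slice index using an exponential
// distribution: slice boundaries are spaced by a constant depth *ratio*,
// so slices are thin near the camera and thick far away.
int froxelSlice(float z, float nearZ, float farZ, int numSlices) {
    float t = std::log(z / nearZ) / std::log(farZ / nearZ);  // 0 at near, 1 at far
    int s = static_cast<int>(t * numSlices);
    return s < 0 ? 0 : (s >= numSlices ? numSlices - 1 : s);  // clamp to valid range
}
```

Lighting and media properties are accumulated per froxel, then integrated front to back, which is what makes the technique implicitly scalable: cost is fixed by the grid, not by scene complexity.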
39. Scalable solutions – screen-space
Can one extend screen-space techniques further?
Render multiple depth layers to solve occlusion
Multi-layer deep gbuffers [Mara14]
Render cubemap to reach outside of frustum
Render lower resolution separate cubemap, slow
Render main view as cubemap with variable resolution?
Single geometry pass
Also for foveated rendering
40. Scalable solutions – pre-compute
Traditionally a strong cut off between pre-computed & runtime solutions
Believe this is going away more – techniques and systems have to scale &
cover more of the spectrum:
Offline pre-compute: Highest-quality
Load-time pre-compute: High-quality
Background compute: Medium-quality
Runtime
Want flexible tradeoffs depending on contexts
Artist live editing lighting
Gamer customizing in-game content
Background gameplay changes to the game environment
41. Scalable solutions – hierarchical geometry
Want to avoid wasteful brute force geometry rendering
Do your own culling, occlusion & LOD directly on the GPU
Finer granularity than CPU code
Engine can have more context and own spatial data structures
Combined with GPU information (for example HiZ)
Opportunities to extend the GPU pipeline?
Compute as frontend for graphics pipeline to accelerate
Avoid writing geometry out to memory
Good fit with procedural geometry systems as well
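The per-instance test such a GPU culling pass runs in parallel over all draws is itself simple; a CPU-side sketch of the sphere-vs-frustum rejection step (plane convention assumed: inward-facing normals; the HiZ occlusion test mentioned above is omitted):

```cpp
#include <cassert>

struct Plane { float nx, ny, nz, d; };  // plane equation: n.p + d >= 0 is "inside"

// Per-object test a GPU culling pass would evaluate per instance:
// reject the draw if its bounding sphere lies fully outside any frustum plane.
bool sphereInFrustum(const Plane planes[6], float cx, float cy, float cz, float radius) {
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].nx * cx + planes[i].ny * cy + planes[i].nz * cz + planes[i].d;
        if (dist < -radius) return false;  // fully behind this plane: cull
    }
    return true;  // visible or intersecting: keep the draw
}
```

On the GPU the surviving instances would be compacted into an indirect argument buffer (e.g. for ExecuteIndirect/multi-draw-indirect), so the CPU never touches per-object visibility.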
42. Takeaways
We’ve gotten very far in the last few years!
Big transitions: PBR, Gen4, Compute, explicit APIs
We are at the cusp of a beautiful future!
Build your own rendering pipelines & data structures
But which ones? All of them!
Need to reduce coupling & further evolve GPU execution models
43. Thanks to everyone who provided feedback!
Sébastien Hillaire (@sebhillaire)
Christina Coffin (@christinacoffin)
John White (@zedcull)
Aaron Lefohn (@aaronlefohn)
Colin Barré-Brisebois (@zigguratvertigo)
Sébastien Lagarde (@seblagarde)
Tomasz Stachowiak (@h3r2tic)
Andrew Lauritzen (@andrewlauritzen)
Jasper Bekkers (@jasperbekkers)
Yuriy O’Donnell (@yuriyodonnell)
Kenneth Brown
Natalya Tatarchuk (@mirror2mask)
Angelo Pesce (@kenpex)
David Reinig (@d13_dreinig)
Promit Roy (@promit_roy)
Rich Forster (@dickyjimforster)
Niklas Nummelin (@niklasnummelin)
Tobias Berghoff (@tobiasberghoff)
Morgan McGuire (@casualeffects)
Tom Forsyth (@tom_forsyth)
Eric Smolikowski (@esmolikowski)
Nathan Reed (@reedbeta)
Christer Ericson (@christerericson)
Daniel Collin (@daniel_collin)
Matias Goldberg (@matiasgoldberg)
Arne Schober (@khipu_kamayuq)
Dan Olson (@olson_dan)
Joshua Barczak (@joshuabarczak)
Bart Wronski (@bartwronsk)
Krzysztof Narkowicz (@knarkowicz)
Julien Guertault (@zavie)
Sander van Rossen (@logicalerror)
Lucas Hardi (@lhardi)
Tim Foley (@tangentvector)
45. References
[Andersson07] Terrain rendering in Frostbite using Procedural Shader Splatting
[Hillaire15] Physically Based and Unified Volumetric Rendering in Frostbite
[Wronski14] Volumetric fog: Unified, compute shader based solution to
atmospheric scattering
[Salvi15] Anti-Aliasing: Are We There Yet?
[Mara14] Fast Global Illumination Approximations on Deep G-Buffers
[Johansson15] Leap of Faith: The World of Mirror’s Edge Catalyst
Editor's Notes
But before that…. I did 2 talks around major challenges in real-time rendering in 2010 and 2012 as well.
The industry has spent (and is still spending) a lot of effort & energy on the transition to low-level explicit graphics APIs
Haven't been that many other changes in the rendering pipeline
Esp. compute hasn't improved much
But let’s go back to the rendering pipeline of today
Arbitrary ellipsoid / anisotropic filtering
These are the rendering passes we had in Frostbite 2 years ago for Battlefield 4.
Our pipeline now with PBR has even more passes and complexity
We’ve gotten really far with our real-time rendering results. Here are 2 screenshots and 2 photos of the same area from the new Need for Speed game. And it is getting quite difficult to tell which one is the photo and which one is the real-time render!
But what is left to do, what are we missing to get that last 5-10%?
It is also important to note that this is a specifically crafted scene, and as with all game development and real-time rendering we select & craft our worlds & visuals based on the limitations we know about. We avoid creating environments that we know we can’t build, won’t have the performance for, or that don’t fit together with the gameplay or other aspects.
Avoid perf cliffs, prefer linear scaling cost of systems