This lecture covers rendering topics related to Crytek's latest engine iteration, the technology that powers titles such as Ryse, Warface, and Crysis 3. Among the covered topics, Sousa presents SMAA 1TX, an update featuring a robust and simple temporal antialiasing component, as well as performant and physically plausible camera-related post-processing techniques such as motion blur and depth of field.
2. Advances in Real-Time Rendering course, Siggraph 2013 2
AGENDA
Anti-aliasing
Practical Deferred MSAA
Temporal Antialiasing: SMAA 1TX
Camera Post-Processing
Depth of Field
Motion Blur
Sharing results from ongoing research
Results not used in a shipped game yet
3. Advances in Real-Time Rendering course, Siggraph 2013 3
ANTIALIASING: DEFERRED MSAA REVIEW
The problem: Multiple passes + r/w from Multisampled RTs
DX 10.1 introduced SV_SampleIndex / SV_Coverage system value semantics.
Allows solving this via multipass, with separate pixel- and sample-frequency passes [Thibieroz11]
SV_SampleIndex
Forces pixel shader execution for each sub-sample and provides index of the sub-sample currently executed
Index can be used to fetch sub-sample from a Multisampled RT. E.g. FooMS.Load( UnnormScreenCoord, nSampleIndex)
SV_Coverage
Indicates to the pixel shader which sub-samples were covered during the raster stage.
Can also modify sub-sample coverage to output a custom coverage mask
With DX 11.0 compute tile-based deferred shading/lighting, MSAA is simpler
Loop through MSAA tagged sub-samples
4. Advances in Real-Time Rendering course, Siggraph 2013 4
DEFERRED MSAA: HEADS UP!
Simple theory, troublesome practice
At least with complex deferred renderers
Non-MSAA friendly code accumulates fast.
Breaks regularly, as new techniques are added without MSAA consideration
Even if it still works, very often you’ll need to pinpoint and fix non-MSAA-friendly techniques, as these introduce visual artifacts.
E.g. white/dark outlines, or no AA at all
Do it upfront. Retrofitting a renderer to support deferred MSAA is significant work
And it is very finicky
5. Advances in Real-Time Rendering course, Siggraph 2013 5
DEFERRED MSAA: CUSTOM RESOLVE & PER-SAMPLE MASK
Post G-Buffer, perform a custom MSAA resolve
Pre-resolves sample 0 for pixel-frequency passes such as lighting and other MSAA-dependent passes
In same pass create sub-sample mask (compare samples similarity, mark if mismatching)
Avoid default SV_COVERAGE, since it results in redundant processing on regions not requiring MSAA
[Comparison: default SV_Coverage vs. custom per-sample mask]
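As a hedged sketch (resource names, thread-group size, and the similarity threshold are assumptions, not the shipped CryENGINE code), the custom resolve + per-sample mask pass could look like this as an HLSL compute shader:

Texture2DMS<float4> SceneMS;        // multisampled G-buffer/lighting input
RWTexture2D<float4> SceneResolved;  // pre-resolved sample 0 output
RWTexture2D<uint>   PerSampleMask;  // 1 = sub-samples mismatch, needs per-sample shading

[numthreads(8, 8, 1)]
void CustomResolveCS(uint3 id : SV_DispatchThreadID)
{
    float4 s0 = SceneMS.Load(int2(id.xy), 0);
    uint mask = 0;
    [unroll] for (int s = 1; s < 4; ++s) // assumes 4x MSAA
    {
        float4 sn = SceneMS.Load(int2(id.xy), s);
        if (any(abs(sn - s0) > 0.01)) // similarity threshold is an assumption
            mask = 1; // tag pixel: at least one sub-sample differs from sample 0
    }
    SceneResolved[id.xy] = s0;
    PerSampleMask[id.xy] = mask;
}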
6. Advances in Real-Time Rendering course, Siggraph 2013 6
DEFERRED MSAA: STENCIL BATCHING [SOUSA13]
Batching per-sample stencil mask with regular stencil buffer usage
Reserve 1 bit from stencil buffer
Update with sub-sample mask
Tag entire pixel-quad instead of just single pixel -> improves stencil culling efficiency
Make use of the stencil read/write bitmasks to avoid per-sample bit override
StencilWriteMask = 0x7F
Restore whenever a stencil clear occurs
Not possible due to extreme stencil usage?
Use clip/discard
Extra overhead also from additional texture read for per-sample mask
7. Advances in Real-Time Rendering course, Siggraph 2013 7
DEFERRED MSAA: PIXEL AND SAMPLE FREQUENCY PASSES
Pixel Frequency Passes
Set stencil read mask to reserved bits for per-pixel regions (~0x80)
Bind pre-resolved (non-multisampled) targets SRVs
Render pass as usual
Sample Frequency Passes
Set stencil read mask to reserved bit for per-sample regions (0x80)
Bind multisampled targets SRVs
Index current sub-sample via SV_SAMPLEINDEX
Render pass as usual
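A minimal sketch of such a sample-frequency pass (resource names and the placeholder lighting are assumptions; the stencil read mask of 0x80 is configured on the API side so only per-sample pixels survive):

Texture2DMS<float4> GBufferAlbedoMS;
Texture2DMS<float4> GBufferNormalMS;

float4 LightingPerSamplePS(float4 pos : SV_Position, uint sampleIdx : SV_SampleIndex) : SV_Target
{
    // Declaring SV_SampleIndex forces one invocation per sub-sample;
    // Load() fetches the matching sub-sample from the multisampled RTs.
    int2 coord = int2(pos.xy);
    float4 albedo = GBufferAlbedoMS.Load(coord, sampleIdx);
    float3 n = normalize(GBufferNormalMS.Load(coord, sampleIdx).xyz * 2.0 - 1.0);
    float ndl = saturate(dot(n, normalize(float3(0.5, 1.0, 0.25)))); // placeholder directional light
    return float4(albedo.rgb * ndl, 1.0);
}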
8. Advances in Real-Time Rendering course, Siggraph 2013 8
DEFERRED MSAA: ALPHA TEST SSAA
Alpha testing requires ad hoc solution
Default SV_Coverage only applies to triangle edges
Create your own sub-sample coverage mask
E.g. check whether the current sub-sample passes the alpha test and set its bit
// Sub-sample offsets for 2x MSAA (nSampleCount == 2)
static const float2 vMSAAOffsets[2] = { float2(0.25, 0.25), float2(-0.25, -0.25) };
const float2 vDDX = ddx(vTexCoord.xy);
const float2 vDDY = ddy(vTexCoord.xy);
[unroll] for (int s = 0; s < nSampleCount; ++s)
{
    // Shift the UV to each sub-sample position via the screen-space derivatives
    float2 vTexOffset = vMSAAOffsets[s].x * vDDX + vMSAAOffsets[s].y * vDDY;
    float fAlpha = tex2D(DiffuseSmp, vTexCoord + vTexOffset).w;
    // Set the coverage bit for each sub-sample passing the alpha test
    uCoverageMask |= ((fAlpha - fAlphaRef) >= 0) ? (uint(0x1) << s) : 0;
}
[Comparison: alpha test SSAA disabled vs. enabled]
9. Advances in Real-Time Rendering course, Siggraph 2013 9
DEFERRED MSAA: PERFORMANCE SHORTCUTS
Deferred cascades sun shadow maps
Render shadows as usual at pixel frequency
Bilateral upscale during deferred shading composite pass
10. Advances in Real-Time Rendering course, Siggraph 2013 10
DEFERRED MSAA: PERFORMANCE SHORTCUTS (2)
Non-opaque techniques accessing depth (e.g. Soft-Particles)
The usual recommendation, tackling these at sample frequency, is fairly slow in real-world scenarios
Using max depth works OK for most cases and is N times faster
11. Advances in Real-Time Rendering course, Siggraph 2013 11
MSAA: PERFORMANCE SHORTCUTS (3)
Many games are also doing:
Skipping Alpha Test Super Sampling
Use alpha to coverage instead, or even no alpha test AA (let morphological AA tackle that)
Render only opaque with MSAA
Then render transparents without MSAA
Assuming HDR rendering: note that tone mapping is implicitly done post-resolve, resulting in loss of detail on high contrast regions
12. Advances in Real-Time Rendering course, Siggraph 2013 12
DEFERRED MSAA: MSAA FRIENDLINESS
Look out for these:
MSAA visibly not working, or noticeable bright/dark silhouettes.
[Images: two incorrect results]
13. Advances in Real-Time Rendering course, Siggraph 2013 13
DEFERRED MSAA: MSAA FRIENDLINESS
Look out for these:
MSAA visibly not working, or noticeable bright/dark silhouettes.
[Images: two fixed results]
14. Advances in Real-Time Rendering course, Siggraph 2013 14
DEFERRED MSAA: RECAP
Accessing and/or rendering to Multisampled RTs?
Then you need to care about accessing and outputting correct sub-sample
In general, always strive to minimize bandwidth
Avoid vanilla deferred lighting
Prefer fully deferred, hybrids, or just skip deferred altogether.
If deferred, prefer thin g-buffers
Each additional g-buffer target incurs export-rate overhead [Thibieroz11]
NV/AMD (GCN): Export Cost = Cost(RT0)+Cost(RT1)..., AMD (older hw): Export Cost = (Num RTs) * (Slowest RT)
Fat formats have half-rate sampling cost for bilinear filtering modes on GCN [Thibieroz13]
For lighting/some hdr post processes: 32 bit R11G11B10F fmt suffices for most cases
15. Advances in Real-Time Rendering course, Siggraph 2013 15
ANTIALIASING + 4K RESOLUTIONS: WILL WE NEED MSAA AT ALL?
Likely can start getting creative here
[Comparison: source 4K vs. highly compressed 4K]
16. Advances in Real-Time Rendering course, Siggraph 2013 16
ANTIALIASING: THE QUEST FOR BETTER (AND FAST) AA
2011: the boom year of alternative AA modes (and naming combos)
FXAA, MLAA, SMAA, SRAA, DEAA, GBAA, DLAA, ETC AA
“Filtering Approaches for Real-Time Anti-Aliasing” [Jimenez et al. 11]
Shading Anti-aliasing
“Mip mapping normal maps” [Toksvig04]
“Spectacular Specular: LEAN and CLEAN Specular Highlights” [Baker11]
“Rock-Solid Shading: Image Stability Without Sacrificing Detail” [Hill12]
17. Advances in Real-Time Rendering course, Siggraph 2013 17
TEMPORAL SSAA: SMAA 2TX/4X REVIEW [JIMENEZ11][SOUSA11]
Morphological AA + MSAA + Temporal SSAA combo
Balanced cost/quality tradeoff, techniques complement each other.
Temporal component uses 2 sub-pixel buffers.
Each frame adds a sub-pixel jitter for 2x SSAA.
Reproject previous frame and blend between current and previous frames, via velocity length weighting.
Preserves image sharpness + reasonable temporal stability
[Comparison: naive blending vs. reprojection with |V| weighting]
18. Advances in Real-Time Rendering course, Siggraph 2013 18
TEMPORAL AA: COMMON ROBUSTNESS FLAWS
Relying on opaque geometry information
Can’t handle signal (color) changes or transparency.
For correct result, all opaque geometry must output velocity
Pathological cases
Alpha blended surfaces (e.g. particles), lighting/shadow/reflections/uv animation/etc
Any scatter-like post-processes before the AA resolve
Can result in distracting errors
E.g. “ghosting” on transparency, lighting, shadows and such
Silhouettes might appear from scatter-like post-processes (e.g. bloom)
Multi-GPU
Simplest solution: force resource sync
NVIDIA exposes a driver hint to force-sync a resource via NVAPI; this is the solution used by NVIDIA's TXAA
Note to hw vendors: it would be great if all vendors exposed this (even better if multi-GPU API functionality were generalized)
[Image: incorrect blending]
19. Advances in Real-Time Rendering course, Siggraph 2013 19
SMAA 1TX: A MORE ROBUST TEMPORAL AA
Concept: Only track signal changes, don’t rely on geometry information
For higher temporal stability: accumulate multiple frames in an accumulation buffer, similar to TXAA [Lottes12]
Re-project accumulation buffer
Weighting: map accumulation buffer colors into the range of the current frame's neighborhood color extents [Malan2012]; different weights for high/low frequency regions (for sharpness preservation)
[Diagram: center sample M with TL/TR/BL/BR neighborhood in the current frame (t0), and the reprojected sample M in the accumulation buffer (tN)]
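A minimal sketch of the resolve described above (buffer names and the fixed blend weight are assumptions; a shipped version would vary the weight between high- and low-frequency regions):

Texture2D<float3> CurrFrame;
Texture2D<float3> AccumBuffer;
Texture2D<float2> VelocityBuffer;
SamplerState LinearClamp;

float3 Smaa1txResolvePS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    int2 p = int2(pos.xy);
    float3 M = CurrFrame.Load(int3(p, 0));
    // Neighborhood color extents from the 4 diagonal neighbors + center
    float3 TL = CurrFrame.Load(int3(p + int2(-1, -1), 0));
    float3 TR = CurrFrame.Load(int3(p + int2( 1, -1), 0));
    float3 BL = CurrFrame.Load(int3(p + int2(-1,  1), 0));
    float3 BR = CurrFrame.Load(int3(p + int2( 1,  1), 0));
    float3 cMin = min(M, min(min(TL, TR), min(BL, BR)));
    float3 cMax = max(M, max(max(TL, TR), max(BL, BR)));
    // Re-project the accumulation buffer with the per-pixel velocity
    float2 v = VelocityBuffer.Load(int3(p, 0));
    float3 hist = AccumBuffer.SampleLevel(LinearClamp, uv - v, 0);
    // Map history into the current neighborhood's color extents [Malan2012]
    hist = clamp(hist, cMin, cMax);
    return lerp(M, hist, 0.8); // fixed accumulation weight (assumption)
}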
20. Advances in Real-Time Rendering course, Siggraph 2013 20
SMAA 1TX: A MORE ROBUST TEMPORAL AA (2)
Concept: Only track signal changes, don’t rely on geometry information
For higher temporal stability: accumulate multiple frames in an accumulation buffer, similar to TXAA [Lottes12]
Re-project accumulation buffer
Weighting: map accumulation buffer colors into the range of the current frame's neighborhood color extents [Malan2012]; different weights for high/low frequency regions (for sharpness preservation)
[Comparison: regular TAA vs. SMAA 1TX]
23. Advances in Real-Time Rendering course, Siggraph 2013 23
DEPTH OF FIELD: PLAUSIBLE DOF: PARAMETERIZATION
Artist-friendly parameters are one reason why game DOF tends to look wrong
Typical controls such as “focus range” + “blur amount” and others have little physical meaning
CoC depends mainly on f-stops, focal length and focal distance. The last 2 directly affect FOV.
If you want more bokeh, you need to max your focal length + widen the aperture. This also means getting closer to or further from the subject for proper framing.
Not the typical way a game artist/programmer thinks about DOF.
[Comparison: wider FOV (less bokeh) vs. shallow FOV (more bokeh)]
24. Advances in Real-Time Rendering course, Siggraph 2013 24
DEPTH OF FIELD: FOCAL LENGTH
[Comparison: 50 mm vs. 200 mm focal length]
25. Advances in Real-Time Rendering course, Siggraph 2013 25
DEPTH OF FIELD: F-STOPS
[Comparison: 2 f-stops vs. 8 f-stops vs. 22 f-stops]
27. Advances in Real-Time Rendering course, Siggraph 2013 27
DEPTH OF FIELD: FOCAL DISTANCE
[Image: focal distance 0.5 m]
28. Advances in Real-Time Rendering course, Siggraph 2013 28
DEPTH OF FIELD: FOCAL DISTANCE
[Image: focal distance 0.75 m]
29. Advances in Real-Time Rendering course, Siggraph 2013 29
DEPTH OF FIELD: FOCAL DISTANCE
[Image: focal distance 1.0 m]
30. Advances in Real-Time Rendering course, Siggraph 2013 30
DEPTH OF FIELD: PLAUSIBLE DOF: BOKEH
The out-of-focus region is commonly referred to in photography as “bokeh” (from the Japanese word for blur)
Bokeh shape has a direct relation to camera aperture size (aka f-stops) and diaphragm blade count
Bigger aperture = more “circular” bokeh, smaller aperture = more polygonal bokeh
Polygonal bokeh look depends on diaphragm blade count
Blade count varies with lens characteristics
Bigger aperture = more light enters, smaller aperture = less light
On night shots, you will often notice more circular bokeh and more motion blur
Bokeh kernel is flat
Almost the same amount of light enters the camera iris from all directions
Edges might be in shadow; this is commonly known as vignetting
Poor lens manufacturing may introduce a vast array of optical aberrations [Wiki01]
This is the main reason why Gaussian blur, diffusion DOF, and derivative techniques look wrong/visually unpleasant
31. Advances in Real-Time Rendering course, Siggraph 2013 31
DEPTH OF FIELD: STATE OF THE ART OVERVIEW
Scatter based techniques [Cyril05][Sawada07][3DMark11][Mittring11][Sousa11]
Render 1 quad or tri per-pixel, scale based on CoC
Simple implementation and nice results. Downside: performance, particularly on shallow DOF
Variable/inconsistent fillrate hit; depending on near/far layer resolution and aperture size it might reach >5 ms
Quad generation phase has fixed cost attached.
32. Advances in Real-Time Rendering course, Siggraph 2013 32
DEPTH OF FIELD: STATE OF THE ART OVERVIEW (2)
Gather based: separable (inflexible kernel) vs. kernel flexibility
[McIntosh12]
[Gotanda09]
[White11] [Andreev 12]
[Kawase09]
33. Advances in Real-Time Rendering course, Siggraph 2013 33
DEPTH OF FIELD: A PLAUSIBLE AND EFFICIENT DOF RECONSTRUCTION FILTER
Separable flexible filter: low bandwidth requirement + different bokeh shape possible
1st pass: N^2 taps (e.g. 7x7).
2nd pass: N^2 taps (e.g. 3x3) for flood-filling the shape
R11G11B10F: downscaled HDR scene; R8G8: CoC
Done at half resolution
Far/Near fields processed in same pass
Limit offset range to minimize undersampling
Higher specs hw can have higher tap count
Diaphragm and optical aberrations simulation
Physically based CoC
34. Advances in Real-Time Rendering course, Siggraph 2013 34
DEPTH OF FIELD: LENS REVIEW
Pinhole “Lens”
A camera without a lens
Light has to pass through a single small aperture before hitting the image plane
Typical real-time rendering
Thin lens
Camera lenses have finite dimension
Light refracts through lens until hitting image plane.
F = Focal length
P = Plane in focus
I = Image distance
[Diagram: thin lens with focal length F, plane in focus P, image distance I; circle of confusion on the image plane]
35. Advances in Real-Time Rendering course, Siggraph 2013 35
DEPTH OF FIELD: LENS REVIEW (2)
The thin lens equation gives relation between:
F = Focal length (where light starts getting in focus)
P = Plane in focus (camera focal distance)
I = Image distance (where image is projected in focus)
Circle of Confusion [Potmesil81]
f = f-stops (a.k.a. the f-number or focal ratio)
D = Object distance
A = Aperture diameter
Simplifies to:
Note: f and F are known variables from camera setup
Folds down into a single mad in shader
Camera FOV:
Typical film (or sensor) formats: 35mm/70mm
Can alternatively derive focal length from FOV
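For reference, these are the textbook forms of the formulas the slide refers to, consistent with the definitions above and [Potmesil81]:

\frac{1}{F} = \frac{1}{P} + \frac{1}{I} \qquad \text{(thin lens equation)}

\mathrm{CoC}(D) = A\,\frac{F\,|D - P|}{D\,(P - F)}
= \frac{F^2}{f\,(P - F)}\left|1 - \frac{P}{D}\right|, \qquad A = \frac{F}{f}

With F, P and f fixed per camera, CoC is linear in 1/D, hence the single mad per pixel. For the FOV:

\mathrm{FOV} = 2\arctan\!\left(\frac{w}{2F}\right), \qquad
F = \frac{w}{2\tan(\mathrm{FOV}/2)}

where w is the film/sensor width (e.g. 35mm).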
36. Advances in Real-Time Rendering course, Siggraph 2013 36
DEPTH OF FIELD: SAMPLING
Concentric Mapping [Shirley97] used for uniform sample distribution
Maps unit square to unit circle
Square mapped to (a,b) ∈ [-1,1]² and divided into 4 regions by lines a=b, a=-b
First region (a > |b|) given by: r = a, φ = (π/4)(b/a)
Diaphragm simulation by morphing samples to n-gons
Via a modified equation for the regular polygon.
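A compact sketch of the mapping plus the n-gon morph (function names are made up; the polygon radius is the standard polar equation of a regular n-gon, which is my assumption for the "modified equation" mentioned above):

static const float PI = 3.14159265;

float2 ConcentricMap(float2 ab) // ab in [-1,1]^2
{
    if (ab.x == 0.0 && ab.y == 0.0)
        return float2(0.0, 0.0);
    float r, phi;
    if (abs(ab.x) > abs(ab.y)) { r = ab.x; phi = (PI / 4.0) * (ab.y / ab.x); }
    else { r = ab.y; phi = (PI / 2.0) - (PI / 4.0) * (ab.x / ab.y); }
    return r * float2(cos(phi), sin(phi));
}

float NGonRadius(float phi, float n) // n = diaphragm blade count
{
    // Polar radius of a regular n-gon inscribed in the unit circle
    float theta = phi - (2.0 * PI / n) * floor(phi * n / (2.0 * PI));
    return cos(PI / n) / cos(theta - PI / n);
}

// Usage: morph a disk sample toward a polygonal bokeh shape
// float2 d = ConcentricMap(ab);
// d *= NGonRadius(atan2(d.y, d.x), 5.0);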
37. Advances in Real-Time Rendering course, Siggraph 2013 37
DEPTH OF FIELD: SAMPLING, 2ND ITERATION
To floodfill final shape, composite via boolean union, similarly to [McIntosh12]
[Diagram: boolean union of offset kernels flood-fills the final bokeh shape]
39. Advances in Real-Time Rendering course, Siggraph 2013 39
DEPTH OF FIELD: REFERENCE VS SEPARABLE FILTER
[Comparison: reference, 144 taps (1.31 ms) vs. separable filter, 58 taps (0.52 ms)]
40. Advances in Real-Time Rendering course, Siggraph 2013 40
DEPTH OF FIELD: DIAPHRAGM SIMULATION IN ACTION
[Comparison: 2 f-stops vs. 4 f-stops]
41. Advances in Real-Time Rendering course, Siggraph 2013 41
DEPTH OF FIELD: TILE MIN/MAX COC
Downscale CoC target k times (k = tile count)
Take min fragment for far field, max fragment for near field
R8G8 storage
Used to process near/far fields in same pass
Dynamic branching using Tile Min/Max CoC for both fields
Balances cost between far/near
Also used for scatter-as-gather approximation for near field
Can fold cost with other post-processes
Initial downscale cost folded with HDR scene downscale for bloom; also pack near/far fields HDR input into R11G11B10F - all in 1 pass
[Images: tile min-CoC (far field), tile max-CoC (near field)]
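A hedged sketch of one downscale step (names and thread-group size assumed); run it k times, after which the result drives the branching and the near-field scatter-as-gather:

Texture2D<float2> CocIn;    // R = far-field CoC, G = near-field CoC
RWTexture2D<float2> CocOut;

[numthreads(8, 8, 1)]
void TileMinMaxCocCS(uint3 id : SV_DispatchThreadID)
{
    int2 base = int2(id.xy) * 2;
    float2 c0 = CocIn.Load(int3(base + int2(0, 0), 0));
    float2 c1 = CocIn.Load(int3(base + int2(1, 0), 0));
    float2 c2 = CocIn.Load(int3(base + int2(0, 1), 0));
    float2 c3 = CocIn.Load(int3(base + int2(1, 1), 0));
    // Min CoC for the far field, max CoC for the near field
    CocOut[id.xy] = float2(min(min(c0.x, c1.x), min(c2.x, c3.x)),
                           max(max(c0.y, c1.y), max(c2.y, c3.y)));
}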
42. Advances in Real-Time Rendering course, Siggraph 2013 42
DEPTH OF FIELD: FAR + NEAR FIELD PROCESSING
Both fields use half resolution input
Careful: downscale is a source of error due to bilinear filtering
Use custom bilinear (bilateral) filter for downscaling
Far Field
Scale kernel size and weight samples with far CoC [Scheuermann05]
Pre-multiply layer with far CoC [Gotanda09]
Prevents bleeding artifacts from bilinear/separable filter
[Comparison: no weighting vs. CoC weighting vs. CoC weighting + CoC pre-multiply]
43. Advances in Real-Time Rendering course, Siggraph 2013 43
DEPTH OF FIELD: FAR + NEAR FIELD PROCESSING
Both fields use half resolution input
Careful: downscale is a source of error due to bilinear filtering
Use custom bilinear (bilateral) filter for downscaling
Far Field
Scale kernel size and weight samples with far CoC [Scheuermann05]
Pre-multiply layer with far CoC [Gotanda09]
Prevents bleeding artifacts from bilinear/separable filter
Near Field
Scatter-as-gather approximation
Scale kernel size + weight fragments with Tile Max CoC against near CoC
Pre-multiply with near CoC
Only want to blur near-field fragments (cheap partial occlusion approximation)
[Images: far field, near field]
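A hedged sketch of the far-field gather with CoC kernel scaling, CoC tap weighting [Scheuermann05] and CoC pre-multiply [Gotanda09] (tap offsets, names and the normalization are assumptions):

#define TAP_COUNT 7
static const float kMaxCocRadiusUV = 0.02; // assumed max kernel radius in UV space
static const float2 kTaps[TAP_COUNT] = {
    float2(0, 0), float2(1, 0), float2(-1, 0), float2(0, 1),
    float2(0, -1), float2(0.7, 0.7), float2(-0.7, -0.7) };

Texture2D<float3> SceneHalfRes;
Texture2D<float> CocFar;
SamplerState LinearClamp;

float4 GatherFar(float2 uv)
{
    float cocCenter = CocFar.SampleLevel(LinearClamp, uv, 0);
    float4 acc = 0;
    [unroll] for (int i = 0; i < TAP_COUNT; ++i)
    {
        float2 tapUv = uv + kTaps[i] * cocCenter * kMaxCocRadiusUV; // kernel scaled by CoC
        float coc = CocFar.SampleLevel(LinearClamp, tapUv, 0);
        float3 col = SceneHalfRes.SampleLevel(LinearClamp, tapUv, 0);
        // Pre-multiplying by CoC keeps in-focus pixels from bleeding into the blur
        acc += float4(col * coc, coc);
    }
    return float4(acc.rgb / max(acc.a, 1e-4), acc.a / TAP_COUNT);
}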
44. Advances in Real-Time Rendering course, Siggraph 2013 44
DEPTH OF FIELD: FINAL COMPOSITE
Far field: upscale via bilateral filter
Take 4 taps from half res CoC, compare against full res CoC
Weighted using bicubic filtering for quality [Sigg05]
Far field CoC used for blending
Near field: upscale carelessly
Half resolution near field CoC used for blending
Can bleed as much as possible
Also using bicubic filtering
Careful with blending
Linear blending doesn’t look good (signal frequency soup)
Can be seen in many games, including the whole Crysis series (puts on hat of shame)
Simple to address: use a non-linear blend factor instead.
[Comparison: linear blend vs. non-linear blend (better)]
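The exact curve isn't given in the talk; as an assumption, any monotonic non-linear shaping of the CoC-driven factor (squaring, smoothstep) already reads much better than a raw linear mix:

float3 CompositeFar(float3 sharp, float3 farBlur, float cocFar)
{
    float blend = saturate(cocFar);
    blend *= blend; // non-linear shaping (assumption); avoids the frequency soup of a linear mix
    return lerp(sharp, farBlur, blend);
}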
46. Advances in Real-Time Rendering course, Siggraph 2013 46
MOTION BLUR: SHUTTER SPEED AND F-STOPS REVIEW
Amount of motion blur is relative to camera shutter speed and f-stops usage
The longer the exposure (slower shutter), the more light received (and the greater the motion blur), and vice-versa
The lower the f-stop, the faster the exposure can be (and the less the motion blur), and vice versa
[Comparison: 2 f-stops, 1/20 s shutter vs. 4 f-stops, 1/6 s shutter]
47. Advances in Real-Time Rendering course, Siggraph 2013 47
MOTION BLUR: STATE OF THE ART OVERVIEW
Scatter via geometry expansion [Green03][Sawada07]
Requires an additional geometry pass + geometry shader usage
48. Advances in Real-Time Rendering course, Siggraph 2013 48
MOTION BLUR: STATE OF THE ART OVERVIEW (2)
Scatter as gather [Sousa08][Gotanda09][Sousa11][McGuire12]
E.g. velocity dilation, velocity blur, tile max velocity; single vs. multiple pass composite; depth/velocity/object ID masking; single pass DOF+MB
49. Advances in Real-Time Rendering course, Siggraph 2013 49
MOTION BLUR: RECONSTRUCTION FILTER FOR PLAUSIBLE MB [MCGUIRE12]
Tile Max Velocity and Tile Neighbor Max Velocity
Downscale Velocity buffer by k times (k is tile count)
Take max length velocity at each step
Motion Blur Pass
Tile Neighbor Max for early out
Tile Max Velocity as center velocity tap
At each iteration step, weight against full-resolution ||V|| and depth
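A hedged sketch of one TileMax downscale step (names assumed; repeat k times, then a NeighborMax pass takes the max over the 3x3 surrounding tiles, as in [McGuire12]):

Texture2D<float2> VelocityIn;
RWTexture2D<float2> VelocityOut;

float2 MaxLen(float2 a, float2 b) { return dot(a, a) > dot(b, b) ? a : b; }

[numthreads(8, 8, 1)]
void TileMaxVelocityCS(uint3 id : SV_DispatchThreadID)
{
    int2 base = int2(id.xy) * 2;
    float2 v = VelocityIn.Load(int3(base, 0));
    v = MaxLen(v, VelocityIn.Load(int3(base + int2(1, 0), 0)));
    v = MaxLen(v, VelocityIn.Load(int3(base + int2(0, 1), 0)));
    v = MaxLen(v, VelocityIn.Load(int3(base + int2(1, 1), 0)));
    VelocityOut[id.xy] = v; // longest velocity in the 2x2 footprint
}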
50. Advances in Real-Time Rendering course, Siggraph 2013 50
MOTION BLUR: AN IMPROVED RECONSTRUCTION FILTER
Performant quality
Simplify and vectorize the inner loop weight computation (ends up as a couple of mads)
Sampling fat buffers is half rate on GCN hardware with bilinear (point filtering is full rate, but doesn’t look good due to aliasing)
Inputs: R11G11B10F for scene; bake ||V|| and 8-bit depth into an R8G8 target
Make it separable, 2 passes [Sousa08]
[Comparison: 1st iteration, 6 taps (0.236 ms) vs. 2nd iteration, 6 taps (+0.236 ms; 36 effective taps accumulated)]
52. Advances in Real-Time Rendering course, Siggraph 2013 52
MOTION BLUR: AN IMPROVED RECONSTRUCTION FILTER (3)
Output object velocity in G-Buffer (only when required)
Avoids separate geometry passes.
Rigid geometry: object distance < distance threshold
Deformable geometry: if amount of movement > movement threshold
Moving geometry rendered last
R8G8 format
Composite with camera velocity
Velocity encoded in gamma 2.0 space
Precision still insufficient, but not very noticeable in practice
[Diagram: object velocity encode/decode, composited with camera velocity]
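The slide only states "gamma 2.0"; a plausible encoding into an 8-bit R8G8 target (the normalization range V_MAX is an assumption) would be:

static const float V_MAX = 0.05; // assumed max velocity magnitude (NDC per frame)

float2 EncodeVelocity(float2 v)
{
    v = clamp(v / V_MAX, -1.0, 1.0);
    return sqrt(abs(v)) * sign(v) * 0.5 + 0.5; // sqrt = gamma 2.0, biased into [0,1]
}

float2 DecodeVelocity(float2 enc)
{
    float2 v = enc * 2.0 - 1.0;
    return v * abs(v) * V_MAX; // square back, preserving sign
}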
53. Advances in Real-Time Rendering course, Siggraph 2013 53
MOTION BLUR: MB OR DOF FIRST?
In real world MB/DOF occur simultaneously
A dream implementation: big N^2 kernel + batched DOF/MB
Or sprite based with MB quad stretching
Full resolution! 1 Billion taps! FP16! Multiple layers!
But... performance still matters (consoles):
DOF before MB introduces less error when MB happens in focus
This is because MB is a scatter-as-gather op relying on geometry data.
Any similar op afterwards will introduce error. And vice-versa.
Error from MB after DOF is less noticeable.
Order swap makes DOF harder to fold with other posts
Additional overhead
[Comparison: MB before DOF vs. MB after DOF]
54. Advances in Real-Time Rendering course, Siggraph 2013 54
FINAL REMARKS
Practical MSAA details
Do’s and Don’ts
SMAA 1TX: A More Robust Temporal AA
For just 4 extra texture ops and a couple of ALU ops
A Plausible and Performant DOF Reconstruction Filter
Separable flexible filter, any bokeh kernel shape doable
1st pass: 0.426ms, 2nd pass: 0.094ms. Sum: 0.52ms for reconstruction filter *
An Improved Reconstruction Filter for Plausible Motion Blur
Separable, 1st pass: 0.236 ms, 2nd pass: 0.236ms. Sum: 0.472ms for reconstruction filter *
* 1080p + AMD 7970
55. Advances in Real-Time Rendering course, Siggraph 2013 55
SPECIAL THANKS
Natalya Tatarchuk
Michael Kopietz, Nicolas Schulz, Christopher Evans, Carsten Wenzel, Christopher Raine,
Nick Kasyan, Magnus Larbrant, Pierre Donzallaz
58. Advances in Real-Time Rendering course, Siggraph 2013 58
REFERENCES
Potmesil M., Chakravarty I. “Synthetic Image Generation with a Lens and Aperture Camera Model”, 1981
Shirley P., Chiu K., “A Low Distortion Map Between Disk and Square”, 1997
Green S. “Stupid OpenGL Tricks”, 2003
Toksvig M., “Mip Mapping Normal Maps”, 2004
Scheuermann T., Tatarchuk N. “Improved Depth of Field Rendering”, Shader X3, 2005
Sigg C., Hadwiger M., “Fast Third-Order Texture Filtering”, 2005
Cyril, P et al. “Photographic Depth of Field Rendering”, 2005
Gritz L., Eon E. “The Importance of Being Linear”, 2007
Sawada Y., Talk at Game Developers Conference, http://www.beyond3d.com/content/news/499 , 2007
Sousa T. “Crysis Next Gen Effects”, 2008
Gotanda Y. ”Star Ocean 4: Flexible Shader Management and Post Processing”, 2009
Yang L. et al, “Amortized SuperSampling”, 2009
Kawase M., “Anti-Downsized Buffer Artifacts”, 2011
Sousa T., “CryENGINE 3 Rendering Techniques”, 2011
Binks D., “Dynamic Resolution Rendering”, 2011
Sousa T., Schulz N. , Kasyan N., “Secrets of CryENGINE 3 Graphics Technology”, 2011
Sousa T., “Anti-Aliasing Methods in CryENGINE 3”, 2011
Jimenez J. et al., “Filtering Approaches for Real-Time Anti-Aliasing”, 2011
Thibieroz N., “Deferred Shading Optimizations”, 2011
59. Advances in Real-Time Rendering course, Siggraph 2013 59
REFERENCES
Cloward B., Otstott A., “Cinematic Character Lighting in Star Wars: The Old Republic”, 2011
Mittring M., Dudash B., “The Technology Behind the DirectX11 Unreal Engine “Samaritan” Demo“, 2011
Futuremark, 3DMark11 “White Paper”, 2011
Baker D., “Spectacular Specular: LEAN and CLEAN Specular Highlights”, 2011
White J., Barré-Brisebois C., “More Performance! Five Rendering Ideas from Battlefield 3 and Need for Speed: The Run”, 2011
Lottes T., “Nvidia TXAA”, http://www.nvidia.in/object/txaa-anti-aliasing-technology-in.html#gameContent=1 , 2012
McGuire M. et al, “A Reconstruction Filter for Plausible Motion Blur”, 2012
McIntosh L. et al., “Efficiently Simulating Bokeh”, 2012
Malan H., “Real-Time Global Illumination and Reflections in Dust 514”, 2012
Hill S., Baker D., “Rock-Solid Shading: Image Stability Without Sacrificing Detail”, 2012
Sousa T., Wenzel C., Raine C., “Rendering Technologies of Crysis 3”, 2013
Thibieroz N., Gruen H., “DirectX11 Performance Reloaded”, 2013
Andreev D., “Rendering Tricks in Dead Space 3”, 2013
http://en.wikipedia.org/wiki/Lens_%28optics%29
http://en.wikipedia.org/wiki/Pinhole_camera
http://en.wikipedia.org/wiki/F-number
http://en.wikipedia.org/wiki/Angle_of_view