This talk presents the approach Frostbite took to add support for HDR displays. It summarizes Frostbite's previous post-processing pipeline and its issues. Attendees will learn the decisions made to fix these issues, improve the color grading workflow, and support high quality HDR and SDR output. This session details the display mapping used to implement the "grade once, output many" approach to targeting any display, and why an ad-hoc approach was chosen over filmic tone mapping. Frostbite retained the flexibility of 3D LUT-based grading, and the accuracy differences of computing these LUTs in decorrelated color spaces will be shown. The session also covers the main issues found on early adopter games, differences between HDR standards, optimizations to achieve performance parity with the legacy path, and why supporting HDR can also improve the SDR version.
Takeaway
Attendees will learn how and why Frostbite chose to support High Dynamic Range (HDR) displays. They will understand the issues faced and how these were resolved. This talk will be useful for those yet to support HDR, and provide discussion points for those who already do.
Intended Audience
The intended audience is primarily rendering engineers, technical artists and artists; specifically those who focus on grading and lighting and those interested in HDR displays. Ideally attendees will be familiar with color grading and tonemapping.
Taking Killzone Shadow Fall Image Quality Into The Next Generation (Guerrilla)
This talk focuses on the technical side of Killzone Shadow Fall, the platform exclusive launch title for PlayStation 4.
We present the details of several new techniques that were developed in the quest for next generation image quality, and the talk uses key locations from the game as examples. We discuss interesting aspects of the new content pipeline, next-gen lighting engine, usage of indirect lighting and various shadow rendering optimizations. We also describe the details of volumetric lighting, the real-time reflections system, and the new anti-aliasing solution, and include some details about the image-quality driven streaming system. A common, very important, theme of the talk is the temporal coherency and how it was utilized to reduce aliasing, and improve the rendering quality and image stability above the baseline 1080p resolution seen in other games.
Filmic Tonemapping for Real-time Rendering - Siggraph 2010 Color Course (hpduiker)
Filmic Tonemapping for Real-time Rendering, a presentation from the Siggraph 2010 Course on Color, on a technique developed from film that became very applicable to games with the addition of support for HDR lighting and rendering in graphics cards.
Talk by Fabien Christin from DICE at GDC 2016.
Designing a big city that players can explore by day and by night, while improving on the unique visuals of the first Mirror's Edge game, isn't an easy task.
In this talk, the tools and technology used to render Mirror's Edge: Catalyst will be discussed. From the physical sky to the reflection tech, the speakers will show how they tamed the new Frostbite 3 PBR engine to deliver realistic images with stylized visuals.
They will talk about the artistic and technical challenges they faced and how they tried to overcome them, from the simple light settings and Enlighten workflow to character shading and color grading.
Takeaway
Attendees will get an insight into the technical and artistic techniques used to create a dynamic time-of-day system with updating radiosity and reflections.
Intended Audience
This session is targeted to game artists, technical artists and graphics programmers who want to know more about Mirror's Edge: Catalyst rendering technology, lighting tools and shading tricks.
The presentation describes the physically based lighting pipeline of Killzone: Shadow Fall, a PlayStation 4 launch title. The talk covers the studio's transition to a new asset creation pipeline based on physical properties. It also describes the light rendering systems used in the new 3D engine, built from the ground up for the upcoming PlayStation 4 hardware. A novel real-time lighting model simulating physically accurate area lights will be introduced, as well as a hybrid ray-traced / image-based reflection system.
We believe that physically based rendering is a viable way to optimize asset creation pipeline efficiency and quality. It also enables the rendering quality to reach a new level that is highly flexible depending on art direction requirements.
Physically Based Lighting in Unreal Engine 4 (Lukas Lang)
Talk held at Unreal Meetup Munich on 15th May 2019.
I talked about some of the theoretical background of physically based lighting and demonstrated a workflow, including the value tables needed to use that workflow easily.
Rendering Technologies from Crysis 3 (GDC 2013) (Tiago Sousa)
This talk covers changes in CryENGINE 3 technology during 2012, with DX11 related topics such as moving to deferred rendering while maintaining backward compatibility on a multiplatform engine, massive vegetation rendering, MSAA support and how to deal with its common visual artifacts, among other topics.
Graphics Gems from CryENGINE 3 (Siggraph 2013) (Tiago Sousa)
This lecture covers rendering topics related to Crytek's latest engine iteration, the technology which powers titles such as Ryse, Warface, and Crysis 3. Among the covered topics, Sousa presented SMAA 1TX, an update featuring a robust and simple temporal antialiasing component; performant and physically plausible camera-related post-processing techniques, such as motion blur and depth of field, were also covered.
This talk is about our experiences gained during making of the Killzone Shadow Fall announcement demo.
We’ve gathered all the hard data about our assets, memory, CPU and GPU usage and a whole bunch of tricks.
The goal of the talk is to help you form a clear picture of what's already possible to achieve on PS4.
A technical deep dive into the DX11 rendering in Battlefield 3, the first title to use the new Frostbite 2 Engine. Topics covered include DX11 optimization techniques, efficient deferred shading, high-quality rendering and resource streaming for creating large and highly-detailed dynamic environments on modern PCs.
Course presentation at SIGGRAPH 2014 by Charles de Rousiers and Sébastien Lagarde of Electronic Arts about transitioning the Frostbite game engine to physically based rendering.
Make sure to check out the 118 page course notes on: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/
During the last few months, we have revisited the concept of image quality in Frostbite. The core of our approach was to be as close as possible to a cinematic look. We used the concept of a reference to evaluate the accuracy of produced images. Physically based rendering (PBR) was the natural way to achieve this. This talk covers all the different steps needed to switch a production engine to PBR, including the small details often bypassed in the literature.
The state of the art of real-time PBR techniques allowed us to achieve good overall results but not without production issues. We present some techniques for improving convolution time for image based reflection, proper ambient occlusion handling, and coherent lighting units which are mandatory for level editing.
Moreover, we have managed to reduce the quality gap, highlighted by our systematic reference comparison, in particular related to rough material handling, glossy screen space reflection, and area lighting.
The technical part of PBR is crucial for achieving good results, but represents only the tip of the iceberg. Frostbite has become the de facto high-end game engine within Electronic Arts and is now used by a large number of game teams. Moving all these game teams from "old fashioned" lighting to PBR has required a lot of education, which has been done in parallel with the technical development. We have provided editing and validation tools to help the transition of art production. In addition, we have built a flexible material parametrisation framework to adapt to the various authoring tools and game teams' requirements.
Talk by Graham Wihlidal (Frostbite Labs) at GDC 2017.
Checkerboard rendering is a relatively new technique, popularized recently by the introduction of the PlayStation 4 Pro. Many modern game engines are adding support for it right now, and in this talk, Graham will present an in-depth look at the new implementation in Frostbite, which is used in shipping titles like 'Battlefield 1' and 'Mass Effect Andromeda'. Despite being conceptually simple, checkerboard rendering requires a deep integration into the post-processing chain, in particular temporal anti-aliasing, dynamic resolution scaling, and poses various challenges to existing effects. This presentation will cover the basics of checkerboard rendering, explain the impact on a game engine that powers a wide range of titles, and provide a detailed look at how the current implementation in Frostbite works, including topics like object id, alpha unrolling, gradient adjust, and a highly efficient depth resolve.
Killzone Shadow Fall: Creating Art Tools For A New Generation Of Games (Guerrilla)
This talk describes the tool improvements Guerrilla Games implemented to make Killzone Shadow Fall shine on the PlayStation 4. It highlights additions to the Maya pipeline, such as Viewport 2.0, Maya's coupling with in-game updates and in-engine deferred renderer features including real-time shadow-casting, volumetric lighting, hardware instancing, lens flares and color grading.
We present the technology and ideas behind the unique lighting in Mirror's Edge from DICE, covering how DICE adopted global illumination into their lighting process and Illuminate Labs' current toolbox of state-of-the-art lighting technology.
Progressive Lightmapper: An Introduction to Lightmapping in Unity (Unity Technologies)
In 2018.1 we removed the preview label from the Progressive Lightmapper – we've made memory improvements and optimizations, and have had customers battle-test it. We are now also working on a GPU-accelerated version of the lightmapper. In this session, Tobias and Kuba will provide an intro to the basics of lightmapping, address some of the most common issues that users struggle with and how to solve them, and provide an update on the future roadmap for lightmapping in Unity.
Tobias Alexander Franke & Kuba Cupisz (Unity Technologies)
Checkerboard Rendering in Dark Souls: Remastered (QLOC)
This is a talk on checkerboard rendering that Markus & Andreas held at Digital Dragons 2019.
In it they quickly go through the history of Checkerboard Rendering before taking a deep dive into how it works and how it is implemented in Dark Souls: Remastered. Lastly, they present the quality and performance improvements they got from using it and their conclusion.
PS: The PDF file includes useful in-depth notes from both authors.
The most important part of a modern PostFX pipeline is picking the right color model to support. With the right model, the whole PostFX pipeline can use 32-bit render targets while still gaining increased color and luminance representation.
Filmic Tone Mapping, a presentation at Electronic Arts on a technique from film that became very applicable to games with the addition of support for HDR lighting and rendering in graphics cards.
Color me intrigued: A jaunt through color technology in video (Vittorio Giovara)
Here are my slides from Demuxed 2017.
This talk aims to shed light on colorspaces - what they are, how and why they work, and why we should care about handling edge cases properly. Starting with historical design choices, venturing through current standards such as BT.709, and arriving at modern times with High Dynamic Range, the focus will be on practical applications on the web and in broadcast.
Unite Berlin 2018 - Book of the Dead: Optimizing Performance for High End Consoles (Unity Technologies)
In this session, the Unity Demo team provides their best tips and tricks for optimizing detailed, complex environment scenes for modern console performance.
Speakers:
Rob Thompson (Unity Technologies)
Upcoming rendering technology including scriptable render pipelines, advanced lighting options and more.
Presenter: Arisa Scott (Graphics Product Manager, Unity Technologies)
GDC 2019 - SEED - Towards Deep Generative Models in Game Development (Electronic Arts / DICE)
Deep learning is becoming ubiquitous in Machine Learning (ML) research, and it's also finding its place in industry-related applications. Specifically, deep generative models have proven incredibly useful at generating and remixing realistic content from scratch, making themselves a very appealing technology in the field of AI-enhanced content authoring. As part of this year's Machine Learning Tutorial at the Game Developers Conference 2019 (GDC), Jorge Del Val from SEED will cover in an accessible manner the fundamentals of deep generative modeling, including some common algorithms and architectures. He will also discuss applications to game development and explore some recent advances in the field.
Attendees will gain a basic understanding of the fundamentals of generative models and how to implement them, and will grasp potential applications in the field of game development to inspire their work and companies. This talk does not require a mathematical or machine learning background, although previous knowledge of either is beneficial.
Henrik Halén (Lead Rendering Programmer) at Electronic Arts presented "Style and Gameplay in the Mirror's Edge" at SIGGRAPH 2010's Stylized Rendering in Games course. https://www.cs.williams.edu/~morgan/SRG10/
Syysgraph 2018 - Modern Graphics Abstractions & Real-Time Ray Tracing (Electronic Arts / DICE)
Graham Wihlidal and Colin Barré-Brisebois of SEED attended SyysGraph 2018 in Helsinki and presented to the group. The first section described aspects of Halcyon's rendering architecture, including information on explicit heterogeneous and virtual multi-GPU, render graph, and the remote render proxy backend. The second section discussed real-time ray tracing approaches and current open problems. The following day, this presentation was also given as a lecture at Aalto University.
Graham Wihlidal from SEED attended the Munich Khronos Meetup and presented some aspects of Halcyon's rendering architecture, as well as details of the Vulkan implementation. Graham presented components like high-level render command translation, render graph, and shader compilation.
CEDEC 2018 - Towards Effortless Photorealism Through Real-Time Raytracing (Electronic Arts / DICE)
Real-time raytracing holds the promise of simplifying rendering pipelines, eliminating artist-intensive workflows, and ultimately delivering photorealistic images. This talk by Tomasz Stachowiak provides a glimpse of the future through the lens of SEED's PICA PICA demo: a game made for artificial intelligence agents, with procedural level assembly, and no precomputation. We dive into technical details of several advanced rendering algorithms, and discuss how Microsoft's DirectX Raytracing technology allows for their intuitive implementation. Several challenges remain -- we will take a look at some of them, discuss how real-time raytracing fits in the spectrum of solutions, and start to plot the course towards robust and artist-friendly image synthesis.
CEDEC 2018 - Functional Symbiosis of Art Direction and Proceduralism (Electronic Arts / DICE)
This talk by SEED's Anastasia Opara covers the approach to procedural layout generation and placement in Project PICA PICA. The project posed a unique challenge, as the levels were created not for humans but for self-learning AI agents. The level system therefore had to be flexible enough to meet the agents' needs, ensure navigability and gameplay elements, and adhere to the art direction.
We used Houdini from the very early stages to the final release: from co-creating art-direction to exporting final levels into our in-house RnD engine Halcyon. From this talk, you will learn how in a team of only 3 artists we created a functional symbiosis of art direction and procedural system in under 2 months as well as what challenges and solutions we had during our ‘procedural journey’.
At SIGGRAPH 2018, Colin Barré-Brisebois presented PICA PICA running on NVIDIA's new Turing architecture, with performance comparisons against Volta. A technique for real-time raytraced transparent shadows, developed by Henrik Halén of SEED, was also presented, as well as an experiment with rough glass.
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing (Electronic Arts / DICE)
In this presentation part of the "Introduction to DirectX Raytracing" course, Colin Barré-Brisebois of SEED discusses some of the challenges the team had to go through when going from raster to real-time raytracing for Project PICA PICA.
For this year's keynote at High Performance Graphics 2018, Colin Barré-Brisebois from SEED discussed the state of the art in real-time game ray tracing. He explored some of the connections between offline and real-time game ray tracing, and presented some of the open problems. Colin exposed a few potential solutions to those problems, and also proposed a call-to-arms on topics where the ray tracing research community and the games industry should unite in order to solve such open problems.
EPC 2018 - SEED - Exploring The Collaboration Between Proceduralism & Deep Learning (Electronic Arts / DICE)
Proceduralism is a powerful language of rules, dependencies and patterns that can generate content indistinguishable from a manually produced one. Yet there are new opportunities that hold a great potential to enhance the existing techniques. In this talk, SEED's Anastasia Opara shares some of the early tests of marrying Proceduralism and Deep Learning and discusses how it can contribute to the current workflows.
You can view a recording of the presentation from 2018's Everything Procedural Conference here:
https://www.youtube.com/watch?v=dpYwLny0P8M
This talk provides additional details around the hybrid real-time rendering pipeline we developed at SEED for Project PICA PICA.
At Digital Dragons 2018, we presented how leveraging Microsoft's DirectX Raytracing enables intuitive implementations of advanced lighting effects, including soft shadows, reflections, refractions, and global illumination. We also dove into the unique challenges posed by each of those domains, discussed the tradeoffs, and evaluated where raytracing fits in the spectrum of solutions.
The human mechanism of representing the surrounding world in the form of a 'language' is an outstanding ability that enables us to store information as compact internal abstractions. Proceduralism is also a form of language, where we view the world through rules, dependencies and patterns. And though rules are often perceived as something rigid, their engineering is a fluid and creative task, where analyzing our own thought framework often fuels the design process.
In this talk, we present results from the real-time raytracing research done at SEED, a cross-disciplinary team working on cutting-edge, future graphics technologies and creative experiences at Electronic Arts. We explain in detail several techniques from “PICA PICA”, a real-time raytracing experiment featuring a mini-game for self-learning AI agents in a procedurally-assembled world. The approaches presented here are intended to inspire developers and provide a glimpse of a future where real-time raytracing powers the creative experiences of tomorrow.
The past few years have seen a sharp increase in the complexity of rendering algorithms used in modern game engines. Large portions of the rendering work are increasingly written in GPU computing languages, and decoupled from the conventional “one-to-one” pipeline stages for which shading languages were designed. Following Tim Foley’s talk from SIGGRAPH 2016’s Open Problems course on shading language directions, we explore example rendering algorithms that we want to express in a composable, reusable and performance-portable manner. We argue that a few key constraints in GPU computing languages inhibit these goals, some of which are rooted in hardware limitations. We conclude with a call to action detailing specific improvements we would like to see in GPU compute languages, as well as the underlying graphics hardware.
This talk was originally given at SIGGRAPH 2017 by Andrew Lauritzen (EA SEED) for the Open Problems in Real-Time Rendering course.
A Certain Slant of Light - Past, Present and Future Challenges of Global Illumination (Electronic Arts / DICE)
Global illumination (GI) has been an ongoing quest in games. The perpetual tug-of-war between visual quality and performance often forces developers to take the latest and greatest from academia and tailor it to push the boundaries of what has been realized in a game product. Many elements need to align for success, including image quality, performance, scalability, interactivity, ease of use, as well as game-specific and production challenges.
First we will paint a picture of the current state of global illumination in games, addressing how the state of the union compares to the latest and greatest research. We will then explore various GI challenges that game teams face from the art, engineering, pipelines and production perspective. The games industry lacks an ideal solution, so the goal here is to raise awareness by being transparent about the real problems in the field. Finally, we will talk about the future. This will be a call to arms, with the objective of uniting game developers and researchers on the same quest to evolve global illumination in games from being mostly static, or sometimes perceptually real-time, to fully real-time.
This presentation was given at SIGGRAPH 2017 by Colin Barré-Brisebois (EA SEED) as part of the Open Problems in Real-Time Rendering course.
Talk by Yuriy O’Donnell at GDC 2017.
This talk describes how Frostbite handles rendering architecture challenges that come with having to support a wide variety of games on a single engine. Yuriy describes their new rendering abstraction design, which is based on a graph of all render passes and resources. This approach allows implementation of rendering features in a decoupled and modular way, while still maintaining efficiency.
A graph of all rendering operations for the entire frame is a useful abstraction. The industry can move away from “immediate mode” DX11 style APIs to a higher level system that allows simpler code and efficient GPU utilization. Attendees will learn how it worked out for Frostbite.
Presentation by Andrew Hamilton and Ken Brown from DICE at GDC 2016.
Photogrammetry has started to gain steam within the Games Industry in recent years. At DICE, this technique was first used on Battlefield and they fully embraced the technology and workflow for Star Wars: Battlefront. This talk will cover their research and development, planning and production, techniques, key takeaways and plans for the future. The speakers will cover photogrammetry as a technology, but more than that, show that it's not a magic bullet but instead a tool like any other that can be used to help achieve your artistic vision and craft.
Takeaway
Come and learn how (and why) photogrammetry was used to create the world of Star Wars. This talk will cover Battlefront's use of the technology from pre-production to launch, as well as some of their philosophies around photogrammetry as a tool. Many visuals will be included!
Intended Audience
A content creator friendly talk intended for pretty much any developer, especially those involved in 3D content creation. It is not a technical talk focused on the code or engineering of photogrammetry. The speakers will quickly cover all basics, so absolutely no prerequisite knowledge required.
High Dynamic Range color grading and display in Frostbite
1. High Dynamic Range
color grading and display
in Frostbite
Alex Fry
Rendering Engineer
Frostbite Team, Electronic Arts
2. Contents
• History, terminology & tonemapping
• Frostbite legacy post process pipeline & issues
• Considering how to support HDR TVs & our new pipeline
• “Display mapping” & issues uncovered
• LUT precision & decorrelation
• Performance
• HDR standards and platform support
• Next steps
3. Brief history of film & terminology
• Film density (of the negative) responds to light at different rates
• 3 main sections
– Mid section (‘linear’)
– Toe (slower response)
– Shoulder (slower response)
• Captures wide range of light values
– Toe/Shoulder are main range reduction
– Expose scene for mid section
• Characteristic, familiar and pleasant look
4. Tonemapping
• Realtime games are fully CG digital images with no film to develop
– But we do have limited range TVs
– We need to perform similar range reduction
• Tonemapping is the process of mapping a wide dynamic range onto
something narrower whilst retaining the most important detail
– E.g. 16bit floating point down to 8bit or “HDR” to “LDR”
– Preserve mid tones
– “Filmic Tonemapping” does this while emulating film characteristics
• “HDR” photos in many cameras are more accurately “tonemapped”
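As a concrete illustration (not Frostbite's own curve), here is a minimal C++ sketch of the widely published filmic curve from John Hable's "Uncharted 2" talk; the rational polynomial builds a toe and a shoulder into a single operator, and the exposure and white-point defaults below are illustrative:

```cpp
#include <algorithm>

// Hable's filmic curve: a rational polynomial with a built-in toe and shoulder.
static float hableCurve(float x) {
    const float A = 0.15f; // shoulder strength
    const float B = 0.50f; // linear strength
    const float C = 0.10f; // linear angle
    const float D = 0.20f; // toe strength
    const float E = 0.02f; // toe numerator
    const float F = 0.30f; // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

// Map one linear HDR channel to a displayable 0-1 value, normalizing so
// that 'linearWhite' maps exactly to 1.0.
float filmicTonemap(float hdr, float exposure = 2.0f, float linearWhite = 11.2f) {
    float x = hableCurve(exposure * hdr);
    return std::clamp(x / hableCurve(linearWhite), 0.0f, 1.0f);
}
```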
7. Color grading
• The act of applying a “look”
• Background in film developing. E.g.
– Choose different film stock
– Apply or skip different processing steps (e.g. bleach bypass)
• Digitally we can do a lot more
– White balancing
– Color replacement
– Orange & Teal …
8. Brief history of TVs & standards
• Older (CRT) TV/monitors had limited dynamic range
– 0.1 – 100 nits (cd/m2)
• Non-linear response to electrical input
– Electro Optical Transfer Function (EOTF)
– Also known as ‘gamma curve’
• sRGB/BT.1886 standard introduced
– Standard and display capability are similar
• We still use this standard today
9. Brief history of TVs & standards
• Modern TVs (0.01-300+ nits … HDR way more)
– Hardware far more capable than the standard
– LCD response different to CRT response
• We still use sRGB/709
– TVs modify our 0.1-100 nit signal to best “show off” the TV
– We have no control over this, other than asking for calibration
– Tends to end up with too much contrast
• And that’s just luminance
– Too much over-saturation as well (e.g. store demo mode)
11. Frostbite legacy pipeline
[Diagram: Post (FP16) → Tonemap → sRGB → Grading → UI → TV]
• Scene is linear floating point up to and including post FX
– Bloom, motion blur, vignette, depth of field etc
• Apply tonemap
• Linear to sRGB conversion
• Apply color grade
• Draw UI on top
• Scanout to TV
12. Frostbite legacy pipeline
[Diagram: Post (FP16) → Tonemap → sRGB → Grading → UI → TV]
• Tonemap chosen by artist from a fixed set of built-in algorithms
– Usually a filmic tonemap
– Tonemaps are 1D curves, applied independently to RGB values
• Linear to sRGB conversion
– Constrains results to 0-1
13. Frostbite legacy pipeline
[Diagram: Post (FP16) → Tonemap → sRGB → Grading LUT → UI → TV]
• Color grading is a 3D LUT generated offline
– Load (tonemapped, sRGB) screenshot in Photoshop
– Load 32x32x32 ‘identity’ color cube layer
– Apply look transformations
– Save (now non-identity) colour cube LUT
– Index this LUT by [s]RGB at runtime
– Use LUT RGB value as color
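A minimal CPU-side sketch of that runtime lookup (in the engine this is a single hardware-filtered 3D texture fetch; the layout and names here are illustrative):

```cpp
#include <algorithm>
#include <vector>

struct RGB { float r, g, b; };

static RGB lerp(RGB a, RGB b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// 32x32x32 color cube, red index varying fastest.
struct GradingLut {
    static constexpr int N = 32;
    std::vector<RGB> data = std::vector<RGB>(N * N * N);
    RGB at(int r, int g, int b) const { return data[(b * N + g) * N + r]; }
};

// Index the cube by the tonemapped sRGB color and trilinearly interpolate.
RGB applyGrade(const GradingLut& lut, RGB c) {
    const int N = GradingLut::N;
    auto tex = [&](float v) { return std::clamp(v, 0.0f, 1.0f) * (N - 1); };
    float fr = tex(c.r), fg = tex(c.g), fb = tex(c.b);
    int r0 = (int)fr, g0 = (int)fg, b0 = (int)fb;
    int r1 = std::min(r0 + 1, N - 1);
    int g1 = std::min(g0 + 1, N - 1);
    int b1 = std::min(b0 + 1, N - 1);
    float tr = fr - r0, tg = fg - g0, tb = fb - b0;
    RGB c00 = lerp(lut.at(r0, g0, b0), lut.at(r1, g0, b0), tr);
    RGB c10 = lerp(lut.at(r0, g1, b0), lut.at(r1, g1, b0), tr);
    RGB c01 = lerp(lut.at(r0, g0, b1), lut.at(r1, g0, b1), tr);
    RGB c11 = lerp(lut.at(r0, g1, b1), lut.at(r1, g1, b1), tr);
    return lerp(lerp(c00, c10, tg), lerp(c01, c11, tg), tb);
}
```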
14. Frostbite legacy pipeline troubles
[Diagram: Post (FP16) → Tonemap → sRGB → Grading LUT → UI → TV]
• 1D curve applied separately to R, G and B causes hue shifts
– Not completely linear in mid section (shots later)
– Highlights shift as channels clip
• Usually filmic so this applies a “look”
– This “look” choice is a different workflow to grading
• It’s not easy for content to author new tonemaps
– Tonemaps are not data driven (only selectable)
15. Frostbite legacy pipeline troubles
[Diagram: Post (FP16) → Tonemap → sRGB → Grading LUT → UI → TV]
• Photoshop is not really a grading package
– Non-optimal workflow
• LUT is often 8bit
– Irrecoverable quantisation
– Mach banding & hue shifts
• No access to values >1!
– We clamped it earlier
16. Frostbite legacy pipeline troubles
[Diagram: Post (FP16) → Tonemap → sRGB → Grading LUT → UI → TV]
• Tonemap + sRGB = LUT distribution function
– Distribution function is not optimal for grading
– Different distribution depending on tonemap
– Cannot share LUTs with other EA teams if they use a different tonemap
17. Frostbite legacy pipeline troubles
[Diagram: Post (FP16) → Tonemap → sRGB → Grading LUT → UI → SDR TV]
• Legacy pipeline is hardcoded to SDR
• Want to support HDR TVs
– Higher bit depth
– Higher dynamic range
– Higher color range (wider gamut)
• Want to give content creators control over all of this
– Take creative control of the TV, not rely on TV manufacturers
18. HDR TV support – simple approach
[Diagram: Post (FP16) → Tonemap → sRGB → Grading LUT → UI → Reverse Tonemap → HDR TV]
• Reverse tonemap curve to extract original HDR data
– Cheapest possible option
• Recall that tonemaps have a shoulder
– Reverse the shoulder
– Recover the original HDR data
– Profit?
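A sketch of the idea with the simple Reinhard operator rather than any of Frostbite's actual curves (chosen because its shoulder has a closed-form inverse); note how the inverse blows up as the input approaches 1.0, which is exactly where the precision restrictions on the next slide bite:

```cpp
#include <algorithm>

// Forward shoulder: y = x / (1 + x), maps [0, inf) onto [0, 1).
float reinhard(float x) { return x / (1.0f + x); }

// Inverse shoulder: x = y / (1 - y). Guard the singularity at y == 1;
// an 8-bit, clipped SDR signal can only recover a limited range here.
float inverseReinhard(float y) {
    y = std::min(y, 0.999f);
    return y / (1.0f - y);
}
```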
19. HDR TV support – simple approach
[Diagram: Post (FP16) → Tonemap → sRGB → Grading LUT → UI → Reverse Tonemap → HDR TV]
• Plenty of restrictions
– 8bit not enough precision to reverse
– Multiple different tonemap curves, all need to be reversible
– A limited range can be recovered
– Order of operations not correct
• Grading and UI are drawn after tonemapping
• Reversing tonemap later incorrectly reverses these
20. HDR TV support – simple approach
[Diagram: Post (FP16) → Tonemap → sRGB → Grading LUT → UI → Reverse Tonemap → HDR TV]
• Can we make it ‘good enough’?
– Tweak tonemap curves (LUT distribution function)
• Capture more range
• Compute analytical reverse mappings
– Promote render targets and grades to 10bit
• Re-author legacy 8bit grades
– Ask teams to use mild color grading
– Scale UI to avoid shoulder region
21. Frostbite approach – clean sheet
• Transform HDR to LUT space
• LUT based grading
• Draw UI offscreen
– Premultiplied alpha essential
• Composite UI
• Tonemap & encode for target display
[Diagram: Post (FP16) → To LUT space → Grading LUT → Decode HDR → Comp UI → Tonemap and Encode → SDR / HDR10 / Dolby Vision / ?]
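For illustration, a minimal sketch (illustrative names, linear light assumed) of why premultiplied alpha matters here: the offscreen UI target can accumulate any number of blended draws and still composite over the HDR scene with a single equation:

```cpp
struct RGB  { float r, g, b; };
struct RGBA { float r, g, b, a; }; // premultiplied: rgb already scaled by a

// Composite a premultiplied-alpha UI pixel over the HDR scene color.
RGB compositeUi(RGB scene, RGBA ui) {
    float k = 1.0f - ui.a;
    return { ui.r + scene.r * k,
             ui.g + scene.g * k,
             ui.b + scene.b * k };
}
```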
22. Frostbite approach – clean sheet
• Why do this?
– Improve workflow
– Single grade for all TV variants
– If redoing grades, do them well
• And don’t restrict grading
– UI tech is complex and customized
– Wanted a full HDR implementation
• ‘Future proof’ to look better on newer TVs
[Diagram: Post (FP16) → To LUT space → Grading LUT → Decode HDR → Comp UI → Tonemap and Encode → SDR / HDR10 / Dolby Vision / ?]
23. HDR grading
• HDR friendly distribution function
– Went through a few ideas (log, S-Log, LogC) etc
– Ended up using ST.2084 or “PQ” (Perceptual Quantiser)
– PQ ensures LUT entries are perceptually spaced, gives great control
• 33x33x33 LUT lookup (10bit UNORM runtime format)
– Industry standard size allows use of ‘standard’ grading tools
• Single distribution function gives other gains
– Anyone can make a “look” LUT and share it across EA
– Build a look library
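For reference, a minimal sketch of the PQ (ST.2084) encode/decode pair, using the constants from the standard; the encoded 0-1 signal is what indexes the 33x33x33 grading LUT:

```cpp
#include <cmath>

namespace pq {
    const float m1 = 2610.0f / 16384.0f;         // 0.1593017578125
    const float m2 = 2523.0f / 4096.0f * 128.0f; // 78.84375
    const float c1 = 3424.0f / 4096.0f;          // 0.8359375
    const float c2 = 2413.0f / 4096.0f * 32.0f;  // 18.8515625
    const float c3 = 2392.0f / 4096.0f * 32.0f;  // 18.6875

    // Linear luminance in nits -> PQ-encoded [0,1] signal.
    float encode(float nits) {
        float y = std::pow(nits / 10000.0f, m1);
        return std::pow((c1 + c2 * y) / (1.0f + c3 * y), m2);
    }

    // PQ-encoded [0,1] signal -> linear luminance in nits.
    float decode(float signal) {
        float e = std::pow(signal, 1.0f / m2);
        float y = std::fmax(e - c1, 0.0f) / (c2 - c3 * e);
        return 10000.0f * std::pow(y, 1.0f / m1);
    }
}
```

For example, pq::encode(100.0f) is roughly 0.51, so the whole SDR range occupies about half of the LUT's perceptually spaced entries.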
24. HDR grading
• Create grades using DaVinci Resolve
– Industry standard grading tool
– Build view LUTs to match Frostbite tonemapping (WYSIWYG workflow)
• Why not write our own in engine?
– No need, time or desire to reinvent the wheel & maintain it
– Reduces friction hiring experienced graders
– Free version of Resolve does enough for many
• “Resolve Live” is a great workflow improvement
– Must buy a capture/monitor card though … so not free
25. Tone mapping → Display mapping
• The HDR version of the game is the reference version
• We tonemap at the last stage of the pipeline
• The tonemap is different depending on the attached display
– Aggressive tonemap on SDR
– Less aggressive in HDR but still varies depending on TV
– No tonemap for Dolby Vision
– Each TV gets a subtly different version tuned for each case
– Not just curves, exposure too
• Decided to use the term Display Mapping instead
26. Tone mapping → Display mapping
• Main challenges of display mapping
– Scale across wide range of devices
– Achieve a similar look regardless of TV
• Desirable properties
– No toe
– No contrast change
– No hue shifts
– No built in film look
• Build as neutral a display map as we could
[Diagram: display map targets – SDR / HDR10 / Dolby Vision / ?]
30. Display mapping
• How?
– Work in luma/chroma rather than RGB
• Apply shoulder to luma only
• Progressively desaturate chroma depending on shoulder
– Use a ‘new’ working space (ICtCp)
• Developed by Dolby for HDR
• Chroma follows lines of perceptually constant hue
– Tuned by eye on a lot of different images/games
• Not very “PBR” but this is about perception not maths
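A minimal sketch of the shoulder-plus-desaturation idea, operating on a color assumed to be already converted to ICtCp (the conversion itself is a pair of 3x3 matrices plus the PQ curve, per BT.2100, and is omitted here); the curve shapes are illustrative, not Frostbite's eye-tuned ones:

```cpp
struct ICtCp { float i, ct, cp; }; // intensity plus two chroma axes (BT.2100)

// Roll intensity off toward the display peak above a pivot, then desaturate
// chroma by the same compression factor so bright colors converge to white
// along lines of perceptually constant hue.
ICtCp displayMap(ICtCp c, float pivot, float peak) {
    float i = c.i;
    if (i > pivot) {
        float over = i - pivot; // amount above the pivot
        i = pivot + (peak - pivot) * over / (over + (peak - pivot)); // asymptotic shoulder
    }
    float desat = (c.i > 0.0f) ? i / c.i : 1.0f; // 1.0 below the shoulder, <1 inside it
    return { i, c.ct * desat, c.cp * desat };
}
```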
34. Plain sailing, then?
• Recall that SDR spec is 0.1-100 nits
– And TVs are more capable and over-brighten the image …
– … so 100 nits will really be more like 200-400
• HDR TVs (by and large) do what they are told
– So 100 nits is 100 nits …
– … and HDR looks darker and “worse” than SDR!
• We treat HDR as reference and expose for that
– SDR display mapper under-exposes the SDR version
– So that when (incorrectly) over-brightened it looks correct
35. OK, plain sailing now, then?
• No – of course not
– Display mapper is a hue preserving shoulder
– This faithfully and neutrally reproduces content
– Changes the look of existing/legacy assets
– Some assets authored to leverage legacy hue shifts
– These assets no longer look good
37. Re-author VFX
• Fire effects changing was most common complaint
– VFX authored to leverage hue shifts in legacy tonemapper
– Looked OK in SDR but not in HDR
• Use blackbody simulation
– Author effect as temperature
– Build colour ramp from blackbody radiation
– Index ramp by temperature
– Tweak ramp to suit
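A crude sketch of building such a ramp by sampling Planck's law at one representative wavelength per channel; a production version would integrate the spectrum against proper color-matching functions (and keep absolute intensity for the emissive strength), and the wavelengths and temperature range here are illustrative:

```cpp
#include <cmath>

struct RGB { float r, g, b; };

// Spectral radiance of a blackbody at a given wavelength (m) and temperature (K).
double planck(double wavelength, double kelvin) {
    const double h = 6.62607015e-34; // Planck constant
    const double c = 2.99792458e8;   // speed of light
    const double k = 1.380649e-23;   // Boltzmann constant
    return (2.0 * h * c * c) / std::pow(wavelength, 5.0)
         / (std::exp(h * c / (wavelength * k * kelvin)) - 1.0);
}

// Sample one wavelength per channel and normalize by the largest channel,
// discarding absolute intensity.
RGB blackbodyColor(double kelvin) {
    double r = planck(610e-9, kelvin);
    double g = planck(550e-9, kelvin);
    double b = planck(465e-9, kelvin);
    double m = std::fmax(r, std::fmax(g, b));
    return { float(r / m), float(g / m), float(b / m) };
}

// Build the ramp once, then index it by particle temperature:
//   ramp[i] = blackbodyColor(1000.0 + 5000.0 * i / (rampSize - 1));
```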
40. So … plain sailing now?
• Er, no …
– Hue preserving shoulder not always desirable
– Prevents highly saturated very bright visuals … effects (again)
– Prevents matching the look of certain films
• Working with DICE effects team
– Still iterating on display mapper
• Re-introduce hue shifts
44. Re-introduce hue shifts
• We’re not done
– But getting there
• Games shipping this year will have different implementations
– Some prefer hue preserving
– Some prefer hue shifts
– Likely to offer content-driven mixing
• Easy to do
– Have graded an [HDR] reference implementation
– Simply tweak to suit different displays
46. What about ACES?
• Academy Color Encoding System
• Standardised color management for film CGI
• Defines a processing pipeline
• Includes “look” and display mapping
47. What about ACES?
[Diagram: Input → IDT → LMT → RRT → ODT → TV]
• IDT: Input Device Transform
• LMT: Look Modification Transform
• RRT: Reference Rendering Transform
• ODT: Output Device Transform (varies per output)
48. What about ACES?
• Why didn’t we use it as-is?
– We started work in 2014, ACES was only just getting going
– Early versions required FP16, not suited to some texture formats
– Was not convinced by the “filmic” RRT
• But we agreed with the principles so we used it in concept
– Order of operations
– Used a wide ‘master’ working space
– LMT = grading
– ODT = display mapping
49. What about ACES?
• Should I use it?
– Yes absolutely look at it!
– Suggest investigating ACEScc and ACEScg
• Will Frostbite use it?
– Quite possibly yes, in future
– Will continue to investigate
– Shared principles & defined spaces will ease transition
– Very likely to adopt ACES color management for assets
59. Color grading LUT accuracy
• Decorrelated LUTs improve precision issues for display mapping
– Align luma with a single axis rather than multiple axes
– Piecewise Linear Approximation in 1D not 3D
– Great!
• But we’re baking display mapping on top of existing RGB LUTs
– Concerned that different space would cause problems
• Plotted LUT access patterns in different spaces
– Measure percentage of LUT touched in each space
– Determine if we would affect precision of color grading …
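A sketch of that measurement: map every pixel into LUT coordinates under a candidate encoding and count the fraction of the 33^3 cells grading would actually touch (the encode callback stands in for whichever space is being tested):

```cpp
#include <algorithm>
#include <array>
#include <unordered_set>
#include <vector>

// Fraction of a 33x33x33 LUT touched by an image, given an encoding that
// maps a linear color to three [0,1] LUT coordinates.
template <typename EncodeFn>
double lutCoverage(const std::vector<std::array<float, 3>>& pixels, EncodeFn encode) {
    const int N = 33;
    std::unordered_set<int> touched;
    for (const auto& p : pixels) {
        std::array<float, 3> e = encode(p);
        int x = std::min(int(e[0] * N), N - 1);
        int y = std::min(int(e[1] * N), N - 1);
        int z = std::min(int(e[2] * N), N - 1);
        touched.insert((z * N + y) * N + x);
    }
    return double(touched.size()) / double(N * N * N);
}
```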
63. Performance
• Performance at a premium
– Increased focus on 60hz
– Increased focus on resolution
– Cannot afford to give up visual quality to pay for HDR
• New path entirely replaces legacy path
– Requested to achieve performance parity with legacy path
64. Performance
• Performance of legacy path
– At the end of the frame we already do a high order resample
• Dual pass (V/H)
• Resample vertically to intermediate (ESRAM): 0.17ms (720 -> 1080)
• Resample horizontally to backbuffer (DRAM): 0.26ms (1280 –> 1920)
– Draw UI on top
• Total: 0.43ms + UI
– But UI costs a lot since it’s drawn to DRAM not ESRAM
– NOTE: Numbers all from XBox One
65. Version 1: Naïve implementation
• Render UI to offscreen RGBA8 target at backbuffer resolution
– Must clear UI target first
+0.25ms
• Modify second pass of dual-pass resample
– Resample HDR & convert PQ to linear
– Load UI target & convert sRGB to linear
– Composite HDR & UI
– Encode to sRGB
+0.45ms
• Total: ~1.1ms
66. Iterate: Low Hanging Fruit
• Shader is dominated by ALU
• UI render target is UNORM but contains sRGB values
– Alias as sRGB for readback for free conversion
• PQ to Linear expensive
– Trade ALU for texture & use 1D LUT
• Hitting bandwidth limits on Xbox
– Put UI render target in ESRAM
– Clear costs are halved, UI renders 2x faster
• Total: ~0.8ms
67. Iterate: Compute resample
• Use single-pass CS for resample
– Resample vertically to groupshared
– Resample horizontally from groupshared
• Optimisations (some GCN specific)
– 64x1 thread layout (linear tile mode backbuffers)
– Precomputed kernel weights in StructuredBuffers
– SMEM/SALU to load & extract vertical weights
• Total ~0.7ms
68. Iterate: CMASK-aware composite
• GCN hardware maintains “CMASK” metadata for written pixels
• Stores whether any block of 4x4 pixels was rendered to
• Used in “Fast Clear Eliminate” to write clear color to unwritten pixels
69. Iterate: CMASK-aware composite
• We can read this metadata
– Remove the Fast Clear Eliminate (FCE)
– Read CMASK in composite pass
– Skip reading and compositing of unwritten UI pixels
• Not quite that simple
– CMASK tiles are tiled & packed
– Software de-tiling and unpacking needed
– Not cheap
70. Iterate: CMASK-aware composite
• Transcode CMASK via compute to composite-friendly format
– 1 bit per 4x4 pixels
– 32 bits per 128x4 pixels
– Adds a cost of 0.02ms but much cheaper than FCE
• Composite pass (sketch below)
– Load 32bit bitmask via SMEM (free)
– Unpack bit corresponding to current pixel (cheap)
– If bit is zero, skip entire UI load & composite operation
• Total: 0.5ms
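A sketch of the bitmask test in the composite pass, with assumptions: g_uiCoverage is the transcoded buffer (one bit per 4x4 block, each 32-bit word covering a 128x4 pixel strip) and the strip addressing is illustrative; on GCN the word load lands in the scalar unit when a whole wave shares it.

Buffer<uint> g_uiCoverage; // transcoded from CMASK by the compute pass

bool uiBlockWritten(uint2 pixel, uint wordsPerRow)
{
    uint2 block = pixel / 4;                          // 4x4 pixel block
    uint word = block.y * wordsPerRow + block.x / 32; // 128x4 pixel strip
    uint bit = block.x & 31;
    return ((g_uiCoverage[word] >> bit) & 1) != 0;
}

// In the composite pass:
//   if (!uiBlockWritten(pixel, wordsPerRow))
//       return hdrColor; // skip the UI load & composite entirely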
71. HDR10 additional costs
• Additional ALU work needed
– Rotate primaries to 2020 (matrix below)
– Encode to PQ (slightly more ALU than sRGB encode)
– Total: ~0.7ms (0.2ms more)
• Only runs on Xbox One S (and PS4)
– Xbox One S GPU is ~7% faster than XB1 (1.1ms extra at 60 Hz)
– 0.2ms HDR overhead is tiny. Bulk of XB1S perf goes to the game
• PS4 has more ALU than Xbox so we don’t worry about it
– Same resolution (1080p) so PS4 is faster like-for-like in this case
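For reference, the primaries rotation is the standard Rec.709-to-Rec.2020 matrix (from BT.2087), applied in linear light before the PQ encode; a sketch of the PQ functions themselves appears with the supporting functions at the end.

// Rec.709 -> Rec.2020 primaries, linear light (standard BT.2087 matrix).
float3 rec709To2020(float3 c)
{
    float3x3 mat = float3x3(
        0.6274, 0.3293, 0.0433,
        0.0691, 0.9195, 0.0114,
        0.0164, 0.0880, 0.8956
    );
    return mul(mat, c);
}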
73. HDR standards
• Two primary standards
– Dolby Vision
– HDR10
• Frostbite can support both * & supporting more is relatively easy
74. Dolby Vision
• Pros
– 12-bit signal, HDMI 1.4b compatible
– No need to write a display mapper
– Standardisation across displays (Dolby-tuned display mapper)
– Good results from low-end panels
• Cons
– Metadata generation and framebuffer encoding add a cost
– Framebuffer encoding prevents overlays blending on top
– Not supported by many 2016 TVs
75. HDR10
• Pros
– More widespread at present
– No custom framebuffer encoding; overlays ‘work’
– Software display mapping can be cheap
• Cons
– No display mapping standardization across manufacturers
– Game should do its own display mapping to help this
– Only a 10-bit signal
76. Commonalities
• Share an EOTF (or ‘gamma curve’)
• Share min, max & mastering luma
• Share color gamut
• Same ‘master’ content works on both
• One display map target for each
77. Platform support
• PS4 & PS4 Pro: HDR10
• Xbox One S: HDR10
• PC: HDR10, Dolby Vision
– Requires GPU vendor API to handle HDMI metadata
– DX11: Exclusive fullscreen “just works”
– DX12: Non-exclusive fullscreen
• Desktop compositor can scale or add overlays
• Can cause issues with Dolby Vision
• Working with various parties to improve this
78. Don’t forget SDR TVs
• Huge number of SDR TVs
– Majority of market today
• SDR version needs to look great
– Content is mastered in HDR
– HDR is reference version
– We own the display mapping and it runs at the final stage
– Tune display mapper to look great in SDR
• Play to the fact that a TV will over-brighten your image
80. HDR video
• High bit depth video
– Decode performance overheads
– File size and streaming overheads
• Marketing materials
– Multiple versions of same video
• Wide Color Gamut support needed
81. Wide gamut rendering
• Lots to do
– Expand runtime gamut
– Add gamut metadata on every asset
– Add color management to editors
– Preserve metadata between DCCs
• Where to start
– Work from the TV backwards
– Convert color grading to wide gamut
84. A note on gamut reduction
• Gamut expansion is trivial
– 3x3 matrix multiply in linear space
• Gamut reduction can produce out-of-gamut colors
– Negative numbers when clipped to 0 cause hue shifts
– Must map colors to target gamut before 3x3 matrix multiply
• Suggest investigating ICtCp as a working space
– Chroma scaling is perceptually hue linear
– Adjust saturation to fit target gamut (sketch below)
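A sketch of the idea, under assumptions: fitToGamut709() is hypothetical, reuses the Rec.709-based ICtCp helpers from the code slides, and uses a crude fixed-step search purely for illustration; a production version would use a wide-gamut ICtCp variant and a closed-form or binary-search fit.

float3 fitToGamut709(float3 col)
{
    float3 ictcp = RGBToICtCp(col);
    float scale = 1.0;
    float3 rgb = col;
    for (int i = 0; i < 8; ++i)
    {
        rgb = ICtCpToRGB(ictcp * float3(1.0, scale.xx));
        if (all(rgb >= 0.0))   // no negative channels -> inside target gamut
            break;
        scale *= 0.9;          // pull chroma toward the neutral (luma) axis
    }
    return rgb;
}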
85. Wrap up
• Ensure your colors are in the assets
– Don’t rely on tonemaps or clip points to change hue
• Master your game in HDR
– Move your tonemap as late in the pipeline as possible
– Vary the tonemap for each display
• Consider decorrelated spaces
– RGB isn’t the only way to do things
• Aim to support all standards
– Please don’t forget about SDR
86. Thanks to
– Tomasz Stachowiak @h3r2tic
– Ben Gannon, Bill Hofmann, Spencer Hooks & Thadeus Beyer at Dolby
– Unnamed early adopter game teams
– The DICE VFX team
– The EA rendering community
– Mark Cerny
88. Code: display mapper
float3 applyHuePreservingShoulder(float3 col)
{
    float3 ictcp = RGBToICtCp(col);

    // Hue-preserving range compression requires desaturation in order to
    // achieve a natural look. We adaptively desaturate the input based on
    // its luminance.
    float saturationAmount = pow(smoothstep(1.0, 0.3, ictcp.x), 1.3);
    col = ICtCpToRGB(ictcp * float3(1, saturationAmount.xx));

    // Only compress luminance starting at a certain point. Dimmer inputs
    // are passed through without modification.
    float linearSegmentEnd = 0.25;

    // Hue-preserving mapping
    float maxCol = max(col.x, max(col.y, col.z));
    float mappedMax = rangeCompress(maxCol, linearSegmentEnd);
    float3 compressedHuePreserving = col * mappedMax / maxCol;

    // Non-hue preserving mapping
    float3 perChannelCompressed = rangeCompress(col, linearSegmentEnd);

    // Combine hue-preserving and non-hue-preserving colors. Absolute hue
    // preservation looks unnatural, as bright colors *appear* to have been
    // hue shifted. Actually doing some amount of hue shifting looks more
    // pleasing.
    col = lerp(perChannelCompressed, compressedHuePreserving, 0.6);

    float3 ictcpMapped = RGBToICtCp(col);

    // Smoothly ramp off saturation as brightness increases, but keep some
    // even for very bright input.
    float postCompressionSaturationBoost = 0.3 * smoothstep(1.0, 0.5, ictcp.x);

    // Re-introduce some hue from the pre-compression color. Something similar
    // could be accomplished by delaying the luma-dependent desaturation before
    // range compression. Doing it here however does a better job of preserving
    // perceptual luminance of highly saturated colors. Because in the
    // hue-preserving path we only range-compress the max channel, saturated
    // colors lose luminance. By desaturating them more aggressively first,
    // compressing, and then re-adding some saturation, we can preserve their
    // brightness to a greater extent.
    ictcpMapped.yz = lerp(ictcpMapped.yz,
                          ictcp.yz * ictcpMapped.x / max(1e-3, ictcp.x),
                          postCompressionSaturationBoost);

    col = ICtCpToRGB(ictcpMapped);
    return col;
}
89. Code: supporting functions
// RGB with sRGB/Rec.709 primaries to ICtCp
float3 RGBToICtCp(float3 col)
{
    col = RGBToXYZ(col);
    col = XYZToLMS(col);

    // 1.0f = 100 nits, 100.0f = 10k nits
    col = linearToPQ(max(0.0.xxx, col), 100.0);

    // Convert PQ-LMS into ICtCp. Note that the "S" channel is not used,
    // but overlap between the cone responses for long, medium, and short
    // wavelengths ensures that the corresponding part of the spectrum
    // contributes to luminance.
    float3x3 mat = float3x3(
        0.5000,  0.5000,  0.0000,
        1.6137, -3.3234,  1.7097,
        4.3780, -4.2455, -0.1325
    );
    return mul(mat, col);
}

float3 ICtCpToRGB(float3 col)
{
    float3x3 mat = float3x3(
        1.0,  0.00860514569398152,  0.11103560447547328,
        1.0, -0.00860514569398152, -0.11103560447547328,
        1.0,  0.56004885956263900, -0.32063747023212210
    );
    col = mul(mat, col);

    // 1.0f = 100 nits, 100.0f = 10k nits
    col = PQtoLinear(col, 100.0);
    col = LMSToXYZ(col);
    return XYZToRGB(col);
}
90. Code: supporting functions
// RGB with sRGB/Rec.709 primaries to CIE XYZ
float3 RGBToXYZ(float3 c)
{
    float3x3 mat = float3x3(
        0.4124564, 0.3575761, 0.1804375,
        0.2126729, 0.7151522, 0.0721750,
        0.0193339, 0.1191920, 0.9503041
    );
    return mul(mat, c);
}

float3 XYZToRGB(float3 c)
{
    float3x3 mat = float3x3(
         3.24045483602140870, -1.53713885010257510, -0.49853154686848090,
        -0.96926638987565370,  1.87601092884249100,  0.04155608234667354,
         0.05564341960421366, -0.20402585426769815,  1.05722516245792870
    );
    return mul(mat, c);
}

// Converts XYZ tristimulus values into cone responses for the three types
// of cones in the human visual system, matching long, medium, and short
// wavelengths. Note that there are many LMS color spaces; this one follows
// the ICtCp color space specification.
float3 XYZToLMS(float3 c)
{
    float3x3 mat = float3x3(
         0.3592, 0.6976, -0.0358,
        -0.1922, 1.1004,  0.0755,
         0.0070, 0.0749,  0.8434
    );
    return mul(mat, c);
}

float3 LMSToXYZ(float3 c)
{
    float3x3 mat = float3x3(
         2.07018005669561320, -1.32645687610302100,  0.206616006847855170,
         0.36498825003265756,  0.68046736285223520, -0.045421753075853236,
        -0.04959554223893212, -0.04942116118675749,  1.187995941732803400
    );
    return mul(mat, c);
}
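Three helpers referenced above are not shown on the slides: rangeCompress, linearToPQ and PQtoLinear. The sketches below are assumptions: the PQ pair follows the published ST.2084 constants with the same 1.0 == 100 nits scaling, while rangeCompress is one plausible shoulder (identity below the threshold, exponential approach to 1.0 above it), not necessarily Frostbite's exact curve.

// Assumed shoulder: identity below 'threshold', then an exponential
// approach to 1.0. Slope-continuous at the threshold.
float rangeCompress(float x, float threshold)
{
    if (x < threshold)
        return x;
    float range = 1.0 - threshold;
    return threshold + range * (1.0 - exp(-(x - threshold) / range));
}

// Per-channel overload used by the non-hue-preserving path.
float3 rangeCompress(float3 c, float threshold)
{
    return float3(rangeCompress(c.x, threshold),
                  rangeCompress(c.y, threshold),
                  rangeCompress(c.z, threshold));
}

// ST.2084 (PQ) constants.
static const float PQ_m1 = 0.1593017578125;
static const float PQ_m2 = 78.84375;
static const float PQ_c1 = 0.8359375;
static const float PQ_c2 = 18.8515625;
static const float PQ_c3 = 18.6875;

// maxPqValue rescales the range: with 1.0f == 100 nits, maxPqValue = 100
// maps 100.0f to the 10k nit top of the PQ curve.
float3 linearToPQ(float3 c, float maxPqValue)
{
    c /= maxPqValue;
    float3 cp = pow(abs(c), PQ_m1);
    return pow((PQ_c1 + PQ_c2 * cp) / (1.0 + PQ_c3 * cp), PQ_m2);
}

float3 PQtoLinear(float3 c, float maxPqValue)
{
    float3 cp = pow(abs(c), 1.0 / PQ_m2);
    return pow(max(cp - PQ_c1, 0.0) / (PQ_c2 - PQ_c3 * cp), 1.0 / PQ_m1) * maxPqValue;
}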
Basic explanation of film response to light.
Non-linear, captures wide range.
Looks nice.
Image credit: http://www.naturephotographers.net/articles0303/tw0303-4.gif
From http://www.naturephotographers.net/articles0303/tw0303-1.html
Games are realtime CG with no film to develop
But we do need to broadcast our games on TVs
These TVs have a limited range not dissimilar to film/slide projectors
So we have similar requirements to perform range reduction
Tonemapping is just range reduction, done in such a way to preserve detail outside the display range.
Follow a similar process.
First we would expose the scene such that the most interesting parts are in the mid tones.
Then tonemap the rest of the range to best fit into the upper/lower regions of the SDR.
Going to use this example image for our tests, screenshot from internal test level built by Joacim Lunde
-Good contrast
-Wide dynamic range
-Some saturated colours
-Clearly suffering from lack of tonemapping in several parts (lights, hologram)
Example of four tonemappers:
Two simple (Reinhard & Exponential). Primarily a shoulder (affects the brightest parts the most, the darkest parts the least).
Two filmic (Hejl/Burgess-Dawson & Hable). Toe and shoulder as well as contrast changes.
X axis = brightness in stops (EVs).
Y axis = tonemapped value converted to sRGB.
This one is an analytical fit of a tweaked approximation of a Kodak characteristic curve.
Strong toe adds contrast to darks and mids.
X axis = brightness in stops (EVs).
Y axis = tonemapped value converted to sRGB.
This is Hable's operator with default settings.
Note that this operator is highly configurable and can be tweaked to suit many needs.
X axis = brightness in stops (EVs).
Y axis = tonemapped value converted to sRGB.
Color grading is the act of applying a characteristic look to the scene
Still has its background in film, for example one could choose different film stock to achieve a look
Or during film processing (developing the negative) one could perhaps change the processing
e.g. bleach bypass skips the bleaching step, which produces a higher-contrast ‘silvery’ look.
With CG, we can do a lot more than this
-Change white balance or exposure
-Arbitrary color replacement or highlighting (Schindler's List)
-Orange and Teal (Michael Bay)
-”Fix it in post”
Image credit: http://www.drodd.com/images14/schindlers-list3.jpg
Image credit: https://pix-media.priceonomics-media.com/blog/892/trans.jpg
Graphs:
X axis = brightness in stops (EVs).
Y axis = tonemapped value converted to sRGB.
Image credit: http://www.naturephotographers.net/articles0303/tw0303-4.gif
From http://www.naturephotographers.net/articles0303/tw0303-1.html
Question is – can this be made to work?
Yes, in many cases it can.
Tweak tonemap to capture ‘enough’ range to reverse, and compute reverse function.
Increase precision of render targets and grades to 10bit.
Use less extreme color grading, work within the limits.
Scale the UI during rendering to avoid the shoulder region that will be reversed.
Sucker Punch (team behind Infamous: Second Son) has done some nice blog posts which cover this approach. See
http://www.glowybits.com/blog/2016/12/21/ifl_iss_hdr_1/
http://www.glowybits.com/blog/2017/01/04/ifl_iss_hdr_2/
Our approach was to go for a new, clean sheet implementation.
LUT space = a single distribution function optimized for grading once in a master HDR space, regardless of connected TV or output tonemap.
Move all “look” workflow into a single place.
Grade once in a master space, regardless of connected TV/output dynamic range.
Remove UI from the equation, since lots of UI exists that would look different/incorrect if we changed the way it was drawn.
Regarding “future proof”, the current HDR specification is bigger (in terms of luminance range and color gamut) than any display can reproduce today.
By targeting the full specification, our games will look better when played on better TVs in future, which reproduce more of the range we use.
Will explain the HDR/Dolby Vision differences in later slides
Going to use this example image for our tests, screenshot from internal test level built by Joacim Lunde
-Good contrast
-Wide dynamic range
-Clearly suffering from lack of tonemapping in several parts (lights, hologram)
-This shot is the image naively converted to sRGB
Going to use this example image for our tests, screenshot from internal test level built by Joacim Lunde
-Good contrast
-Wide dynamic range
-Clearly suffering from lack of tonemapping in several parts (lights, hologram)
-This shot is the display mapped image
Note lack of hue shifts in the shoulder region.
No toe, no contrast or “look” just very neutral.
Rest of image is unchanged.
Comparison with our Filmic (modified Hable) and our Display Mapper
Filmic is 1D and applied to each channel independently, has hue shifts.
This can all be tuned, but ultimately any non-linear 1D operator applied to color channels will hue shift.
Want to make a bright blue sky? It will become cyan.
Display mapper is highly neutral, hue-preserving, does a good job of reproducing artistic intent.
Comparison with our Filmic (modified Hable) and our Display Mapper
Filmic is 1D and applied to each channel independently, has hue shifts.
This can all be tuned, but ultimately any non-linear 1D operator applied to color channels will hue shift.
Watch out for desaturated midtones (in our case, at least).
Want to make a bright orange sunset? It will become yellow.
Display mapper is highly neutral, hue-preserving, does a good job of reproducing artistic intent.
Even though this was planned & we worked with several game teams to figure this out and roll it out, there were a few gotchas along the way.
As mentioned at the start the SDR version gets brightened up by the TV, sometimes massively.
We can play to this strength and under-expose the SDR version so, when brightened by the TV, it looks OK again.
No, this is not correct, and it does require some guesswork as to how bright the TV will be.
Artists can configure this SDR Peak value so when working on calibrated monitors it is accurate.
Assume SDR TV is 200 nits as a default, considering letting people tune this value at home as well.
Many assets were authored to leverage the hue shifts that come from clipping color channels.
Any color that contains multiple primaries will hue shift if it’s brightened and one channel clips but the other doesn’t.
The ratios between the color channels fundamentally change at this point, changing the color itself.
If you leverage this hue shift during asset authoring, then these effects no longer look correct when you remove the hue shift.
This is a contrived example showing a fireball.
It’s an effect kindly shared by DICE but modified by me to highlight the issue in question.
This fireball is authored using a single hue … let's call it ‘burnt orange’.
Burnt orange has little to no blue component, but does have a lot of red and some green.
When over-exposed and fed into a tonemap with a strong shoulder (or simply hitting the color channel clipping point), the rate of change of red slows quickly and then clips, but green does not.
As one keeps over-exposing, red moves slowly (or is stationary, if clipped) while green keeps increasing, fundamentally changing the color from orange to yellow and creating a multi-hue effect.
When using a hue preserving shoulder, the authored hue is preserved and it looks ‘correct’ but that is now completely wrong.
This isn’t just the case for hue preserving shoulders though. If the original image were displayed via HDR broadcast standards which have much higher color channel clipping points, it would look wrong in HDR as well because the red channel simply wouldn’t clip as quickly. So the effect would appear bright orange.
So the hue preserving shoulder in SDR actually highlights trouble in the content that would be wrong in HDR.
This is quite a nice way to author HDR content (and find issues with existing content) without needing an HDR display.
Main point of this – any bright effects will suffer.
Many are authored to leverage the hue shifts that come from clipping color channels (red clips but green doesn’t -> effect increasingly becomes yellow as green increases).
These effects do not work well in HDR, as the clip point differs per TV (see notes on previous slide).
Intent must be to author to HDR as the reference, which means the hue shifts must be present in the artwork, not an artefact of the mapper.
Blackbody ‘simulation’ (temperature-based hue lookup) to the rescue.
Iterate the display mapper to preserve saturation of medium/bright effects on SDR devices.
Image credit: https://en.wikipedia.org/wiki/Black-body_radiation
Old asset with display mapping ON (no hue shifts) vs Display mapping ON and hue shifts are correctly present in source asset
Old asset with display mapping OFF (hue shifts come from channels clipping) vs Display mapping ON and hue shifts are correctly present in source asset
A hue-preserving operator is also troublesome if you are trying to match a “look” that is close to a certain film stock.
Must be able to do that.
Have started to work on re-adding hue shifts.
We’re not done yet, are still actively working on this.
Blackbody fire contained all the correct hues … (see smaller circle)
… but these were getting desaturated when very bright (see larger circle).
Original display mapper on the left, small hue shifts (but massive improvement) on the right.
Hue preserving on the left.
Re-introduction of some hue shifts on the right.
No display mapping (tonemap disabled) on the left, hue-shifting display mapper on the right.
Even with some hue shifts, still dramatically better than no display map.
As mentioned, still not done.
Likely to offer a configurable implementation, so that each game can dial in the amount of hue shifting.
Really easy to do though (no re-grading etc) due to the wide HDR working space and display mapping right at the end of the frame.
High level summary of the ACES pipeline.
IDT (Input Device Transform) transforms a known input space to the ACES working space.
LMT (Look Modification Transform) is where one applies the grade & “look”
RRT (Reference Rendering Transform) is essentially a filmic tonemap (an S curve applying a toe and shoulder)
ODT (Output Device Transform) is a per-display output transform to ensure consistent results on that display.
Due to the fundamentally aligned approaches, there is nothing stopping us from changing from our custom approach to ACES (ACEScc/cg in particular) in future, and all grades can be automatically upgraded/converted since both spaces are known and published. We can and will re-evaluate this in due course.
Now we look at some performance/quality tradeoffs.
For performance reasons we wanted to dynamically inject the display mapping into the same LUT as the grading.
This has some implications that we investigate here.
For performance reasons we wanted to dynamically inject the display mapping into the same LUT as the grading.
Combining the display mapping at the end of the grading is the same as doing it at the start of the composition shader. It’s just “free” (aside from the cost of injecting it into the grade, which is very cheap).
However, even with PQ distribution we have precision issues which masquerade as Mach banding.
Top image is analytical display mapper, middle is baked into the RGB LUT, bottom has levels adjusted to highlight the differences.
Note: screenshot is from a level purchased from Evermotion, not authored by Frostbite.
Show linear filtering turns curves into piecewise linear approximations of curves.
Looks OK in 1D, where only pairs of values contribute.
But our LUTs are volumes (3D).
The luma axis (greyscale from black to white) is a diagonal, so exactly halfway between texels, 8 neighbors contribute equally.
Image credit: https://en.wikipedia.org/wiki/Piecewise_linear_function
Image credit: https://engineering.purdue.edu/~abe305/HTMLS/rgbspace.htm
Using a higher order filter should improve things.
E.g. move from trilinear to tricubic.
But it is expensive (an additional dimension over 2D).
Early tests of high order filtering doubled the costs of our main post process pass (which does a lot more than just grading) so it was immediately prohibitive.
Image credit: https://en.wikipedia.org/wiki/Bicubic_interpolation
But can use different spaces for the LUT. We don’t *have* to index by RGB …
In fact the display mapper is working in luma/chroma, so let’s try that.
Here’s RGB again as a reminder.
YCgCo – fastest decorrelated space (luma and chroma are separate).
Major improvement on RGB.
YCbCr – better than YCgCo
ICtCp – not as good as YCbCr but comparable to YCgCo.
It should be best since it natively matches our display mapping space, but reasons for this will be explained later. Relates to color gamut.
Compare RGB to the three decorrelated spaces.
Decorrelated are all better in terms of luma than RGB.
Still using linear filtering, but the luma and chroma axes are aligned with the LUT axes now.
Luma-only ramps become 1D and touch fewer neighbours so the PLA artefacts are reduced.
So this is a major improvement and allows use of linear filtering without obvious artefacts.
But, will it impact the grades themselves, which are authored in RGB?
Again, we look at this test image.
By plotting each pixel into each LUT and tracking the coverage, we can easily compute the LUT volume used, as a percentage of the total number of texels that exist in the LUT.
Basically, more texels used = more precision for grading.
Ah.
Decorrelated spaces fundamentally touch fewer texels, which is likely to have an impact on color grading accuracy.
ICtCp is better than either YCC format, but still not as accurate as RGB.
So today we have stuck with RGB (the visual artefacts are minimal, after all) but we are continuing to investigate decorrelated spaces for the future.
Perhaps we will transform RGB into a decorrelated space in the offline pipeline, using the most appropriate high-order filter.
Performance parity was needed with the legacy path in order to achieve quick adoption. Performance timings from Xbox One as that was the slowest platform.
Legacy end-of-frame path:
Separable resample from sRGB render target to 1080p backbuffer, via ESRAM transient on Xbox: 0.43ms (Xbox is the slowest platform, so is the one we use for timings).
UI draw on top of backbuffer: Arbitrary, typically 0.2-0.3ms.
ESRAM used for both planes (intermediate & UI). Manual ESRAM management *
Also roughly doubles speed of UI rendering, reaping benefits not shown here.
* See “FrameGraph: Extensible Rendering Architecture” talk for how we are moving to automatic ESRAM management
Not super relevant to HDR, this was simply an enabling change.
Moving from two passes to one reduced work and enabled significantly better scheduling.
We use a thread layout optimized for the consoles and also for the linear tile modes used for swapchain/scanout targets.
Use of the 1D vertical thread layout allowed us to use the scalar pipeline present in GCN to obtain some of the filter weights for free.
Actually GCN doesn’t always clear all pixels
Maintains metadata for which pixels were written.
Before readback, performs a “Fast Clear Eliminate” (FCE) pass to write back the clear color only to unwritten pixels.
Example CMASK screenshot shows typical in-game UI coverage. Red = pixels written. Black = unwritten pixels that will need clearing.
No FCE, instead use a custom CMASK transcode from sub-tiles to a 32x1 bitmask designed to be loaded via the scalar pipeline ‘for free’ on the final resample/merge/displaymap pass.
All timings so far have been from the SDR version on base Xbox One, since base Xbox One only supports SDR and SDR output will be the most common path for a while.
However, Xbox One S supports HDR10 and this incurs some additional costs to encode.
Xbox One S is faster though, so these overheads are well within the extra performance.
Future optimizations are still available to us.
UI load is cheap, move it to start of frame and look at alpha values.
If UI is opaque, skip expensive resample and merge operation.
On GCN we can use wave-wide operations to check that all threads are able to skip, which allows us to skip the first half of the resample that writes to groupshared and then syncs threads (sketch below).
Can also use async compute & overlap the merge with next frame, at the expense of a little latency.
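A sketch of that wave-wide early-out (SM6-style wave intrinsics; g_uiTarget, g_dest and encodeOutput() are illustrative names), excerpted from the top of a hypothetical composite kernel:

float4 ui = g_uiTarget.Load(int3(pixel, 0));
// If every lane's UI pixel is fully opaque, the resample result would be
// completely covered, so the whole wave can skip the groupshared half of
// the resample and the merge.
if (WaveActiveAllTrue(ui.a >= 1.0))
{
    g_dest[pixel] = encodeOutput(ui.rgb); // sRGB or PQ encode
    return;
}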
Two primary (but very similar) HDR standards; likely more are coming
Frostbite can and will target any standard that makes sense
* It is down to each game to negotiate licensing though so I can’t say anything about specific games
Our display mapping “scanout shader” allows easy plugin of any format, it’s just another encoding/mapping.
Image credit: https://s.aolcdn.com/hss/storage/midas/e94dc10a6894b46ee659c6136fd7040c/203213453/dolbyvision.jpg
Image credit: http://edge.alluremedia.com.au/m/g/2016/01/ultra_hd_premium_2.jpg
Essentially:
Dolby Vision uses a custom encoded framebuffer.
One must generate dynamic metadata (e.g. min/max/avg luminance) & send it to the TV every frame.
Dolby builds a custom display mapper for each panel to get the best from it, and achieves standardization of look across displays.
Can’t go into any more details here, suggest contacting Dolby if you are interested in supporting Dolby Vision.
https://www.dolby.com/us/en/technologies/dolby-vision/dolby-vision-white-paper.pdf
Game can look quite different on each HDR10 TV, due to lack of standardization across manufacturers.
Lots of high level commonalities … try to support both.
Huge number of SDR devices out there.
HDR must not be worse than SDR!
HDR is the reference in Frostbite; SDR is just an artefact of the display mapper.
Recall that SDR TVs scale the image up in terms of gamut and luminance.
No real control over this, though having HDR in the engine and being able to map to SDR helps get the best from each display.
This includes re-exposing the image as part of display mapping (under-expose it so that the SDR TV can re-brighten it again).
HDR movies need work.
Right now we use SDR movies but have a few tricks to extract pseudo-HDR data from them.
True HDR movies incur increased storage, streaming and runtime playback costs from high bit depth video.
But the main issue is one of needing wide color gamut support.
Image credit: http://www.bbc.co.uk/news/technology-37908975
Not “just” rendering – requires a collaboration with multiple parts of Frostbite (data pipelines, import/export, UI, movies, textures, shaders, timeline editors etc – anything with color data needs color management).
Specifically, challenge is related to maintaining and respecting the necessary gamut metadata.
First need to assign and manage gamut on every ‘colour’ asset (textures, colours in shader graphs or timelines, movies etc).
DCC packages may or may not support this; different approaches may be necessary to manage import/edit/export.
Likely to start by upgrading the engine from the “TV back” – first step will be to upgrade the color grading to wide gamut (likely 2020). ICtCp likely necessary for LUTs at this point (see next slides).
Runtime gamut reduction necessary; again we expect to use ICtCp for hue linear desaturation and fold it into the display mapper for free.
Image credit: http://www.acousticfrontiers.com/wp-content/uploads/2016/03/Color_Gamut.png
And we’re back here again.
This is a reminder about the LUT accuracy of different spaces, but specifically calling out that we’re currently working in the sRGB gamut.
In the future we want to support wider gamuts, so let’s test in 2020.
Aha. Not so good now.
RGB, YCgCo and YCbCr all nearly halve in accuracy/volume.
But ICtCp is natively wide gamut so stays the same, becoming more of a sensible choice now.
It has been an interesting journey, and we’re not done yet.
Hopefully the lessons we learnt, and this talk, can be of use to someone.
Special thanks and credit especially to Tomasz who is the author of our display mapper.
Contact me if you need anything:
afry@europe.ea.com
@TheFryster
Note: this function is expensive.
This is why we bake it down into a LUT (ideally the same LUT we use for grading, to make it free).
This is the hue-preserving version used for the screenshots in this talk; the full listing is on the code slides above. It’s very ad-hoc but hopefully interesting to play with.