ACEScg is a color encoding within the Academy Color Encoding System (ACES), developed by the Academy of Motion Picture Arts and Sciences to address the loss of color management standards during the motion picture industry's transition from film to digital. It preserves the full spectrum of visible HDR colors using RGB primaries while being more artist-friendly than the base ACES2065 encoding. ACEScg encompasses the gamuts of most cameras and displays, but uses primaries that are only slightly imaginary so that it can also cover standards such as Rec.2020 and P3. It also provides ways to handle the out-of-gamut and negative values that may occur.
ACEScg: A Common Color Encoding for Visual Effects Applications - DigiPro 2015
1.
ACEScg: A Common Color Encoding for Visual Effects Applications
Haarm-Pieter Duiker (Duiker Research)
Alexander Forsythe (The Academy)
Scott Dyer (The Academy)
Ray Feeney (RFX)
Will McCown (Consultant)
Jim Houston (Starwatcher Digital)
Andy Maltz (The Academy)
Doug Walker (Autodesk)
2. ACES Background
• The Academy Color Encoding System (ACES) is a free, open, device-independent color management and image interchange system
• Intended to address the loss of underlying standards in the film-to-digital transition
• Developed by hundreds of the industry’s top scientists, engineers, cinematographers, colorists, visual effects artists, and other motion picture creatives working together under the auspices of the Academy of Motion Picture Arts and Sciences
• ACES 1.0 is the first production-ready release of the system, the result of over 10 years of research, testing, and field trials
7. The ACES2065 encoding: wide gamut HDR
• Preserve the full spectrum of visible HDR colors using RGB primaries
• D60 white point can be transformed to hit D65, D50, DCI
• Provide a rigorous specification for encoding and storage of color data
8. The ACES2065 encoding: feedback
• Great for capturing the full range of camera and CG data
• Gamut too wide for practical use
• Not artist friendly
• Hard to interpret
• Too many imaginary colors in gamut
9. ACEScg
• Preserve the full spectrum of visible HDR colors using RGB primaries
• D60 white point can be transformed to hit D65, D50, DCI
• Provide a rigorous specification for encoding of color data
• And… more artist friendly
10. ACEScg: More artist friendly
• Color correction controls are closer to artists’ expectations and produce smoother transitions
• Fewer color-correction choices result in imaginary colors
• More efficient use of gamut
[Figure: the ACES colorwheel alongside the ACEScg colorwheel]
12. Camera encoding gamuts
[Figure: chromaticity plots of camera encoding gamuts: Alexa Wide Gamut, Canon Cinema Gamut, RED DRAGONcolor, RED DRAGONcolor2, RED REDcolor, RED REDcolor2, RED REDcolor3, Sony S-Gamut3, Sony S-Gamut3.Cine, Panasonic V-Gamut]
13. The multi-gamut world: camera encodings
• A lot of camera encoding gamuts exist
• Many use imaginary primaries
• ACEScg allows for the encoding of the majority of the visible spectrum
16. The multi-gamut world: display gamuts
• Multiple display gamuts
• More on the way with new displays
• ACEScg is a superset of the main existing standards
17. Concerns: Primaries don’t match the display
• Most productions deliver masters for multiple classes of displays, with multiple sets of primaries
18. Concerns: Imaginary Primaries
• Imaginary, but only just
• Many gamuts use imaginary primaries
• Needed to encompass Rec.2020 and P3
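To sanity-check that last claim, the open-source colour library cited in the references can convert the pure Rec.2020 primaries into ACEScg and confirm they land inside the gamut. A minimal sketch, assuming a recent colour release (the colourspace names 'ITU-R BT.2020' and 'ACEScg' follow current versions and may differ in older ones):

```python
import numpy as np
import colour  # colour-science, as cited in the references

# The three pure Rec.2020 primaries as linear RGB triplets.
rec2020_primaries = np.identity(3)

# Convert to ACEScg; a chromatic adaptation transform handles the
# Rec.2020 D65 white point versus the ACES D60 white point.
acescg = colour.RGB_to_RGB(rec2020_primaries,
                           colour.RGB_COLOURSPACES['ITU-R BT.2020'],
                           colour.RGB_COLOURSPACES['ACEScg'],
                           chromatic_adaptation_transform='Bradford')

# Because the AP1 primaries are slightly imaginary, every component
# should come out non-negative, i.e. Rec.2020 fits inside ACEScg.
print(acescg)
print((acescg >= -1e-6).all())
```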
19. Concerns: Negative Values
• Negative values come from two main sources:
  • Linearization transfer functions
  • Out-of-gamut values
• Ways to handle negatives (see the sketch below):
  • Lean on packages to not clip, and educate artists
  • Heuristics to lift low-end values
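The deck does not prescribe a specific heuristic, so the following is only an illustrative sketch of the "lift low-end values" idea, not an official ACES transform: desaturate a pixel toward its luminance just enough that no channel stays negative. The luminance weights are the Y row of the AP1 RGB-to-XYZ matrix; the function name and blending strategy are assumptions for illustration.

```python
import numpy as np

# Luminance weights for the ACEScg (AP1) primaries
# (the Y row of the AP1 RGB -> XYZ matrix).
AP1_LUMA = np.array([0.2722287, 0.6740818, 0.0536895])

def lift_negative_values(rgb):
    """Illustrative heuristic, not an official ACES transform: blend a
    pixel toward its luminance just enough to make every channel >= 0."""
    rgb = np.asarray(rgb, dtype=np.float64)
    m = rgb.min()
    if m >= 0.0:
        return rgb                      # already non-negative, untouched
    y = float(AP1_LUMA @ rgb)
    if y <= 0.0:
        return np.maximum(rgb, 0.0)     # degenerate pixel: just clip
    # Solve y + t * (c - y) = 0 for the most negative channel c = m.
    t = y / (y - m)
    return y + t * (rgb - y)

print(lift_negative_values([0.5, 0.2, -0.1]))  # smallest channel lifted to 0
```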
24. ACEScg
• An artist-friendly color encoding that preserves the full spectrum of visible HDR colors using RGB primaries
25. Thanks
Thanks to all the authors and contributors
Special thanks to
Nick Cannon
Thomas Mansencal / colour-science.org
Will McCown
Miaoqi Zhu
26. Questions
Sign up for email updates: oscars.org/aces
Twitter: @AcademyACES
Questions: acessupport@oscars.org
Editor's Notes
Cover Technical Goals as you talk through diagram
Define a path from camera-native data to scene-referred imagery
Process and store wide-gamut HDR color data
Display consistently across multiple devices
Provide a consistent basis for look authoring and application, on-set and in post
Maintain all the fidelity of the original source material in a common scene-referred color encoding. Archive that encoding to avoid marrying the master to a particular output display technology.
Gamut too wide:
Not artist friendly. Too easy for artists to create non-existent colors; “The RGB color correction controls all feel different.”
Hard to interpret. Wide-gamut workflows are not in wide use, so what is the data supposed to ‘look like’?
White point: it doesn’t match the white point of some common display targets like Rec.709, and artists may not understand how to handle that situation.
Too many non-existent colors are represented, which is an inefficient use of bits.
These goals and features match the ACES color encoding
Primaries are much closer to the spectral locus, but still encompass the main display targets
Because the primaries are closer to the spectral locus…
Note that the transitions between regions in the ACES colorwheel are more abrupt and there seem to be larger areas of redundant colors, with little variation
More efficient use of gamut for integer encodings like ACESproxy
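For reference, the ACESproxy 10-bit integer encoding mentioned here maps linear light onto code values 64 to 940 via a log2 curve. The sketch below quotes the constants of Academy spec S-2013-001 from memory, so verify against the spec before relying on them:

```python
import numpy as np

def float_to_acesproxy10(lin):
    """ACESproxy 10-bit encoding (constants per Academy S-2013-001,
    quoted from memory): linear light -> integer codes in [64, 940]."""
    lin = np.maximum(np.asarray(lin, dtype=np.float64), 2.0 ** -9.72)
    code = (np.log2(lin) + 2.5) * 50.0 + 425.0
    return np.clip(np.round(code), 64, 940).astype(np.int32)

print(float_to_acesproxy10(0.18))  # 18% grey -> code value 426
```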
Pointer’s gamut is (an approximation of) the gamut of real surface colors as seen by the human eye, based on the research by Michael R. Pointer (1980).
Because ACEScg is a floating-point encoding, it can represent the full visible spectrum
Question: Why do we need a standard working space? Can’t we just use Camera X’s primaries?
Not pictured: Gamuts from AJA, BlackMagic, GoPro, Apple
Lots of imaginary primaries
Reference: Michael Bay used 6 different ‘cinema’ cameras on the last Transformers
Question: Why do we need a standard working space? Can’t we just use Camera X’s primaries?
Why not just use Rec.2020? It is not a strict superset of P3 and Rec.709.
HDR displays, OLED, quantum dot
If you're only targeting one display, you could use the primaries for that display. Most films won't satisfy that constraint.
ACES and ACEScg are scene-referred standards.
Most displays don’t match the display targets… P3 and Rec.2020 are rarely actually covered by projectors or displays.
They're imaginary, but only just.
Lots of applications use them without issue
A very small portion of the gamut is dedicated to non-visible colors
Lots of other gamuts have imaginary primaries.
Ex. Alexa Wide Gamut, REDcolor2, …
If we didn’t have imaginary primaries, we wouldn’t be able to fill all of Rec.2020 if there is any desaturation in the rendering.
Source: Academy’s “Next Generation Cinema Test”
Pixels that go negative in ACEScg are only the most super saturated
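A quick way to verify that claim on real footage is to count how many pixels actually carry a negative channel after conversion to ACEScg. A small diagnostic sketch; the helper name and the H x W x 3 float array layout are assumptions, not an ACES tool:

```python
import numpy as np

def negative_pixel_fraction(img_acescg):
    """Fraction of pixels with at least one negative channel in a
    float H x W x 3 ACEScg image -- a diagnostic, not an ACES tool."""
    return float((img_acescg < 0.0).any(axis=-1).mean())
```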
References
Mansencal, T., Mauderer, M., & Parsons, M. (2015, May). Colour 0.3.5. doi:10.5281/zenodo.17370
Academy’s Next Generation Camera Test
The transforms and color spaces discussed here are implemented in the ACES 1.0 OpenColorIO configuration, linked to from the ACES site.
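As a pointer for readers, here is a minimal sketch of driving that OCIO configuration from Python. It uses the OCIO v1-era API that was current when this deck was written; the config path and color space names depend on the config you download, so treat the strings below as placeholders:

```python
import PyOpenColorIO as OCIO

# Load the ACES 1.0 OpenColorIO configuration (path is a placeholder).
config = OCIO.Config.CreateFromFile('aces_1.0/config.ocio')

# Build a processor from the archival encoding to the CG working space;
# these names follow the ACES 1.0 config's naming conventions.
processor = config.getProcessor('ACES - ACES2065-1', 'ACES - ACEScg')

pixel = [0.18, 0.18, 0.18]           # 18% grey in ACES2065-1
print(processor.applyRGB(pixel))     # the same color expressed in ACEScg
```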