The Importance of Terminology and sRGB Uncertainty - Notes - 0.5
1. The Importance of Terminology and sRGB Uncertainty
Notes - 0.5
colour-science.org
2. Foreword
This presentation is the organised and formatted embodiment of the Colour Science notes I have taken over the years. It is aimed at the VFX industry, and is the work-in-progress subset of a broader and generic Colour Science presentation. Its creation wouldn’t have been possible without the works and references cited in the Bibliography section.
Thomas Mansencal
4. The sRGB Uncertainty
• “Understanding linear and sRGB color spaces”: what does this mean? sRGB is intrinsically linear!
• “We’ll start by learning how the sRGB and linear color spaces differ.”
• Such statements are confusing for non-experts because they omit an explicit emphasis on which component of the sRGB colourspace is affected (its transfer functions).
5. What is Colour?
“Almost everyone knows what color is. After all, they have had firsthand experience of it since shortly after birth. However, very few can precisely describe their color experiences or even precisely define color.” [1]
1. Fairchild, M. D. (2013). Color Appearance Models (3rd ed.). Wiley. ISBN:B00DAYO8E2
6. What is Colour?
• Characteristic of visual perception that can be described by attributes of hue, brightness (or lightness) and colourfulness (or saturation or chroma). [1]
• Colour is perceived when light interacts with the human visual system (HVS).
1. CIE. (n.d.). 17-198 colour (perceived). Retrieved June 26, 2014, from http://eilv.cie.co.at/term/198
8. Additive RGB Colourspace
• An additive RGB colourspace is defined by specifying 3 mandatory components:
• Primaries
• Whitepoint
• Transfer Functions (OETF and EOTF)
9. Additive RGB Colourspace
• An additive RGB colourspace is a colorimetric colour space having three colour primaries (generally red, green and blue) such that CIE XYZ tristimulus values can be determined from the RGB colour space values by forming a weighted combination of the CIE XYZ tristimulus values for the individual colour primaries, where the weights are proportional to the radiometrically linear colour space values for the corresponding colour primaries. [1]
• NOTE 2 Additive RGB colour spaces are defined by specifying the CIE chromaticity values for a set of additive RGB primaries and a colour space white point, together with a colour component transfer function.
1. ISO. (2004). INTERNATIONAL STANDARD ISO 22028-1 - Photography and graphic technology - Extended colour encodings for digital image storage, manipulation and interchange, 2004.
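As a worked example of the definition above, here is a minimal NumPy sketch (function names are illustrative, not from the deck) deriving the RGB-to-XYZ matrix from primaries and whitepoint chromaticities; the values used are the sRGB / ITU-R BT.709 primaries with the D65 whitepoint:

```python
import numpy as np

def xy_to_XYZ(xy, Y=1.0):
    """CIE xy chromaticity coordinates to XYZ tristimulus values."""
    x, y = xy
    return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

def normalised_primary_matrix(primaries, whitepoint):
    """Derive the RGB -> CIE XYZ matrix from chromaticity coordinates."""
    # XYZ tristimulus values of the primaries as matrix columns.
    P = np.column_stack([xy_to_XYZ(p) for p in primaries])
    # Scale each primary so that RGB = (1, 1, 1) reproduces the whitepoint.
    S = np.linalg.solve(P, xy_to_XYZ(whitepoint))
    return P * S

M = normalised_primary_matrix(
    [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)], (0.3127, 0.3290))
print(np.round(M, 4))  # first row ≈ [0.4124, 0.3576, 0.1805]
```

The middle row of M is the luminance contribution of each primary, the familiar 0.2126 / 0.7152 / 0.0722 weights.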
11. Primaries
• The primaries chromaticity coordinates define the gamut of colours that can be encoded by a given RGB colourspace.
• While commonly represented as triangles on a chromaticity diagram (such as the CIE 1931 Chromaticity Diagram), RGB colourspace gamuts define the boundaries of an actual solid within the CIE xyY colourspace.
12. Whitepoint
• The colourspace whitepoint is defined as the colour stimulus to which colour space values are normalized. [1]
• Any colour lying on the neutral axis normal to the xy plane and passing through the whitepoint, no matter its luminance, will be achromatic.
1. ISO. (2004). INTERNATIONAL STANDARD ISO 22028-1 - Photography and graphic technology - Extended colour encodings for digital image storage, manipulation and interchange, 2004.
14. Transfer Functions (Conversion Functions)
• A colour component transfer function is defined as a single variable, monotonic mathematical function applied individually to one or more colour channels of a colour space. [1]
• They perform the mapping between the linear-light components / tristimulus values and a non-linear R'G'B' video signal.
• They are commonly used for faithful representation of images and perceptual coding, in relation with the display non-linear response and HVS non-linearity.
1. ISO. (2004). INTERNATIONAL STANDARD ISO 22028-1 - Photography and graphic technology - Extended colour encodings for digital image storage, manipulation and interchange, 2004.
16. Opto-Electronic Transfer Function
• The opto-electronic transfer function (OETF or OECF) maps (encodes) estimated tristimulus values in a scene to a non-linear R'G'B' video component signal value.
• Typical OETFs are usually expressed by a power function with an exponent between 0.4 and 0.5.
18. Electro-Optical Transfer Function
• The electro-optical transfer function (EOTF or EOCF) maps (decodes) a non-linear R'G'B' video component signal to a tristimulus value at the display.
• Typical EOTFs are usually expressed by a power function with an exponent between 2.2 and 2.6.
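As a concrete example of such a pair, here is a minimal NumPy sketch of the sRGB functions from IEC 61966-2-1; strictly, the standard defines the EOTF, and the encoding below is its inverse, often loosely called an OETF. Both combine a ≈ 1 / 2.4 exponent with a short linear segment near black:

```python
import numpy as np

def srgb_encode(L):
    """Linear-light values in [0, 1] to a non-linear sRGB signal."""
    L = np.asarray(L, dtype=float)
    return np.where(L <= 0.0031308, 12.92 * L, 1.055 * L ** (1 / 2.4) - 0.055)

def srgb_eotf(V):
    """Non-linear sRGB signal back to linear-light values."""
    V = np.asarray(V, dtype=float)
    return np.where(V <= 0.04045, V / 12.92, ((V + 0.055) / 1.055) ** 2.4)

print(srgb_encode(0.18))            # ≈ 0.461: middle grey encodes near half signal
print(srgb_eotf(srgb_encode(0.5)))  # ≈ 0.5: round trip is lossless
```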
20. Misleading Terminology
Until Nuke 10, the colorspace knob of Nuke’s Read node only specifies an electro-optical transfer function and does not perform gamut conversion.
21. Non Linearity of the Human Visual System
1. Davson, H. (1990). Physiology of the Eye (5th ed.). Elsevier Science Ltd. ISBN:978-0080379074 - colour-science.org
22. Non Linearity of the Human Visual System
• Weber’s law states that the just-noticeable difference (JND) between two stimuli is proportional to the magnitude of the stimuli: an increment is judged relative to the previous amount.
• Fechner mathematically characterised Weber’s law, showing that it follows a logarithmic transformation: the perceived magnitude of a stimulus is proportional to the logarithm of the physical stimulus intensity.
23. Non Linearity of the Human Visual System
• Fechner’s scaling has been found to apply to the perception of brightness, at moderate and high brightness, with perceived brightness being proportional to the logarithm of the actual intensity.
• At lower levels of brightness, the de Vries-Rose law applies, which states that the perception of brightness is proportional to the square root of the actual intensity.
24. Non Linearity of the Human Visual System
• Stevens’s law supersedes Fechner’s law and addresses its lack of generality.
• The results of the physical-perceptual relationship of his experiments on a logarithmic scale were characterised by straight lines with different slopes, suggesting that the relationship between perceptual magnitude and stimulus intensity follows a power law with varying exponent.
28. Lightness - CIE L*
• Because of the various HVS adaptation mechanisms, perceived brightness has a non-linear relationship with the actual physical intensity of the stimulus.
• It is commonly approximated by a cube root.
• Multiple approximations of lightness (or value in the Munsell Renotation System) were proposed, leading to the creation of CIE L* in 1976.
• CIE L* characterises the perceptual response to relative luminance.
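A small sketch of the CIE 1976 lightness function (constants from CIE 15), showing the cube root above a short linear segment near black:

```python
def cie_lightness(Y):
    """CIE 1976 L* from relative luminance Y in [0, 1]."""
    return 116 * Y ** (1 / 3) - 16 if Y > (24 / 116) ** 3 else 24389 / 27 * Y

print(cie_lightness(0.18))  # ≈ 49.5: middle grey sits near the middle of the scale
print(cie_lightness(1.00))  # 100.0: diffuse white
```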
30. Colour Imaging System
• A colour imaging system embodies any combination of technologies and devices required to perform:
• Image capture
• Signal processing
• Image formation
32. Image Capture
• Image capture / acquisition of colour stimuli can be performed in a number of different ways using, for example:
• An electronic device (electronic video camera, DSLR)
• Photographic film
33. Electronic Capture
• A movie camera may use a solid-state image sensor (CCD or CMOS) that absorbs photons of light.
• As photon absorption occurs, electrons are collected into charge packets.
• The image signal is produced by a sequential readout of the packets.
34. Electronic Capture
• Accurate image reproduction requires the capture device to be at least trichromatic, implying that colour stimuli spectral power distributions must be separated into 3 colour signals.
• This separation can be achieved with:
• A beam splitter / colour filters combined with three sensors on high-end capture devices, resulting in reduced noise and increased resolution.
• A single sensor covered with a mosaic of colour filters on systems requiring a small form factor and lower price.
• Three sensor layers with different responses to wavelengths of light, stacked together similarly to photographic film (Foveon).
36. Photographic Negative Film
• A photographic film has red-, green-, and blue-light-sensitive layers coated on a transparent base.
• The red and green layers are also sensitive to blue light, thus a yellow filtering layer is placed above them. It will be made colourless during chemical processing.
• Light sensitivity is induced by silver halide grains with appropriate spectral response scattered within each light-sensitive layer. The sensitive layers also contain an appropriate dye coupler.
38. Image Formation
• The processed image signals control colour-forming elements of the image formation medium / device.
• Two categories of image formation exist:
• Additive colour
• Subtractive colour
39. Additive Colour Formation
• CRT, LCD or plasma displays mix red, green and blue light through pixel adjacency.
• DLP digital cinema projectors perform superposition by using a beam combiner.
40. Subtractive Colour Formation
• Photographic film uses cyan, magenta and yellow dyes to absorb red, green and blue light.
• Similarly, most printing processes use CMY inks.
• Colour stimuli formed by subtractive colour are dependent on (and affected by) the viewing light source.
41. Picture Rendering
• The colour imaging system usually achieves representation of a scene in a way that matches viewer expectation of the appearance of that scene, instead of attempting to reproduce physical colour stimuli quantities.
• A sunlit outdoor scene can have a luminance of 50,000 cd.m-2 but may be displayed on a consumer electronic display with a white peak luminance of 320 cd.m-2.
42. Picture Rendering
• The different viewing conditions and image formation medium / device capabilities require that scene luminance be mapped to image formation medium / device luminance.
• A simple linear mapping from scene luminance to image formation medium / device luminance is not satisfactory.
• Picture rendering adjusts the tone scale to achieve a perceptually uniform mapping.
43. Non Triviality of Picture Rendering
1. Fairchild, M. D. (n.d.). The HDR Photographic Survey. Retrieved April 15, 2015, from http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
44. Non Triviality of Picture Rendering
1. Fairchild, M. D. (n.d.). The HDR Photographic Survey. Retrieved April 15, 2015, from http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
45. Non Triviality of Picture Rendering
1. Fairchild, M. D. (n.d.). The HDR Photographic Survey. Retrieved April 15, 2015, from http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
46. Non Triviality of Picture Rendering
1. Fairchild, M. D. (n.d.). The HDR Photographic Survey. Retrieved April 15, 2015, from http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
47. Effect of Lateral-Brightness Adaptation
Images seen with a dark surround appear to have less contrast than if viewed with a dim, average or bright surround.
48. Effect of Lateral-Brightness Adaptation
1. Fairchild, M. D. (n.d.). The HDR Photographic Survey. Retrieved April 15, 2015, from http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
49. Colour Encoding
• A colour encoding is a digital representation of colours for image processing, storage, and interchange between systems.
• A colour encoding specification (standardised input / output interface of a colour imaging system) must define:
• A colour encoding method, which determines the meaning of the encoded data, or what will be represented by the data.
• A colour encoding data metric, characterising the colourspace and the numerical units used to encode the data, or how the representation will be numerically expressed.
50. Image States
• The image state concept was defined by Madden & Giorgianni.
• Some signal processing operations make the image transition to a different colorimetric state.
• An image may exist in scene state, which is not directly viewable on typical image formation devices and must be transitioned to a new state, the rendered state.
51. Image States
• A colour encoding specification defined in relation to scene quantities is said to be scene-referred: it has a colorimetric link to a scene.
• A colour encoding specification defined in relation to digital display characteristics is said to be display-referred (rendered state): it has a colorimetric link to a digital display device.
52. Display-Referred Imaging
• Raw image processors used by photographers (Lightroom, Darktable, DCRaw, etc.) perform picture rendering on the raw scene-referred data to deliver a display-referred image.
• Artists achieving direct content creation in 2d applications are generating display-referred content.
• Images available on the Internet, such as on Google Images or texture vendors’ websites, are output- / display-referred.
• A photograph taken on a mobile phone and uploaded to a social network is display-referred.
53. Display-Referred Imaging
• Display-referred imagery created and exhibited on a display that matches a standard reference (using the sRGB specification and viewing conditions) will appear the same across similar display devices without any further action required.
• A photograph processed on a consumer graphics desktop and output as an sRGB JPG or PNG file will look approximately the same on other consumer graphics desktops.
54. Display-Referred Imaging
• Display-referred imagery usually has a restricted luminance dynamic range and limited colour gamut, thus some of the original captured scene-referred data is lost upon encoding.
• This is unsuitable if the image is meant to be viewed on different image formation devices with wider dynamic range.
55. Sony F35 - Out of Gamut Colours
1. http://www.oscars.org/science-technology/sci-tech-projects/aces
58. Scene-Referred Imaging
• A scene-referred representation of data contains enough information to achieve the desired appearance of the scene on a variety of image formation media / devices.
• Scene-referred imaging is the basis of physically-based rendering, allowing realistic light interaction to be reproduced using plausible light quantities. It makes realistic camera effects (motion-blur, defocus) possible.
59. Scene-Referred Imaging
1. Fairchild, M. D. (n.d.). The HDR Photographic Survey. Retrieved April 15, 2015, from http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
60. Scene-Referred Imaging
1. Fairchild, M. D. (n.d.). The HDR Photographic Survey. Retrieved April 15, 2015, from http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
61. Scene-Referred Imaging
1. Fairchild, M. D. (n.d.). The HDR Photographic Survey. Retrieved April 15, 2015, from http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
62. Scene-Referred Imaging
• Measured scene linear-light quantities are usually normalised to a known reference.
• Commonly, middle grey is set at luminance = 0.18, which is the reflectance of:
• A reference Kodak 18% Grey Card
• The background colour of a DSC Labs CamAlign ChromaDuMonde chart
• The reflectance of an X-Rite ColorChecker neutral 5 (.70 D) sample is ≈ 19%!
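A minimal sketch of such a normalisation (function and parameter names are illustrative): scene-linear values are scaled so that the measured grey card lands on the 0.18 reference.

```python
import numpy as np

def normalise_to_middle_grey(rgb, measured_grey, target=0.18):
    """Scale scene-linear values so the captured grey card sits at the target."""
    return np.asarray(rgb, dtype=float) * (target / measured_grey)

# A grey card captured at 0.10 brings the whole exposure up by 1.8x.
print(normalise_to_middle_grey([0.05, 0.10, 0.40], measured_grey=0.10))
```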
64. Energy Conservation
• Anti-aliasing or image filtering operations should be energy preserving: the total light emitted from the display should remain the same after the processing operations.
• Resizing an image should not affect its luminance.
• Those operations must be performed on linear image data.
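A toy demonstration of why (a sketch assuming a pure 2.2 power encoding rather than the exact sRGB curve): averaging a black and a white pixel, as when downscaling a checkerboard, yields a different amount of light depending on whether the averaging is done on encoded or linear data.

```python
import numpy as np

encoded = np.array([0.0, 1.0])   # display-encoded pixel values
linear = encoded ** 2.2          # decoded to linear light

wrong = np.mean(encoded) ** 2.2  # filtering encoded data: ≈ 0.22 in linear light
right = np.mean(linear)          # filtering linear data: 0.50, luminance preserved

print(wrong, right)
```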
68. Digital Image - Raster Graphics
• A digital image is a rectangular data structure (a 2 or 3-dimensional array) of picture elements (pixels).
• A pixel colour is determined by a single code for achromatic images or multiple codes for chromatic images (commonly three).
72. Quantization
• Quantization is the process of mapping a continuous signal (or large set of input values) to a smaller set.
• Information between each quantiser step is discarded and lost.
• Quantization error (signal distortion) decreases the signal-to-noise ratio (SNR).
• Banding and contouring artefacts can be reduced by introducing a small amount of noise (≈ 1 / 2 quantiser step) prior to the quantization. Dithering decreases the SNR.
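A small sketch of the dithering idea, with an illustrative 4-bit quantiser and uniform noise of half a quantiser step:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(0.0, 1.0, 1000)  # smooth ramp
step = 1 / 15                         # coarse 4-bit quantiser (16 levels)

banded = np.round(signal / step) * step
noise = rng.uniform(-step / 2, step / 2, signal.size)
dithered = np.round((signal + noise) / step) * step

print(np.unique(banded).size)            # 16 hard steps: visible banding
print(np.abs(dithered - signal).mean())  # error is similar in size but noise-like
```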
83. Perceptual Uniformity
• A colour imaging system is perceptually uniform if a small perturbation of a component value is approximately equally perceptible across the range of that value. [1]
• Most electronic colour imaging systems account for the non-linearity of the HVS and its perceptual response to brightness when encoding RGB scene relative luminance values (linear-light values) into R’G’B’ perceptually uniform values.
This is commonly achieved with a logarithmic transfer function (gamma, L*).
• They leverage the non-linearity of the HVS to reduce the bandwidth and number of bits needed per pixel by optimising digital code allocation.
1. Poynton, C. (n.d.). Perceptual Uniformity. Retrieved March 5, 2016, from http://www.poynton.com/notes/Timo/Perceptual_uniformity.html
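One way to see the code-allocation benefit (a sketch assuming a pure 2.4 power decoding rather than the exact sRGB function): count how many of the 256 8-bit codes land at or below middle grey.

```python
import numpy as np

codes = np.arange(256) / 255                    # 8-bit codes as fractions
linear_dark = int(np.sum(codes <= 0.18))        # linear coding: 46 codes
gamma_dark = int(np.sum(codes ** 2.4 <= 0.18))  # 2.4 power decoding: 125 codes
print(linear_dark, gamma_dark)                  # far more codes in the darks
```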
84. Perceptual Uniformity
• Cathode ray tube (CRT) display electron gun characteristics imposed an EOCF that is approximately the inverse of HVS perception of brightness.
• The HVS perceptual response to brightness associated with the CRT power function produces code values displayed in a perceptually uniform way.
• Modern display devices (LCD, plasma, DLP) replicate this behaviour by imposing a 2.2, 2.4 or 2.6 power function (Gamma Correction) through signal processing circuitry.
89. Perceptual Coding
1. Poynton, C. (2012). Digital Video and HD, Second Edition: Algorithms and Interfaces (2nd ed.). Elsevier / Morgan Kaufmann. ISBN:978-0123919267
90. Perceptual Coding
1. Poynton, C. (2012). Digital Video and HD, Second Edition: Algorithms and Interfaces (2nd ed.). Elsevier / Morgan Kaufmann. ISBN:978-0123919267
92. Perceptual Coding
• The luminance difference between L and L + ΔL is noticeable when ΔL is about 1% of L.
• The 1.01 (101 / 100) ratio is known as the Weber contrast or fraction.
93. Perceptual Coding
• An ideal non-linear transfer function will allocate code values to minimise the just-noticeable difference (JND).
• On a linear-light values scale, code 100 is the location where Weber contrast reaches 1%.
• Weber contrast increases for codes below 100, raising the perceptible difference between adjacent codes and possibly producing banding and contouring artefacts.
• Weber contrast decreases for codes over 100; higher codes are increasingly wasteful and could be discarded without affecting the perception.
94. Perceptual Coding
• Good-quality image reproduction requires a contrast ratio >= 30:1, as shown by the NTSC engineers in the 1950s.
• Using 8-bit linear-light coding, the contrast ratio that can be reproduced without artefacts is only 2.55:1: code 100 is the first code with a sub-1% step, leaving only codes 100 to 255, i.e. 255 / 100.
• Achieving a contrast ratio >= 30:1 with linear-light coding requires 12 bits, resulting in an artefact-free contrast ratio of 40.95:1 (4095 / 100); however, most of those codes cannot be visually discriminated.
95. Perceptual Coding
Maintaining a 1.01 Weber contrast over a scene relative luminance range of [0.01, 100], a contrast ratio of 100:1, requires approximately 462 codes (≈ 9 bits). [1]
log(100) / log(1.01) ≈ 462; 1.01^462 ≈ 100

C = log(CR) / log(WC)

where C is the number of codes, CR is the contrast ratio and WC is the desired Weber contrast.
1. Poynton, C., & Funt, B. (2014). Perceptual uniformity in digital image representation and display. Color Research and Application, 39(1), 6–15. doi:10.1002/col.21768
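The relation is easily evaluated; a minimal Python sketch (the function name is illustrative):

import math

def required_codes(contrast_ratio, weber_contrast=1.01):
    # C = log(CR) / log(WC): codes needed so that each quantizer step
    # stays at or below the desired Weber contrast over the whole range.
    return math.log(contrast_ratio) / math.log(weber_contrast)

print(required_codes(100))                         # ≈ 462
print(math.ceil(math.log2(required_codes(100))))   # ≈ 9 bits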
96. 16-Bit Integer & Half Float
Perceptual coding is not required when using 16-bit integer (artefact-free contrast ratio of 655.35:1) or half float representations (Weber contrast of 0.1% [1], 2^10 = 1024 code values per stop).
1. Poynton, C., & Funt, B. (2014). Perceptual uniformity in digital image representation and display. Color Research and Application, 39(1), 6–15. doi:10.1002/col.21768
97. 8-Bit Colour Imaging System Dynamic Range
The dynamic range associated with code 1 on an 8-bit colour imaging system is closer to 200,000:1 (γ ≈ 2.2) or 600,000:1 (γ ≈ 2.4), instead of the 255:1 (or 256:1) dynamic range often alleged because of the incorrect assumption that linear-light values are encoded.
(1 / 255)^2.4 ≈ 0.0000016 ≈ 1 / 600,000
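The figure can be checked in one line of Python, assuming a 2.4 decoding exponent:

# Linear-light value displayed for code 1 on a gamma 2.4 display:
print((1 / 255) ** 2.4)  # ≈ 0.0000016, i.e. a dynamic range of ≈ 600,000:1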
98. Gamma
• Gamma (γ) is a numerical parameter giving the exponent of a power
function assumed to approximate the relationship between a signal
quantity (such as a video signal code) and light power. [1]
• Gamma Encoding (γE), characteristic of OETFs, uses an exponent approximately between 0.4 and 0.5.
• Gamma Decoding (γD), characteristic of EOTFs, uses an exponent approximately between 2.2 and 2.6.
1. Poynton, C. (2012). Digital Video and HD, Second Edition: Algorithms and Interfaces (2nd ed.). Elsevier / Morgan Kaufmann. ISBN:978-0123919267
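A minimal Python / NumPy sketch of such pure power functions (simplified: real OETFs / EOTFs typically add a linear segment near black):

import numpy as np

def gamma_encode(L, exponent=0.45):
    # Simplified gamma encoding (γE ≈ 0.4-0.5) of linear-light values.
    return np.power(np.clip(L, 0, 1), exponent)

def gamma_decode(V, exponent=2.4):
    # Simplified gamma decoding (γD ≈ 2.2-2.6) of non-linear values.
    return np.power(np.clip(V, 0, 1), exponent)

# 0.45 encoding and 2.4 decoding yield an end-to-end exponent ≈ 1.08:
print(gamma_decode(gamma_encode(0.18)))  # ≈ 0.157, not 0.18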
101. Digital Colour Imaging System End-to-End Power Function
To overcome the loss in apparent contrast, the end-to-end power function of a digital colour imaging system may have exponent values of approximately 1, 1.25, and 1.5 for bright, dim, and dark surrounds respectively. [1]
1. Hunt, R. W. G. (2004). The Reproduction of Colour (6th ed.). Chichester, UK: Wiley. doi:10.1002/0470024275
103. Gamma Correction Misconceptions
• NTSC monochrome television was created in the 1940s, and non-linear coding was a well-understood element of good visual performance.
• The significance of perceptual uniformity has been generally forgotten: video engineers tend to see gamma correction as a means to address a CRT "non-linearity defect".
• "If gamma correction was not already necessary for physical reasons at the CRT, we would have to invent it for perceptual reasons." [1]
1. Poynton, C., & Funt, B. (2014). Perceptual uniformity in digital image representation and display. Color Research and Application, 39(1), 6–15. doi:10.1002/col.21768
104. Digital Video & HD
• The luminance output of a CRT is proportional to its input raised to the 5/2 power. A studio reference display CRT has a gamma ≈ 2.4.
• Gamma correction, by means of an OETF, is applied to pre-compensate the CRT display's non-linear power function and achieve perceptual uniformity.
• In order to account for the different viewing conditions between the original scene and its presentation, the correction under-compensates the actual CRT display non-linearity.
• This under-compensation yields an end-to-end power function with exponent ≈ 1.2, which produces a pleasing television viewing experience in dim surrounds.
105. Digital Video & HD
• Image Structure
• 1920 x 1080 progressive (24Hz, 30Hz), 16:9 aspect ratio
• 1920 x 1080 interlaced (30Hz), 16:9 aspect ratio
• 1280 x 720 progressive (24Hz, 30Hz, 60Hz), 16:9 aspect ratio
106. ITU-R BT.1886
• ITU-R BT.1886 defines the reference electro-optical transfer function for
CRT and LCD displays used in HDTV studio production.
• ITU-R BT.1886 adopts a power function with exponent γ = 2.4.
• The recommendation doesn’t standardise reference white and viewing
conditions.
• ITU-R BT.2035 defines a reference viewing environment for evaluation of
HDTV program material.
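A minimal Python sketch of the BT.1886 EOTF as given in the recommendation's Annex 1 (the 100 / 0.01 cd.m-2 white and black defaults here are illustrative):

def bt1886_eotf(V, L_w=100.0, L_b=0.01):
    # ITU-R BT.1886 EOTF: L = a * max(V + b, 0) ** 2.4, with a and b
    # derived from the display white (L_w) and black (L_b) luminances.
    gamma = 2.4
    n = L_w ** (1 / gamma) - L_b ** (1 / gamma)
    a = n ** gamma
    b = L_b ** (1 / gamma) / n
    return a * max(V + b, 0) ** gamma

print(bt1886_eotf(1.0))  # ≈ 100 cd.m-2 at full code value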
107. ITU-R BT.1886
• HD Studio Mastering (Typical)
• Reference white is typically set at 100-120 cd.m-2.
• Surround luminance is expected to be very dim at around 1% of reference white
luminance.
• Typical intra-image contrast ratio is 1000:1.
• HD Consumer (Typical)
• Reference white is typically set at 200 cd.m-2.
• Surround luminance is expected to be dim at around 5% of reference white luminance.
• Typical intra-image contrast ratio is 400:1.
108. ITU-R BT.2035
• ITU-R BT.2035 defines a reference viewing environment for evaluation of
HDTV program material.
109. ITU-R BT.709 / Rec. 709
ITU-R BT.709 is the international standard defining the parameter values for
HDTV.
110. BT.709 OETF
• BT.709 OETF defines a 0.45 exponent but its effective power function
exponent is γE ≈ 0.5.
• The BT.709 OETF is a piecewise function: in order to reduce noise in dark regions, a line segment limits the slope of the power function (the slope of a pure power function is infinite at zero), as sketched below.
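A minimal Python sketch of the piecewise function:

def bt709_oetf(L):
    # ITU-R BT.709 OETF: a 4.5x linear segment below 0.018 limits the
    # slope near black; a scaled-and-offset 0.45 power function above.
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

print(bt709_oetf(0.018))  # ≈ 0.081: the two segments meet here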
114. UHDTV
The UHD Alliance (UHDA) developed three specifications to support the
next-generation premium home entertainment experience covering the
entertainment ecosystem in the following categories: [1]
• Devices
• Distribution
• Content
1. UHDA. (2016). UHD Alliance Defines Premium Home Entertainment Experience. Retrieved January 8, 2016, from http://www.uhdalliance.org/uhd-alliance-press-releasejanuary-4-2016/
115. UHDTV - Devices
A UHDA-compliant device must meet or exceed the following specifications:
• Image Resolution: 3840×2160
• Color Bit Depth: 10-bit signal
• Color Palette (Wide Color Gamut)
• Signal Input: BT.2020 color representation
• Display Reproduction: More than 90% of P3 colours
• High Dynamic Range
• SMPTE ST2084 EOTF
• A combination of peak brightness and black level of either:
• More than 1000 nits peak brightness and less than 0.05 nits black level
• More than 540 nits peak brightness and less than 0.0005 nits black level
116. UHDTV - Distribution
A UHDA-compliant distribution channel must support:
• Image Resolution: 3840×2160
• Color Bit Depth: Minimum 10-bit signal
• Color: BT.2020 color representation
• High Dynamic Range: SMPTE ST2084 EOTF
117. UHDTV - Content Mastering
A UHDA Content Master must meet the following requirements:
• Image Resolution: 3840×2160
• Color Bit Depth: Minimum 10-bit signal
• Color: BT.2020 color representation
• High Dynamic Range: SMPTE ST2084 EOTF
Specifications of UHDA recommended mastering display:
• Display Reproduction: Minimum 100% of P3 colours
• Peak Brightness: More than 1000 nits
• Black Level: Less than 0.03 nits
118. ITU-R BT.2020 / Rec. 2020
ITU-R BT.2020 defines the parameter values for ultra-high definition
television systems for production and international programme exchange.
120. BT.2020 OETF
The BT.2020 OETF is the same as the BT.709 OETF and is expected to be used in conjunction with the BT.1886 EOTF, yielding an end-to-end power function with exponent ≈ 1.2.
124. SMPTE ST 2084
• SMPTE ST 2084 (PQ) is the international standard defining the EOTF
characterizing high-dynamic-range reference displays used primarily for
mastering non-broadcast content.
• The perceptual quantizer has been modeled by Dolby Laboratories using the Barten (1999) contrast sensitivity function.
• Display peak luminance is expected to reach 10,000 cd.m-2 and use a 10
or 12-bit data representation.
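A minimal Python sketch of the ST 2084 EOTF (constants from the published specification):

def st2084_eotf(N, peak=10000.0):
    # SMPTE ST 2084 (PQ) EOTF: non-linear code value N in [0, 1] to
    # absolute luminance in cd.m-2, up to 10,000.
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    V = N ** (1 / m2)
    return peak * (max(V - c1, 0) / (c2 - c3 * V)) ** (1 / m1)

print(st2084_eotf(1.0))  # 10000.0 cd.m-2 at full code value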
126. Multimedia & Desktop Graphics
The sRGB IEC 61966-2-1:1999 specification is defined for multimedia applications and desktop graphics, considering a brighter surround than that of a studio reference display.
127. sRGB IEC 61966-2-1:1999
• sRGB adopts the ITU-R BT.709 RGB colourspace gamut but does not define an OETF, only an EOTF.
• sRGB reference white is specified at 80 cd.m-2, in accordance with CRT practice.
• Surround luminance is expected to be average at around 20% of
reference white luminance.
• Typical intra-image contrast ratio is 100:1.
• Modern LCD displays commonly peak at 320 cd.m-2.
128. sRGB EOTF
• The sRGB EOTF doesn't account for picture rendering: the end-to-end gamma is ≈ 1.0, thus it is not suitable for displaying captured images.
• sRGB is defined as a display-referred colour encoding.
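A minimal Python sketch of the sRGB EOTF:

def srgb_eotf(V):
    # sRGB IEC 61966-2-1 EOTF: linear segment near black, then an
    # offset power function with a 2.4 exponent.
    if V <= 0.04045:
        return V / 12.92
    return ((V + 0.055) / 1.055) ** 2.4

print(srgb_eotf(0.5))  # ≈ 0.214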
133. Digital Cinema
• Picture rendering was traditionally imposed by a camera negative film
gamma ≈ 0.5-0.6, an inter-positive film having a unity gamma and a
release print film stock with gamma ≈ 2.8-3.2, resulting in an end-to-end
gamma ≈ 1.4-1.8, suitable for dark film projection surrounds.
• The DCI / SMPTE standard reference digital cinema projector applies a 2.6 gamma to the X'Y'Z' DCDM (Digital Cinema Distribution Master) non-linear components.
134. Digital Cinema
• The X’Y’Z’ DCDM is encoded with JPEG-2000 compression.
• The X’Y’Z’ DCDM image file format is mapped into TIFF. Colour channels
are represented by 12-bit unsigned integer code values. These 12 bits are
placed into the most significant bits of 16-bit words, with the remaining 4
bits filled with zeroes.
• Image Structure
• 4096 x 2160, 24Hz, 1:1
• 2048 x 1080, 24Hz, 1:1
• 2048 x 1080, 48Hz, 1:1
135. Digital Cinema
• Digital cinema standards are display-referred: colour appearance of the
digital intermediate is fully baked into the X’Y’Z’ DCDM.
• Digital cinema reference white is specified at 48 cd.m-2.
• Surround luminance is expected to be dark (0% of reference white
luminance).
• Typical intra-image contrast ratio is 100:1.
• DCI-P3 is the wide gamut RGB colourspace in which digital cinema
material is mastered.
138. Digital Capture for Digital Cinema
• Motion picture camera vendors commonly encode their scene-referred data using a log encoding function ('ALEXA Log C', 'C-Log', 'Panalog', 'S-Log', 'V-Log', etc.) tailored to account for camera-specific dynamic range and noise characteristics.
• They also define dedicated gamuts accounting for the specific spectral responses of their respective cameras.
141. Digital Capture for Digital Cinema
• Those log encoding functions draw inspiration from the Cineon Digital Film System developed by Eastman Kodak Company.
• Cineon is a logarithmic encoding of the colour film negative optical density.
• "Film has traditionally been represented by a characteristic curve which plots density vs log exposure. This is a log/log representation. In defining the calibration for the Cineon digital film system, Eastman Kodak Co. talked to many experts in the film industry to determine the best data metric to use for digitizing film. The consensus was to use the familiar density metric and to store the film as logarithmic data." [1]
1. Kodak. (1995). Conversion of 10-bit Log Film Data To 8-bit Linear or Video Data for The Cineon Digital Film System.
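A minimal Python sketch of the commonly published simplified Cineon decode (685 reference white, 0.002 density per code value, 0.6 negative gamma; the soft-clip toe of the full conversion is omitted):

def cineon_to_linear(code, ref_white=685, density_per_code=0.002,
                     negative_gamma=0.6):
    # 10-bit printing-density code to linear light: density relative to
    # the reference white, divided by the negative gamma, then 10**x.
    return 10 ** ((code - ref_white) * density_per_code / negative_gamma)

print(cineon_to_linear(685))  # 1.0 at reference white
print(cineon_to_linear(95))   # ≈ 0.011 at the traditional black point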
145. Visual Effects Colour Pipeline
• Visual effects vendors generate scene-referred imagery that is seamlessly integrated into client plates while not altering their image state.
• This fundamental principle is at the heart of visual effects, as shots with visual effects must be intercut with shots without visual effects (or coming from other vendors).
• The digital intermediate (DI) expects a delivery that is a high-fidelity representation of the original capture.
146. Visual Effects Colour Pipeline
• The visual effects colour pipeline is a complex colour imaging system built
on individual chained colour imaging systems.
• Colour encoding specifications must be defined (and identifiable, so they can be accounted for) for every input / output signal processing operation.
147. Working Colour Encoding Specification
• A modern paradigm is to define a working colour encoding specification
(for example based on ACEScg, DCI-P3, or Rec. 2020 gamuts and
representing scene-referred linear-light quantities) and convert all the input
imagery with their respective colour encoding specifications to that
working specification.
• Plates are usually converted to the working colour encoding specification by using an invertible decoding 1D LUT specific to their originating gamma / log encoding, and then to the working gamut by means of a 3x3 matrix (or a 3D LUT), as sketched below.
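A minimal Python / NumPy sketch of that ingest; the decode curve and matrix below are hypothetical placeholders, not real camera data:

import numpy as np

lut_input = np.linspace(0, 1, 1024)    # uniformly sampled code values
decode_lut = lut_input ** 2.4          # stand-in invertible 1D decode
CAMERA_TO_WORKING = np.eye(3)          # stand-in 3x3 gamut conversion

def ingest_plate(rgb):
    # 1D LUT: per-channel decode of the camera encoding to linear light.
    linear = np.interp(rgb, lut_input, decode_lut)
    # 3x3 matrix: rotate the camera primaries into the working gamut.
    return linear @ CAMERA_TO_WORKING.T

print(ingest_plate(np.array([0.5, 0.5, 0.5])))  # ≈ [0.19, 0.19, 0.19]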
148. Working Colour Encoding Specification
Some facilities perform the compositing stage within the client delivery gamut: this can be beneficial when the working colour encoding specification doesn't encompass the captured plates' gamut (avoiding negative values, which are complicated to handle).
Note: ARRI Alexa cameras are notorious for having a very wide gamut.
149. View Transform
• The scene-referred data is visualised using a dedicated view transform (1D LUT or 3D LUT) that commonly models the typical characteristic curve of a print film (print film emulation, S-curve, sigmoid function combined with a log curve, etc.).
• The view transform is never baked into the imagery delivered to the DI.
152. Compositing
• Plates are neutralised using an invertible process to overcome lighting changes across a sequence.
• This permits reusability of light rigs at the rendering stage and establishes better consistency across shots during the compositing stage.
• The neutralisation is reversed on compositing output, as sketched below.
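A minimal Python / NumPy sketch of an invertible neutralisation (the per-channel gains are a hypothetical per-shot balance):

import numpy as np

def neutralise(rgb, gains):
    # Invertible per-channel balance removing per-shot lighting shifts.
    return np.asarray(rgb) / np.asarray(gains)

def de_neutralise(rgb, gains):
    # Exact inverse, applied on compositing output.
    return np.asarray(rgb) * np.asarray(gains)

gains = np.array([1.2, 1.0, 0.8])       # hypothetical shot balance
plate = np.array([0.24, 0.18, 0.12])
print(de_neutralise(neutralise(plate, gains), gains))  # round-trips exactly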
154. Digital Intermediate & Mastering
[Image: http://www.parkroad.co.nz/wp-content/uploads/2015/10/Clare_Mahana_DI.jpg]
155. Digital Intermediate & Mastering
• Digital intermediate is a display-referred finishing process originally involving motion picture digitisation, colour manipulation (colour timing / grading, contrast adjustment, etc.) and recording back to film to create a master internegative.
• The viewing environment replicates the final exhibition viewing environment and is adapted to each type of exhibition image formation device (digital cinema, typical home theater, etc.).
• Calibration tolerances to the standards (DCI / SMPTE) are very strict.
156. Digital Intermediate & Mastering
• The DI process is commonly split into an initial pass that neutralises per
shot variation and a secondary pass that defines the colour artistic intent /
look of the film.
• The DI house may provide a Colour Decision List (CDL) or 3D LUT per
shot to visual effects vendors to give them an overview of the look being
developed.
157. Digital Intermediate & Mastering
• DI often creates masters for multiple image formation media / devices.
• Artistic grading is performed on the "gold standard" image formation device (usually the digital cinema projector) with the approval of the director.
• Trim passes are executed for the other image formation devices and include specific corrections for the respective devices' characteristics and viewing conditions.
159. Academy Color Encoding System
• ACES is a colour management and image interchange system designed
for production, mastering and long-term archiving of motion pictures. [1]
• It enables consistent, high-quality colour management from production to
distribution.
• It provides digital image encoding and specifications preserving original
imagery latitude and colour range while establishing a common standard
so deliverables can be efficiently and predictably created and preserved.
1. The Academy of Motion Picture Arts and Sciences. (n.d.). ACES. Retrieved March 22, 2016, from http://www.oscars.org/science-technology/sci-tech-projects/aces
160. ACES Components - Input
• Reference Input Capture Device (RICD)
The RICD, an ideal capturing device, records all the colour (and dynamic
range) of a given scene. It provides a documented, unambiguous and
fixed relationship between scene colours and encoded RGB values.
• Input Device Transform (IDT)
An image captured by a physical or virtual camera is transformed by the IDT into the ACES RGB relative exposure values that the RICD would have recorded if used in its place.
161. ACES Components - Output
• Reference Rendering Transform (RRT)
ACES images are an intermediate representation and cannot be used for
final image evaluation. The RRT is an idealised replacement for print-film
emulations (S-Curve) with an extremely wide gamut and high dynamic
range (32 stops).
• Output Device Transform (ODT)
The ODT performs rendering of the RRT wide gamut and dynamic range
on a given physical display, accounting for its specific characteristics
(gamut, dynamic range, and EOCF) and viewing conditions.
162. ACES Components - Negative Film
• Academy Printing Density (APD)
Reference printing density for calibrating film scanners and film recorders.
• Academy Density Exchange (ADX)
Densitometric encoding (similar to Cineon) used for capturing data from
film scanners.
163. ACES Encodings
• ACES2065-1 (ACES Primaries 0, AP0)
The ACES common colour encoding colourspace used for exchange of full fidelity images
and archiving.
• ACEScg (ACES Primaries 1, AP1)
A linearly encoded colourspace for CG rendering and compositing, using the improved set
of primaries that encompass Rec. 2020 and DCI-P3 gamuts.
• ACEScc (ACES Primaries 1, AP1)
A logarithmically encoded colourspace for use in colour grading applications, using the AP1
primaries.
• ACESproxy (ACES Primaries 1, AP1)
A lightweight encoding using the AP1 primaries, for transmission over HD-SDI (or other production transmission schemes) and on-set look management. Not intended to be stored, used in production imagery, or used for final colour grading / mastering.
168. Colour Grading - GoG
y = ax + b   (1)
y = (ax + b + c(1 − x))^(1/γ)   (2)
Yo = (gain × Yi + offset + lift × (1 − Yi))^(1/gamma)   (3)
where Yi is the input luminance and Yo the output luminance.
Note 1: (1) is the slope-intercept form of a linear equation.
Note 2: On a television, the contrast and brightness controls are respectively mapped to the gain and offset variables.
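A minimal Python sketch of equation (3) (parameter names follow the equation):

def gog(Y_i, gain=1.0, offset=0.0, lift=0.0, gamma=1.0):
    # Equation (3): gain / offset / lift, then the inverse gamma exponent.
    return (gain * Y_i + offset + lift * (1 - Y_i)) ** (1 / gamma)

print(gog(0.18, gain=1.1, lift=0.02, gamma=1.2))  # a mild grade of mid-grey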
182. 1D LUT & 3D LUT
• A 1D LUT is a single-variable-indexed one-dimensional table.
Expensive runtime computations are replaced with a simpler array indexing operation / look-up.
• A 3D LUT is a three-variable-indexed three-dimensional table (3D lattice) where each variable (lattice axis) represents a colour component.
Output colour values for input points not exactly matching lattice points are interpolated, as sketched below.
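A minimal Python / NumPy sketch of a trilinearly interpolated 3D LUT look-up (the identity lattice below is a stand-in for a real LUT):

import numpy as np

def sample_3d_lut(lut, rgb):
    # Trilinear interpolation into an (N, N, N, 3) lattice, rgb in [0, 1].
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0, 1) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo
    out = np.zeros(3)
    for corner in range(8):  # the 8 lattice points surrounding the input
        pick = [(corner >> axis) & 1 for axis in range(3)]
        idx = tuple(hi[a] if pick[a] else lo[a] for a in range(3))
        weight = np.prod([f[a] if pick[a] else 1 - f[a] for a in range(3)])
        out += weight * lut[idx]
    return out

grid = np.linspace(0, 1, 17)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing='ij'), axis=-1)
print(sample_3d_lut(identity, [0.2, 0.5, 0.8]))  # ≈ [0.2, 0.5, 0.8]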
183. Bibliography
• Fairchild, M. D. (2013). Color Appearance Models (3rd ed.). Wiley. ASIN:B00DAYO8E2
• Wyszecki, G., & Stiles, W. S. (2000). Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley. ISBN:978-0471399186
• Poynton, C. (2012). Digital Video and HD, Second Edition: Algorithms and Interfaces (2nd ed.). Elsevier / Morgan Kaufmann. ISBN:978-0123919267
• Madden, T. E., & Giorgianni, E. J. (2007). Digital Color Management (Vol. 20). doi:10.1002/9780470994375
• Dutré, P., Bekaert, P., & Bala, K. (2006). Advanced Global Illumination (2nd ed.). ISBN:1439864950
184. Bibliography
• ISO. (2004). INTERNATIONAL STANDARD ISO 22028-1 - Photography and graphic technology - Extended colour encodings for digital image storage, manipulation and interchange, 2004.
• International Telecommunication Union. (2011). Recommendation ITU-R BT.1886 - Reference electro-optical transfer function for flat panel displays used in HDTV studio production.
• International Telecommunication Union. (2015). Recommendation ITU-R BT.709-6 - Parameter values for the HDTV standards for production and international programme exchange. Retrieved from https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.709-6-201506-I!!PDF-E.pdf
• International Telecommunication Union. (2013). Recommendation ITU-R BT.2035 - A reference viewing environment for evaluation of HDTV program material or completed programmes.
• International Telecommunication Union. (2015). Recommendation ITU-R BT.2020 - Parameter values for ultra-high definition television systems for production and international programme exchange. Retrieved from https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2020-2-201510-I!!PDF-E.pdf
185. Bibliography
• Reinhard, E. (2009). A Reassessment of the Simultaneous Dynamic Range of the Human Visual System, 17–24.
• Poynton, C., & Funt, B. (2014). Perceptual uniformity in digital image representation and display. Color Research and Application, 39(1), 6–15. doi:10.1002/col.21768
• Selan, J. (2012). Cinematic color. ACM SIGGRAPH 2012 Posters on - SIGGRAPH ’12, 1–54. doi:10.1145/2343483.2343492
• Kodak. (2002). KODAK: Student Filmmaker’s Handbook. Retrieved from http://ultra.sdk.free.fr/misc/TechniquePhoto/Kodak Student Handbook.pdf
• Gilchrist, A. (2008). Perceptual organization in lightness. Vasa, 1–25. Retrieved from http://www.gestaltrevision.be/pdfs/oxford/Gilchrist-Perceptual_organization_in_lightness.pdf
• Nilsson, M. (2015). BT Media and Broadcast - Ultra High Definition Video Formats and Standardisation. Retrieved from http://www.mediaandbroadcast.bt.com/wp-content/uploads/D2936-UHDTV-final.pdf
186. Bibliography
• Brendel, H. (2005). ARRI COMPANION TO DI - Chapter 2. Motion Picture Film. Retrieved March 12, 2016, from http://dicomp.arri.de/digital/digital_systems/DIcompanion/ch02.html
• Pritchard, B. R. (n.d.). Why Colour Negative is Orange. Retrieved March 19, 2016, from http://www.brianpritchard.com/why_colour_negative_is_orange.htm
• https://github.com/colour-science/colour-ipython
• Wikipedia. (n.d.).