The document surveys HDR (high dynamic range) and tone mapping. It begins with an introduction to HDR images and how HDR is used in fields such as video-on-demand services, gaming, and photography. It then explains what HDR and tone mapping are, including how HDR images cover a wider brightness and color range than traditional images, and discusses HDR standards such as HDR10 and Dolby Vision, before reviewing recent deep-learning-based tone mapping papers.
Survey of HDR & Tone Mapping Task
1. Application Development Division
HDR & Tone Mapping (1/36)
2021/03/19
HDR & Tone Mapping
From an introduction to HDR technology to the latest deep-learning-based tone mapping techniques
Tech Seminar
Presented by Myeong-Gyu Lee, Application Development Division
2. INDEX
01 Introduction
02 HDR & Tone Mapping
03 Featured Papers
3. Part 01: Introduction
1. HDR Image
2. HDR In Various Fields
4. HDR Image (1-1)
• HDR: "Content with a Wider Range of Brightness and Color"
• Properly reproducing HDR requires new display devices
• Conventional content: up to 100 nits peak brightness, Rec. 709 gamut*
• HDR content: up to 10,000 nits peak brightness, Rec. 2020 gamut
*Gamut: the range of hue and saturation (distinct from gamma, which concerns brightness and contrast)
High-Dynamic Range (HDR) Demystified

Approximate luminance of common objects (nits):
  Sun: 1,600,000,000
  Arc Lamp: 150,000,000
  Maximum Visual Tolerance: 50,000
  Cloud (Sunny Day): 35,000
  2016 UHD TV: 800–1,000
  Typical Computer Screen: 100–300
  White Paper Under a Lamp: 50
  Night Sky: 0.001
  Threshold of Vision: 0.000003
5. HDR In Various Fields (1-2)
Applications: VOD services, gaming, photography
High Dynamic Range Imaging 기술 및 최근 동향 (The Korean Information Display Society, 2019)
• What is HDR imaging?
• Increasing the dynamic range by fusing multiple images of the same scene captured at different exposure settings
• Accurately reproducing the camera-captured HDR image on displays such as LCD and OLED
6. HDR In Various Fields (1-2)
[Figure: High Dynamic Range Imaging Technology]
7. Part 02: HDR & Tone Mapping
1. What is HDR?
2. What is Tone Mapping?
8. What is HDR? (2-1)
Dynamic Range of the Human Eye
Use Cyclone® V SoC FPGA to Create Real-time HDR Video (Intel), Investigation on the Use of HDR Images for Cultural Heritage Documentation
• What is contrast ratio?
• The ratio between the luminance of the brightest color (white) and the darkest color (black) a display can output
9. What is HDR? (2-1)
Dynamic Range of the Human Eye
'Light Adaptation' (Vision Models for High Dynamic Range and Wide Colour Gamut Imaging, 2020), "High Quality High Dynamic Range Imaging"
[Figure: luminance scale from 10⁻⁶ to 10⁶ cd/m² with scene anchors (starlight, no moon, full-moon moonlight, early twilight, store/office lighting, sunny outdoors), the sensing ranges of the rod and cone photoreceptors, and the coverage of a normal display vs. an HDR display]
• Domain of human vision: ≈ 10⁻⁴ to ≈ 10⁶ cd/m²
• Conventional image: ≈ 10⁻¹ to ≈ 10² cd/m² — an "LDR image" (8-bit integer, [0–255]); image capture loses dynamic range
• An "HDR image" (32-bit floating point) recovers the lost dynamic range via HDR imaging
10. What is HDR? (2-1) — LDR vs. HDR
High Dynamic Range Imaging 기술 및 최근 동향
• Signal classification by contrast ratio:
• Below 1,000:1: LDR (Low Dynamic Range) or SDR (Standard Dynamic Range)
• 1,000:1 to 100,000:1: EDR (Enhanced Dynamic Range)
• Above 100,000:1: HDR (High Dynamic Range) ← today's topic
• Making use of HDR video requires a basic understanding of color reproduction as well as the display's signal-processing technology
11. What is HDR? (2-1) — LDR vs. HDR
[0107 박민근] 쉽게 배우는 hdr과 톤맵핑
• LDR (SDR)
➢ 256 levels per channel (0–255), so 256³ = 16,777,216 representable colors
➢ 4 bytes per pixel, 32 bits in total (R: 8-bit, G: 8-bit, B: 8-bit, A: 8-bit)
• HDR (OpenEXR format)
➢ Each color channel is represented as a 16-bit floating-point value
➢ Per channel: 1 sign bit, 10 mantissa bits, 5 exponent bits (a 64-bit buffer per pixel including the alpha channel)
➢ About 4.4 × 10¹² (= 4.4e+12, ≈ 4,400,000,000,000) representable colors
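The bit-layout comparison above can be checked directly with NumPy, whose `float16` matches the half-float channel layout used by OpenEXR (1 sign, 5 exponent, 10 mantissa bits). This is a small sketch added for illustration, not part of the original slides:

```python
import numpy as np

# 8-bit LDR: 256 levels per channel -> 256^3 colors, 4 bytes/pixel with alpha
ldr_colors = 256 ** 3
ldr_pixel = np.zeros(4, dtype=np.uint8)      # R, G, B, A
assert ldr_colors == 16_777_216
assert ldr_pixel.nbytes == 4                 # 32 bits per pixel

# Half-float HDR: 16 bits per channel -> 8 bytes/pixel with alpha
hdr_pixel = np.zeros(4, dtype=np.float16)    # R, G, B, A
assert hdr_pixel.nbytes == 8                 # 64-bit buffer per pixel

# float16 layout: 1 sign bit, 5 exponent bits, 10 mantissa bits
info = np.finfo(np.float16)
print(info.max)    # 65504.0 — far beyond the 0–255 LDR ceiling
print(info.tiny)   # smallest normal value, ~6.1e-05: fine gradations near black
```

The point of the half-float buffer is exactly this asymmetry: the same 16 bits cover values far above display white and far below the darkest LDR code value.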
12. What is HDR? (2-1) — LDR vs. HDR
High-Dynamic Range (HDR) Demystified, [디스플레이 톺아보기] ㉚ HDR(High Dynamic Range)의 이해
[Figure: the same scene in the real world, mastered for SDR*, and mastered for HDR]
*SDR: Standard Dynamic Range (from 30-nit laptops in low-power mode to 600-nit HDTVs in vivid mode)
13. What is HDR? (2-1) — Color Bit Depth
[디스플레이 톺아보기] ㉙ 디스플레이 색심도(Color Depth)의 이해, Converting Color Depth | Color
[Figure: the same image rendered at 3-bit, 8-bit, and 24-bit color depth, and an 8-bit vs. 10-bit gradient comparison]
14. What is HDR? (2-1) — HDR Standard Formats
[디스플레이 톺아보기] ㉚ HDR(High Dynamic Range)의 이해

                  HDR10                     Dolby Vision
Color Gamut       BT.2020                   BT.2020
Color Depth       10-bit                    12-bit
Peak Luminance    1,000 nits                10,000 nits
Metadata          Static (set per content)  Dynamic (set per frame)
15. What is HDR? (2-1) — HDR Standard Formats
High-dynamic-range video

HDR10 — developed by CTA (2015), free. Static metadata (SMPTE ST 2086, MaxFALL, MaxCLL); PQ transfer function; 10-bit depth; peak luminance up to 10,000 nits (technical limit), with no content rules and 1,000–4,000 nits common; Rec. 2020 color primaries (DCI-P3 common in content); no backward compatibility.

HDR10+ — developed by Samsung (2017); free for content companies, yearly license for manufacturers. Dynamic metadata; PQ; 10-bit (or more); up to 10,000 nits, with no content rules, 1,000–4,000 nits common (at least 1,000 nits); Rec. 2020 (DCI-P3 common); backward compatible with HDR10.

Dolby Vision — developed by Dolby (2014), proprietary. Dynamic metadata (Dolby Vision L0, L1, L2 trim, L8 trim); PQ (not always); 10-bit or 12-bit; up to 10,000 nits, with 4,000 nits common; Rec. 2020 (at least DCI-P3); backward compatibility depends on the profile used: no compatibility, SDR, HDR10, HLG, or Ultra HD Blu-ray.

HLG10 — developed by NHK and BBC (2015), free. No metadata; HLG transfer function; 10-bit; variable peak luminance (1,000 nits common); Rec. 2020 (DCI-P3 common); backward compatible with UHD-TV (Rec. 2020) and, with color distortion, SDR (Rec. 709).
16. What is HDR? (2-1) — Display Signal Processing
High Dynamic Range Imaging 기술 및 최근 동향
• Displays process signals with human visual characteristics in mind
• Following Weber's law, it is important to understand the nonlinear relationship that yields optimal image quality within a fixed maximum representable range
• Encoding light intensity purely linearly causes posterization; gamma correction is used to solve this
• Because HDR imaging represents video at a 10–12-bit depth, higher than conventional displays, the relationship between input and output signals must be defined appropriately within the given bit depth
• HDR imaging must also support a color gamut (e.g., BT Rec. 2020) about 170% larger than the conventional standard RGB gamut
17. What is HDR? (2-1) — Weber's Law
• "Human perception is not linear"
➢ A physiological principle: the magnitude of a just-noticeable change in a stimulus depends on the magnitude of the initial stimulus
Starting from a 100 g weight on the palm and adding weight gradually, a difference is first felt at 102 g; starting from 200 g, a difference is first felt at 204 g. Weber found that the ratio between the just-noticeable increment (ΔR = 2 g or 4 g, the absolute difference threshold) and the standard stimulus (R = 100 g or 200 g) is always constant (ΔR/R, the relative difference threshold). In other words, the limit of sensory discrimination is determined not by the absolute physical difference but by the ratio.
베버-페흐너의 법칙(Weber-Fechner's law), UX디자이너가 알아야 할 심리학 법칙 5가지
18. What is HDR? (2-1) — Gamma Correction
머신 비전 ISP – 17. Gamma Correction, 감마보정 Gamma Correction
• "Monitors display light darker than it really is!"
• The human eye reacts sensitively to small brightness changes in dark environments (and is insensitive to small brightness changes in bright environments)
• "Since human vision is nonlinear, there is no need to compute precisely in ranges humans can barely perceive"
• A nonlinear transfer function maps HDR intensity to display intensity:

  f(x) = Gain × x^gamma + offset

[Figure: the gamma curve of a typical CRT monitor]
19. What is HDR? (2-1) — Gamma Correction
Lighting Shading by John Hable, 이미지 파일 감마 (Image File Gamma)와 디스플레이 감마 (Display Gamma)
• Images are generally stored with a gamma of 1/2.2 applied
• On display output, the gamma applied at storage time must be cancelled out for correct color reproduction
• Correct HDR results therefore require gamma correction!
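The encode/decode round trip described on these slides can be sketched in a few lines of NumPy; the 2.2 exponent is the conventional display gamma the slide mentions, and this is an illustrative sketch rather than a production transfer function:

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Store-side correction: apply 1/gamma so dark tones get more code values."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    """Display-side response: cancels the 1/gamma applied at storage time."""
    return np.clip(encoded, 0.0, 1.0) ** gamma

linear = np.linspace(0.0, 1.0, 11)
round_trip = gamma_decode(gamma_encode(linear))
assert np.allclose(round_trip, linear)   # encode then decode is the identity

# Mid-gray (18% linear reflectance) is stored near code value ~0.46, not 0.18 —
# exactly the extra precision Weber's law says dark regions deserve.
print(gamma_encode(np.array([0.18])))
```

Skipping the decode step (or applying it twice) is what produces the washed-out or overly dark renders the slide warns about.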
20. What is Tone Mapping? (2-2) — Tone Mapping
• The HDR format must therefore be converted into a luminance range the display can output
• Tone mapping is the task of fitting the HDR range into the range a display supports, based on human color perception
• Types of tone mapping:
➢ HDR to HDR
➢ HDR to LDR
➢ LDR to HDR ("Inverse Tone Mapping") ← today's topic
• HDR (FP16)
➢ Each color channel is represented as a 16-bit floating-point value
➢ Per channel: 1 sign bit, 10 mantissa bits, 5 exponent bits (a 64-bit buffer per pixel including the alpha channel)
➢ About 4.4 × 10¹² (= 4.4e+12, ≈ 4,400,000,000,000) representable colors
• "No matter how widely you compute in floating point, an ordinary monitor can only output 4-byte RGB"
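As a concrete HDR-to-LDR example (not taken from the slides): the global Reinhard operator, one of the simplest tone-mapping curves, compresses unbounded luminance into [0, 1) before 8-bit quantization:

```python
import numpy as np

def reinhard_tone_map(hdr):
    """Global Reinhard operator: L_d = L_w / (1 + L_w).

    Maps any non-negative HDR luminance into [0, 1), then
    quantizes to the 8-bit range an SDR display expects.
    """
    ldr = hdr / (1.0 + hdr)
    return np.round(ldr * 255).astype(np.uint8)

# Luminances spanning several orders of magnitude all land in 0..255:
# dark values stay near black while highlights roll off smoothly.
hdr = np.array([0.001, 0.18, 1.0, 100.0, 10_000.0])
print(reinhard_tone_map(hdr))
```

The monotone, saturating shape is the key property: no input clips hard at 255, which is what separates tone mapping from simple linear scaling plus clamping.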
21. What is Tone Mapping? (2-2) — Tone Mapping
High-Dynamic Range (HDR) Demystified

Content                    Display    Result
SDR                        SDR        SDR experience
SDR                        HDR        SDR experience
HDR                        HDR        HDR experience
HDR                        SDR        Bad experience
HDR ▶ SDR (tone-mapped)    SDR        SDR experience
22. What is Tone Mapping? (2-2) — Inverse Tone Mapping
• HDR imaging requires expensive equipment and considerable computation time
➢ Hence the need for inverse tone mapping, which estimates real-world luminance from an LDR image alone
• During tone mapping, linear RGB is converted to the CIE 1931 XYZ color space:
➢ X = 0.412·Rw + 0.357·Gw + 0.180·Bw
  Y = Lw = 0.213·Rw + 0.715·Gw + 0.072·Bw, where Lw is the real-world luminance (≈ HDR)
  Z = 0.019·Rw + 0.119·Gw + 0.950·Bw
➢ The XYZ color space is built on studies of human color perception (and so serves as the basis for other color spaces)
• Because tone mapping compresses information lossily, inverse tone mapping is an ill-posed problem
• Inverse tone mapping is given only (Rd, Gd, Bd); the luminance Lw needed for the transform back to the real-world colors (Rw, Gw, Bw) is unknown

▲ Tone mapping:          (Rd, Gd, Bd) = (Ld/Lw) · (Rw, Gw, Bw)
▲ Inverse tone mapping:  (Rw, Gw, Bw) = (Lw/Ld) · (Rd, Gd, Bd)   — compressed colors → world colors, with Lw unknown (???)

"Inverse tone mapping", ITU-R Recommendation BT.709 RGB, CIE 1931 color space, High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting
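The two per-channel scalings above can be written directly in NumPy. Note that only the forward direction is faithful here: in the inverse direction the true Lw is exactly the unknown the featured papers try to predict, so this sketch supplies it by hand.

```python
import numpy as np

# BT.709 luminance coefficients, matching the Y row of the XYZ conversion.
LUMA = np.array([0.213, 0.715, 0.072])

def tone_map(rgb_w, l_d_over_l_w):
    """(Rd,Gd,Bd) = (Ld/Lw) * (Rw,Gw,Bw): scale world colors by the
    ratio of display luminance to world luminance."""
    return l_d_over_l_w * rgb_w

def inverse_tone_map(rgb_d, l_w_over_l_d):
    """(Rw,Gw,Bw) = (Lw/Ld) * (Rd,Gd,Bd): exact only when the true Lw/Ld
    ratio is known — which it is not in the single-image setting."""
    return l_w_over_l_d * rgb_d

rgb_w = np.array([8.0, 2.0, 0.5])     # linear "world" RGB in an HDR range
l_w = LUMA @ rgb_w                    # real-world luminance Y = Lw
l_d = l_w / (1.0 + l_w)               # some compressed display luminance < 1
rgb_d = tone_map(rgb_w, l_d / l_w)

# Given the true ratio, the inverse recovers the world colors exactly;
# without it, the problem is ill-posed, as the slide says.
recovered = inverse_tone_map(rgb_d, l_w / l_d)
assert np.allclose(recovered, rgb_w)
```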
23. What is Tone Mapping? (2-2) — Inverse Tone Mapping: Methods
• Previous approaches
➢ "The Reproduction of Specular Highlights on High Dynamic Range Displays"
✓ Adaptively adjusts the dynamic range to suit the characteristics of the output display
✓ Improves expressiveness by assigning more information to regions where light is emitted or reflected
➢ "Ldr2Hdr: on-the-fly reverse tone mapping of legacy video and photographs"
✓ Detects regions with high tonal values in the input image, then assigns them more information to improve expressiveness
✓ Considers the light distribution and focuses on high tones saturated beyond the representable pixel range
➢ "Physiological inverse tone mapping based on retina response"
✓ Defines a perceptual brightness reflecting human visual characteristics to adaptively define the correlation between input and output image signals
High Dynamic Range Imaging 기술 및 최근 동향 (The Korean Information Display Society, 2019)
24. What is Tone Mapping? (2-2) — Inverse Tone Mapping: Methods
• Deep-learning-based approaches
➢ "HDR image reconstruction from a single exposure using deep CNNs"
✓ Splits the input image into upper and lower tonal ranges; the lower range keeps the original signal while the upper range uses a signal inferred by a CNN (though it works well only on overall dark images)
➢ "Deep Recursive HDRI: Inverse Tone Mapping using Generative Adversarial Networks"
✓ Improves performance using a GAN (L1 loss, adversarial loss, U-Net, PatchGAN)
High Dynamic Range Imaging 기술 및 최근 동향 (The Korean Information Display Society, 2019)
25. Part 03: Featured Papers
1. HDRCNN (SIGGRAPH Asia 2017)
2. SingleHDR (CVPR 2020)
3. LDR2HDR (SIGGRAPH 2020)
26. HDRCNN (3-1)
"HDR image reconstruction from a single exposure using deep CNNs"
(SIGGRAPH Asia 2017, Gabriel Eilertsen et al.) [Paper, Code]
https://arxiv.org/abs/1710.07480
"Restores the color detail of saturated regions while preserving highlight information (green boxes)"
27. HDRCNN (3-1) — Network Structure
https://arxiv.org/abs/1710.07480
[Figure: encoder-decoder network. The input LDR is converted to the logarithmic HDR domain; the decoder performs bilinear up-sampling, skip connections perform addition, and the network outputs the predicted HDR image in the log domain]
28. HDRCNN (3-1) — Problem Formulation & Loss Function
https://arxiv.org/abs/1710.07480, Modeling the Space of Camera Response Functions

Loss between the predicted log-domain HDR ŷ and the linear ground-truth HDR H (N pixels, spatial index i, channel c):

  L(ŷ, H) = (1 / 3N) · Σ_{i,c} α_i · (ŷ_{i,c} − log(H_{i,c} + ε))²,  with H_{i,c} ∈ ℝ⁺

The final reconstructed HDR pixels blend the input LDR pixels D (transformed to the linear domain by the inverse camera response function f⁻¹) with the network prediction:

  H_{i,c} = (1 − α_i) · f⁻¹(D_{i,c}) + α_i · exp(ŷ_{i,c})

where the blending factor

  α_i = max(0, max_c D_{i,c} − τ) / (1 − τ),  τ = 0.95

prevents banding artifacts near highlights. The camera response function f is modeled from a database of 201 real-world response functions ("DoRF").
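A minimal NumPy sketch of the blending step above, assuming a gamma-2.0 inverse camera response for f⁻¹ (the paper fits f from the DoRF database; the exponent here is purely illustrative):

```python
import numpy as np

TAU = 0.95  # blending threshold tau from the paper

def blend_hdr(ldr, y_hat, tau=TAU):
    """H = (1 - alpha) * f_inv(D) + alpha * exp(y_hat), per pixel.

    ldr:   input LDR image in [0, 1], shape (H, W, 3)
    y_hat: network prediction in the log-HDR domain, shape (H, W, 3)
    """
    # alpha_i = max(0, max_c D_ic - tau) / (1 - tau): 0 in well-exposed
    # regions, ramping up to 1 where any channel approaches saturation.
    alpha = np.maximum(0.0, ldr.max(axis=-1, keepdims=True) - tau) / (1.0 - tau)
    f_inv = ldr ** 2.0          # assumed inverse camera response (illustrative)
    return (1.0 - alpha) * f_inv + alpha * np.exp(y_hat)

ldr = np.array([[[0.2, 0.3, 0.1], [1.0, 0.99, 0.97]]])  # one dark, one saturated pixel
y_hat = np.log(np.full_like(ldr, 5.0))                  # network predicts 5.0 everywhere
out = blend_hdr(ldr, y_hat)
# Dark pixel keeps the linearized input; saturated pixel takes the prediction.
```

The smooth α ramp, rather than a hard saturation mask, is what avoids the banding artifacts the slide mentions: pixels just below τ get a weighted mix of both terms.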
29. SingleHDR (3-2)
"Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline" (CVPR 2020, Yu-Lun Liu et al.) [Paper, Code]
https://arxiv.org/abs/2004.01179
"Restores lost detail by modeling and learning the camera pipeline in reverse"
32. LDR2HDR (3-3)
"Single Image HDR Reconstruction Using a CNN with Masked Features and Perceptual Loss" (SIGGRAPH 2020, Marcel Santana Santos et al.) [Paper, Code]
https://people.engr.tamu.edu/nimak/Data/SIGGRAPH20_HDR.pdf
"Also restores texture regions burned out by high exposure"
34. LDR2HDR (3-3) — Network Structure
https://people.engr.tamu.edu/nimak/Data/SIGGRAPH20_HDR.pdf
Unlike the typical setting, a feature-masking scheme is proposed so that the network also works well on burned-out (saturated) LDR regions.
35. LDR2HDR (3-3) — Network Structure: Feature Masking
https://people.engr.tamu.edu/nimak/Data/SIGGRAPH20_HDR.pdf

  Z_l = X_l ⊙ M_l
  X_{l+1} = φ_l(W_l ∗ Z_l + b_l),  with X_l ∈ ℝ^{H×W×C} and M_l ∈ [0, 1]^{H×W×C}

where W_l and b_l are the weight and bias of the current layer.
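A toy NumPy sketch of one masked layer, with φ taken as ReLU and a naive 3×3 'same'-padded convolution; in the paper the masks M_l are derived from the saturation mask, whereas here M is simply given as an input:

```python
import numpy as np

def masked_conv_layer(x, m, w, b):
    """One feature-masking layer: Z = X ⊙ M, then X' = relu(W ∗ Z + b).

    x: features, shape (H, W, C_in); m: mask in [0, 1], same shape as x
    w: kernels, shape (3, 3, C_in, C_out); b: bias, shape (C_out,)
    """
    z = x * m                                  # Z_l = X_l ⊙ M_l
    h, wd, _ = z.shape
    c_out = w.shape[-1]
    zp = np.pad(z, ((1, 1), (1, 1), (0, 0)))   # 'same' zero padding
    out = np.zeros((h, wd, c_out))
    for i in range(h):                          # naive sliding-window convolution
        for j in range(wd):
            patch = zp[i:i + 3, j:j + 3, :]
            out[i, j] = np.tensordot(patch, w, axes=3) + b
    return np.maximum(out, 0.0)                # φ_l = ReLU

rng = np.random.default_rng(0)
x = rng.random((8, 8, 4))
m = (x < 0.9).astype(float)                    # zero out "saturated" features
y = masked_conv_layer(x, m, rng.standard_normal((3, 3, 4, 16)) * 0.1, np.zeros(16))
assert y.shape == (8, 8, 16)
```

Masking the features before the convolution (rather than the output after it) is the point of the scheme: saturated activations never contaminate their neighbors through the kernel support.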