The document provides an overview of key elements and trends in high-quality image production, including spatial resolution, temporal resolution, dynamic range, color gamut, quantization, and related technologies. It discusses technologies like HD, UHD, HDR and WCG and how they improve the total quality of experience. Images and charts are included to illustrate comparisons of technologies and results from industry surveys on trends and commercial projects.
This document discusses key elements that contribute to high-quality image production, including spatial resolution, frame rate, dynamic range, color gamut, bit depth, and compression artifacts. It examines these elements in the context of 4K and 8K broadcast cameras and their advantages over HD. Factors like wider viewing angles, an increased sense of motion, and benefits for nature documentaries are cited as motivations for 8K. Technical details covered include lens flange back distance, flare, shading, chromatic aberration, and testing procedures. Overall quality is represented as a function of these various image quality factors.
This document discusses IP interfaces for video production and summarizes the benefits of IP-based systems compared to SDI. It provides examples of IP-enabled video switchers and control systems from Sony and Grass Valley. The rest of the document discusses standards organizations and specifications that enable IP interoperability such as SMPTE ST 2110, AES67, and AIMS. It also summarizes IP routing and processing platforms like Grass Valley's GV Node and control systems like Lawo's VSM.
This document outlines elements of high-quality image production, including spatial and temporal resolution, dynamic range, color gamut, bit depth, and coding. It discusses color gamut conversion, gamma correction, HDR and SDR mastering, tone mapping, and backwards compatibility. The document also covers HDR metadata standards and different distribution scenarios for HDR content.
This document provides an overview of color video signals and color perception by the human visual system. It discusses:
1. The sensitivity of human cone cells to different wavelengths of light and how this determines color perception.
2. How color video signals like YUV, RGB, and composite video encode color and brightness information.
3. Standards for analog color television transmission including NTSC, PAL, and SECAM which differ in aspects like lines, frame rate, and color encoding.
This document discusses emerging technologies and optimization techniques for media workflows and content management. It covers topics like virtual, augmented and mixed reality, 3D spatial audio, high dynamic range video, media over IP, object-based media, video compression techniques, and streaming. Specific technologies and standards discussed include 360-degree video, MPEG-H for 3D audio, HDR10, Dolby Vision, HLG, and SMPTE ST 2110 for media over IP. Applications and use cases are also presented for mixed reality, spatial computing, and next-generation audiovisual experiences.
This document provides information about quality control testing of audiovisual content. It discusses various quality control tests that can be performed, including tests for analogue frame synchronization errors, black bars, constant colour frames, flashing video, macroblocking, video deinterlacing artifacts, and digital tape dropouts. Examples are provided for how each test can be configured and what results might look like. The goal of the quality control tests is to help broadcasters optimize their automated quality control systems and cope with increasing amounts of digital content.
The document discusses various networking protocols and standards related to professional media over IP, including:
- SMPTE ST 2110 standards that define carriage of uncompressed video, audio, and data over IP networks as separate elementary streams.
- AES67, which enables high-performance audio-over-IP streaming interoperability between different IP audio networking products.
- Other relevant standards and protocols like SMPTE ST 2022, AIMS recommendations, Video Services Forum TR-03/04, RTP, SDP, PTP, and IGMP.
- Considerations for designing IP infrastructures for media networks, including capacity, connectivity, timing, control, and redundancy.
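Capacity is the easiest of these considerations to quantify: an uncompressed ST 2110-20 video essence consumes bandwidth in direct proportion to its raster, frame rate, sampling, and bit depth. A minimal Python sketch of that arithmetic follows; the function name is mine, and the overhead note is a rough assumption rather than a figure from the standard.

```python
def st2110_20_payload_gbps(width, height, fps, bits=10, samples_per_px=2):
    """Active-video payload of an uncompressed ST 2110-20 stream.

    samples_per_px=2 models 4:2:2 (one luma plus one alternating chroma
    sample per pixel); RTP/UDP/IP headers add a few percent on top.
    """
    return width * height * fps * samples_per_px * bits / 1e9

print(st2110_20_payload_gbps(1920, 1080, 50))  # ~2.07 Gb/s for 1080p50 4:2:2 10-bit
```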
The document provides a history of the development of television technology from the late 1800s through the 1920s. Some key developments include:
- In 1873, Willoughby Smith discovered the photoconductivity of selenium, which formed the basis for early television experiments.
- In 1884, Paul Nipkow patented the Nipkow disk, which laid down many basic concepts like scanning and synchronization.
- In 1923, Vladimir Zworykin filed his patent for the iconoscope, an electronic camera tube; his kinescope display tube, demonstrated in 1929, was its receiving-end counterpart.
- In 1924, John Logie Baird transmitted his first rudimentary television images.
- In 1925, Vladimir Zworykin demonstrated 60-line television using a curved-line image structure typical of mechanical television at the time.
This document outlines an educational course on audio and video over IP. The course covers IP networking fundamentals and standards including TCP/IP, OSI models, and SMPTE ST 2110. It also examines IP infrastructure, routing, timing issues, switching, compression techniques and case studies for broadcast facilities transitioning to IP. The document provides an in-depth outline of topics covered in each session, from IP basics to designing and integrating both hybrid and fully IP-based outside broadcast trucks. The goal is to educate on best practices for implementing audio and video over IP workflows and infrastructure.
This document provides an overview of analog and digital triax systems used for video transmission. It discusses key aspects of triax cables, such as their ability to carry multiple signals simultaneously over a single cable. Both analog and digital triax systems are described: analog systems transmit component signals on different carrier frequencies, while digital systems transmit the signals as serial digital data. The document also covers triax cable specifications, common connector types used in broadcasting applications across different standards, fiber optic cable types including single-mode and multi-mode, and common fiber connectors. Transmission distances and electrical properties of triax cables are discussed.
HDR, wide color gamut, and higher frame rates are new technologies that can improve image quality for ultra high definition televisions. They provide benefits like more vivid colors, deeper blacks, better shadow detail, and a more immersive viewing experience. However, supporting these new features requires significantly more data bandwidth compared to legacy standards. Future video standards will need to efficiently support higher resolutions, wider color, high dynamic range, and high frame rates to deliver next-generation picture quality while still allowing content to be economically distributed.
Dr. Mohieddin Moradi provides an outline on high dynamic range (HDR) technology. The 3-page document covers various topics related to HDR including different HDR technologies, tone mapping, color representation, and HDR standards. It discusses concepts such as scene-referred vs display-referred conversions, and direct mapping vs tone mapping when converting between HDR and SDR formats. The document also examines potential side effects when mixing different conversion techniques in a production workflow.
This document provides definitions and explanations of various optical terminology related to light passing through a lens, including:
- Dispersion, refraction, diffraction, reflection, focal point, focal length, principal point, image circle, aperture ratio, numerical aperture, optical axis, and more. It discusses concepts such as entrance pupil, exit pupil, angular aperture, and how they relate to lens performance. The document also covers topics like vignetting, the cosine law, and flare. Overall, it serves as a comprehensive reference for understanding optical and photographic lens terminology.
The document discusses high dynamic range (HDR) imaging technologies including:
- Standards for HDR encoding like SMPTE ST 2084 (PQ) and ARIB STD-B67 / ITU-R BT.2100 (HLG)
- Opto-electronic transfer functions (OETFs) and electro-optical transfer functions (EOTFs) used in HDR systems
- The human visual system's sensitivity to luminance levels and how this relates to quantization in HDR images
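Since the PQ curve is fully specified in SMPTE ST 2084, its EOTF can be written out directly. Below is a minimal NumPy sketch; the constants are the published ST 2084 values, while the function name is mine.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(e):
    """Map a non-linear PQ signal e in [0, 1] to luminance in cd/m^2."""
    p = np.power(e, 1 / M2)
    y = np.power(np.maximum(p - C1, 0) / (C2 - C3 * p), 1 / M1)
    return 10000.0 * y

print(pq_eotf(np.array([0.0, 0.5, 1.0])))  # 0 ... 10000 cd/m^2 at full scale
```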
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 1, by Dr. Mohieddin Moradi
The document discusses standards for serial digital interface (SDI) video signals. It provides information on:
- Early SDI standards, including SMPTE 259M for SD-SDI at 270 Mb/s, and how they standardized a serial digital video connection.
- Video signal sampling structures and resolutions for SD, HD, and UHD formats.
- The development of higher data rate SDI standards up to 12G-SDI and 24G-SDI to support higher resolution video.
- Electrical parameters and cable distance limitations for different SDI data rates.
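The listed rates follow directly from the raster: total samples per line times total lines times frame rate times the 20-bit Y/C word multiplex. A quick Python check (helper name assumed, not from the document):

```python
def sdi_rate_gbps(samples_per_line, total_lines, fps, bits_per_word=20):
    # total raster (active + blanking) times the 20-bit Y/C multiplex
    return samples_per_line * total_lines * fps * bits_per_word / 1e9

print(sdi_rate_gbps(2200, 1125, 30))  # 1.485 -> HD-SDI (1080i; /1.001 for 59.94 fields)
print(sdi_rate_gbps(2200, 1125, 60))  # 2.970 -> 3G-SDI (1080p60)
```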
1. The document discusses color temperature and how different light sources emit different color spectrums that video cameras must account for through color balancing. Color temperature is used as a reference to adjust the camera's color balance to match the light source.
2. After color temperature conversion optically or electronically, white balance is then used to precisely match the light source color temperature by adjusting the camera's video amplifiers (a conceptual sketch follows this list).
3. Other topics covered include polarizers, neutral density filters, and technical aspects of video such as gamma correction and clipping levels.
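As a toy model of the white-balance adjustment in item 2, the classic gray-world method scales each channel so their averages match. Real cameras balance R and B amplifier gains against a reference white rather than scene statistics, so this sketch is purely conceptual:

```python
import numpy as np

def gray_world(img):
    """Balance a linear RGB image (H, W, 3) so channel means are equal."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # per-channel gain, cf. video amplifiers
    return np.clip(img * gains, 0.0, 1.0)
```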
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2, by Dr. Mohieddin Moradi
This document discusses high definition video standards including SMPTE 274M, 292M, 372M and dual link SDI formats. It provides details on:
- The HD-SDI standards that define 1080p and 720p video formats and carriage through the 1.5 Gb/s serial digital interface.
- The timing reference signal codes used in HD-SDI to identify lines and perform error checking (a small detector sketch follows this list).
- How a 12-bit color depth can be achieved within the dual link standard by mapping the additional bits across both links.
- The benefits of 3 Gb/s SDI and dual link formats for working at higher resolutions and color spaces prior to finishing.
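The timing reference signals mentioned above begin with a fixed 10-bit preamble (0x3FF, 0x000, 0x000) followed by an XYZ word whose F, V, and H bits identify field, vertical blanking, and EAV/SAV. A minimal detector sketch (function name assumed):

```python
def find_trs(words):
    """Locate EAV/SAV sequences in a list of 10-bit words."""
    hits = []
    for i in range(len(words) - 3):
        if words[i] == 0x3FF and words[i + 1] == 0 and words[i + 2] == 0:
            xyz = words[i + 3]
            f = (xyz >> 8) & 1   # field bit
            v = (xyz >> 7) & 1   # vertical blanking bit
            h = (xyz >> 6) & 1   # 1 = EAV, 0 = SAV
            hits.append((i, f, v, h))
    return hits
```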
This document provides information about 4K lens specifications and performance. It discusses key optical parameters for 4K lenses such as sharpness, chromatic aberration, depth of field, and resolution. The document explains how 4K lenses are designed to minimize chromatic aberration and enhance modulation transfer function to improve image quality. It also describes the benefits of 4K lenses for wide color gamut and high dynamic range imaging applications. These benefits include reduced color fringing, flare, and black level for increased dynamic range. Examples are provided comparing image quality between 4K and HD lenses. The document concludes with information about Canon's cinema lens lineup and technologies.
This document discusses various optical and technical aspects of camera lenses, including:
1) It defines focal length as the distance between a lens and the point where light passing through converges, known as the focal point. Shorter focal lengths provide wide-angle views while longer focal lengths provide magnified close-up views.
2) F-number and f-stop are defined, with f-number indicating the maximum light a lens can admit and f-stop indicating light levels at smaller iris openings. Smaller f-numbers and f-stop numbers admit more light.
3) The relationship between aperture, focal length, and depth of field is explained. Smaller apertures provide deeper depth of field, while larger apertures yield a shallower one.
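The aperture/depth-of-field relationship in item 3 can be made concrete with the standard hyperfocal-distance formulas. A small sketch, assuming a conventional 0.03 mm circle of confusion (names and example numbers are illustrative):

```python
def dof_limits(f_mm, n, s_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness for focus distance s_mm."""
    hyp = f_mm ** 2 / (n * coc_mm) + f_mm          # hyperfocal distance
    near = hyp * s_mm / (hyp + (s_mm - f_mm))
    far = hyp * s_mm / (hyp - (s_mm - f_mm)) if s_mm < hyp else float("inf")
    return near, far

print(dof_limits(50, 2.8, 3000))   # 50 mm lens at f/2.8 focused at 3 m
```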
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES, by Dr. Mohieddin Moradi
This document discusses elements of high-quality image production for television broadcasting such as spatial resolution, frame rate, dynamic range, color gamut, quantization, and total quality of experience. It outlines these elements and provides examples of their implementation in HD, UHD1, and UHD2 formats. Motivations for 8K and 4K broadcasting are discussed related to improved image quality, new applications, and bandwidth efficiency trends. Implementation examples of 4K and 8K broadcasting systems from Japan, Korea, Sweden, and the UK are also summarized.
The document outlines topics related to video over IP infrastructure and standards. It discusses IP technology trends, networking basics, video and audio over IP standards, SMPTE ST 2110, NMOS, infrastructure considerations, timing issues, clean switching methods, compression, broadcast controller/orchestration, and case studies for migrating broadcast facilities to IP. The document provides an overview and outline for presenting on designing, integrating, and managing IP-based broadcast facilities and production workflows.
Video Compression, Part 3-Section 2, Some Standard Video Codecs, by Dr. Mohieddin Moradi
This document discusses MPEG-2 Transport Streams and Packetized Elementary Streams. It describes how MPEG-2 Transport Streams use fixed-length 188-byte packets containing compressed video, audio, or data from one or more programs identified by Packet IDs. These packets can carry Packetized Elementary Stream packets, which contain compressed elementary streams with timestamps for synchronization. The document also discusses how Transport Streams allow for synchronous multiplexing of multiple programs from independent time bases into a single stream.
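Because the packet layout is fixed, extracting the Packet ID from a 188-byte transport packet takes only a few bit operations. A minimal parser sketch (function name assumed):

```python
def parse_ts_header(pkt: bytes):
    """Return (pid, payload_unit_start, continuity_counter) for one TS packet."""
    assert len(pkt) == 188 and pkt[0] == 0x47      # 0x47 sync byte
    pusi = bool(pkt[1] & 0x40)                     # payload unit start indicator
    pid = ((pkt[1] & 0x1F) << 8) | pkt[2]          # 13-bit Packet ID
    cc = pkt[3] & 0x0F                             # 4-bit continuity counter
    return pid, pusi, cc
```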
This document provides information about various camera settings and technologies for capturing clear images, including:
1. Clear Scan helps eliminate banding caused when a camera's frame rate does not match a CRT display's refresh rate.
2. Slow Shutter extends the camera's exposure time to produce blur effects or allow more light in low-light scenes.
3. Super Sampling uses a 1080p camera to produce sharper 720p images by maintaining higher frequency response.
4. Detail correction adds a spike-shaped detail signal to make edges appear sharper without degrading resolution. Settings like detail level and H/V ratio control the amount and balance of detail correction (a simplified sketch follows this list).
5. Other related topics are also covered.
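Detail correction as described in item 4 is conceptually close to unsharp masking: derive a high-frequency "detail" signal and add back a controlled amount. A simplified single-channel sketch, not the camera's actual circuit:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_detail(y, level=0.5, sigma=1.0):
    """Sharpen a luma image y by re-adding a scaled high-frequency signal."""
    detail = y - gaussian_filter(y, sigma)   # spike-shaped edge signal
    return np.clip(y + level * detail, 0.0, 1.0)
```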
This document discusses video compression techniques. It begins by outlining the history of video compression and describing the basic components of a generic video encoder/decoder system. It then covers specific compression methods including differential pulse code modulation, transform coding using discrete cosine transform, quantization, and entropy coding. The document also discusses techniques for reducing both spatial and temporal redundancy in video, such as prediction coding. It provides examples of how quantization is used to control quality and compression ratio in both lossy and lossless compression systems.
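The transform-plus-quantization stage described here fits in a few lines: an orthonormal 8x8 DCT-II followed by division by a quantizer step, which is where the quality/bit-rate trade-off is set. A NumPy sketch (the flat quantizer is a simplification; real codecs use per-coefficient matrices):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block."""
    n_pts = block.shape[0]
    n = np.arange(n_pts)
    basis = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * n_pts))
    scale = np.full(n_pts, np.sqrt(2 / n_pts))
    scale[0] = np.sqrt(1 / n_pts)
    c = scale[:, None] * basis
    return c @ block @ c.T

block = np.random.rand(8, 8) * 255     # stand-in for an 8x8 luma block
step = 16                              # larger step -> coarser quantization
coeffs = np.round(dct2(block) / step)  # quantized coefficients for entropy coding
```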
This document provides an overview of high definition television (HDTV) standards and concepts such as color gamut, color bars test signals, colorimetry, chroma adjustment, and luminance adjustment. It discusses differences between standard definition (SDTV) and HDTV color bars, how wider color gamuts in HDTV allow for deeper colors, and how to use various elements of the color bars signal to properly adjust a display's color, brightness, contrast, and chroma. The document contains diagrams demonstrating color gamuts and examples of how objects appear within different gamuts.
Video Compression, Part 3-Section 1, Some Standard Video Codecs, by Dr. Mohieddin Moradi
- ISO/IEC JTC 1/SC 29 and ITU-T are the main organizations that develop video coding standards through working groups like MPEG and VCEG.
- Early standards include H.261 for video telephony and conferencing, and MPEG-1 for Video CD-quality video.
- Later standards like H.264/AVC, HEVC, and future VVC provide increasingly higher compression through use of block transforms, motion compensation, and entropy coding in a hybrid video codec framework.
- Key organizations periodically collaborate through joint teams like JVT and JCT-VC to develop standards like AVC and HEVC.
The document discusses video compression history and standards, including codecs such as H.261, H.262/MPEG-2, H.263, H.264/AVC, H.265/HEVC, and the roles of organizations like MPEG, VCEG, and ITU-T in developing video coding standards to ensure interoperability. It also covers video encoding and decoding principles, as well as common container formats and their applications in areas like broadcasting, streaming, and storage.
This document provides an overview of video standards and concepts related to standard definition television (SDTV) and high definition television (HDTV). It begins with definitions of key terms like interlacing, progressive scanning, and frame rates. It then covers standards for monochrome signals, including signal timings, synchronization pulses, and blanking intervals. Digital SDTV standards like line counts, field structures, and ancillary data space are also summarized. The document concludes with discussions of spatial resolution, optimal viewing distances, and different aspect ratios used in television.
Broadcast day-2010-ses-world-skies-sspi, by SSPI Brasil
The document discusses the growth of digital video and satellites as an enabling technology for broadcasting. Some key points:
- Satellites allow for low-cost point-to-multipoint broadcasting to subscribers. Over 24,000 TV channels are now broadcast by satellite, with 2,900 added in 2008-2009 alone.
- Emerging markets are forecasted to see powerful subscriber growth, driving demand for over 200 additional transponders across regions like Asia-Pacific, Latin America, and the Middle East/Africa through 2016.
- High-definition TV is a major driver of transponder demand, with the number of HD channels projected to grow exponentially from over 300 today to over 3,000 by 2017.
The global broadcast switchers market was valued at US$ 1.86 billion in 2021 and is expected to grow at a CAGR of 6.1% from 2022 to 2031 to reach US$ 3.32 billion by 2031. Companies are focusing on high-growth applications like sports to keep their businesses growing post-COVID. The transition from analog to digital broadcasting is driving demand as it requires technological advancements. Production switchers continue to evolve with more powerful features and are expected to be the fastest growing segment. Advancements in high definition and 4K video resolution are also fueling market growth.
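A quick compound-growth check shows the quoted figures are roughly self-consistent (small gaps come from rounding and base-year conventions):

```python
value_2021_bn = 1.86
cagr = 0.061
print(value_2021_bn * (1 + cagr) ** 10)  # ~3.36, close to the quoted US$3.32bn by 2031
```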
Beyond HDTV .docx, by kenjordan97598
Beyond HDTV
By John Boyd
The future of television got a test-drive recently in New York City. While consumers around the globe are just now getting acquainted with the vivid picture quality of high-definition television, or HDTV, a far more advanced super-high-resolution system is in the works. NHK, Japan's public broadcaster, is working on what it has dubbed Super Hi-Vision: a TV technology—not expected to be commercialized for a decade or more—that produces live video with a resolution 16 times that of today's HDTV and twice that of 70-millimeter movies. The New York City test was recorded for display at a convention of broadcasters who were meeting in Las Vegas.
Last November, NHK conducted its first live test in the field, when it transmitted an uncompressed 24-gigabit-per-second SHV video signal for several hours, producing a picture with a resolution of 7680 by 4320 pixels. The live video was relayed over 260 kilometers of optical fiber and viewed on a screen measuring 10 meters by 5.5 meters. The transmission also included a technically swank audio scheme, with more than 22 channels, to match the video's high resolution. To shoot the live transmission, the researchers used two custom-built cameras equipped with four 8-megapixel CMOS sensors.
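The quoted 24 Gb/s is consistent with simple raster arithmetic under one plausible sampling assumption (8-bit 4:2:0, i.e. about 12 bits per pixel); this is a back-of-the-envelope check, not NHK's published format breakdown:

```python
pixels = 7680 * 4320                 # the SHV raster described in the article
gbps = pixels * 60 * 12 / 1e9        # 60 fps at ~12 bits/pixel (8-bit 4:2:0)
print(gbps)                          # ~23.9, matching the quoted 24 Gb/s
```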
Months before, NHK had shown off an 8-minute SHV video to visitors at the 2005 World Expo held near Nagoya, from March to September last year. After postproduction the movie weighed in at 1.4 terabytes and had to be stored on a hard-disk array.
"The typical reaction of the audience was 'Sugoii!' ('Wow!')," says Masaru Kanazawa, a senior
researcher engineer in NHK's Science & Technical Research Laboratories, in western Tokyo. He
says some 1.6 million Expo attendees watched the video, and many were astonished with the
heightened sense of reality it evoked. He attributes this in part to the video's clarity; the system's
wide viewing angle of 100 degrees, as opposed to HDTV's 30 degrees and the 15 degrees for
standard television; and the advanced audio system. "They felt they were a part of the same
scenes," he says.
Despite making such technological progress, NHK's researchers are quick to caution that commercialization of SHV is years—and maybe decades—away. And there are lots of technical and political hurdles left to leap. For instance, the company is working to have the format accepted as an international standard by the International Telecommunication Union Radiocommunication Sector (ITU-R), which regulates radio spectrum. If an agreement is reached, Kanazawa says the proposed standard could be published as early as this year, and then member countries would get to vote on it.
Perhaps a much greater hurdle SHV faces is further developing the technology so that it can be used for broadcasting. Because of the huge amount of data involved, today it only works over optical fiber. But NHK is looking t.
Elemental high dynamic range video white paper, by CMR WORLD TECH
FROM SCIENCE TO PRACTICE
The next large challenge facing the video industry is translating the science behind HDR into a system or systems that can actually perform the required tasks of making HDR a reality for consumers and provide a return on investment for providers. This adds complexity by bringing the laboratory into the marketplace.
Dream Live, an outdoor real-time broadcast relay system, enables a single operator to stream live video without an expensive outside-broadcast van.
The document discusses proposed extensions to the DVB-S2 satellite communication standard to improve data transmission efficiency. Some key extensions mentioned include reducing transponder roll-off factors and side lobes to allow transponders to be placed closer together, using wider bandwidth transponders, adding a new 64 APSK modulation, and additional modulation and coding schemes. These extensions could provide up to a 20% increase in data rates for direct-to-home broadcasts and potentially a 64% increase for professional services. The improvements will help enable higher data transmission rates to support services like ultra-high definition TV and high-speed IP.
Cisco Systems is a large networking company founded in 1984 that generates over $40 billion in annual revenue. It has a dominant position in routers and switches with over 70% market share. However, competition from HP, Juniper, and others poses threats. Cisco's strengths include its strategic partnerships and acquisitions strategy, while weaknesses include lack of brand recognition in consumer markets and high prices. In the long term, Cisco aims to improve its position in consumer products and capitalize on opportunities in smart grid technology and cloud computing.
Rufo Guerreschi is the CEO of a startup building privacy-focused consumer electronics. They are developing a thin handheld device called the CivicPod and a TV box called the CivicBox that greatly improve security and privacy for communications and entertainment. The devices will anonymize digital communications and allow users to access content and apps from their phones on a TV screen. The startup is a spin-off of the Trustless Computing Consortium, which provides unique security expertise. They plan to launch the CivicPod and dongle in the next year, targeting high-net-worth individuals, and roll out the CivicTV more widely thereafter through partnerships with telcos and content providers. The goal is to scale exponentially.
Korea was the first country to broadcast UHDTV (Ultra High Definition Television) over DVB-T2 starting in 2012 at 30fps. By 2014, major Korean broadcasters were transmitting their own UHDTV programs at 60fps over DVB-T2 to cover the Seoul metropolitan area. This case study discusses the technical challenges of delivering UHDTV over terrestrial transmission in Seoul, such as achieving 60fps live transmission, providing coverage across Seoul's single frequency network, and targeting both rooftop and indoor reception within a 6MHz bandwidth. The use of HEVC encoding and improvements to it will be key to meeting these challenges at data rates below 20Mbps needed for error-free indoor reception.
We are building a mass-market 2mm-thin handheld and a TV-connected box that, jointly, radically exceed the state of the art in (A) the privacy and security of your communications, and (B) the choice of content and quality of experience of your home entertainment.
The document discusses proposed extensions to the DVB-S2 digital video broadcasting standard over satellites. The extensions aim to improve spectrum efficiency and allow for higher data rates. Key points include:
1) Reducing roll-off factors and side lobes of carriers allows transponders to be placed closer together, utilizing bandwidth more efficiently (see the check after this list).
2) Using wider bandwidth transponders that are 3 times the size of typical ones further increases efficiency.
3) A new 64-APSK modulation scheme and additional coding options provide more flexibility to optimize throughput.
4) Overall, the extensions could provide up to 20% higher data rates for broadcasting and 64% for professional services.
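Point 1 follows from the relation between occupied bandwidth and symbol rate, B = Rs(1 + alpha): shrinking the roll-off alpha leaves more of the channel for symbols. A quick check (the 36 MHz transponder width is illustrative):

```python
def symbol_rate_msps(bandwidth_mhz, rolloff):
    return bandwidth_mhz / (1 + rolloff)   # Rs = B / (1 + alpha)

for alpha in (0.35, 0.20, 0.05):
    print(alpha, round(symbol_rate_msps(36, alpha), 2))  # 26.67 / 30.0 / 34.29 Msym/s
```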
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/12/trends-in-neural-network-topologies-for-vision-at-the-edge-a-presentation-from-synopsys/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Pierre Paulin, Director of R&D for Embedded Vision at Synopsys, presents the “Trends in Neural Network Topologies for Vision at the Edge” tutorial at the September 2020 Embedded Vision Summit.
The widespread adoption of deep neural networks (DNNs) in embedded vision applications has increased the importance of creating DNN topologies that maximize accuracy while minimizing computation and memory requirements. This has led to accelerated innovation in DNN topologies.
In this talk, Paulin summarizes the key trends in neural network topologies for embedded vision applications, highlighting techniques employed by widely used networks such as EfficientNet and MobileNet to boost both accuracy and efficiency. He also touches on other optimization methods—such as pruning, compression and layer fusion—that developers can use to further reduce the memory and computation demands of modern DNNs.
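Of the optimization methods mentioned, magnitude pruning is the simplest to illustrate: weights below a magnitude threshold are zeroed, cutting compute and, with sparse storage, memory. A framework-agnostic NumPy sketch:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)
```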
The document discusses proposed extensions to the DVB-S2 satellite broadcasting standard to enable higher data rates for services like UHDTV. The extensions include reducing transponder roll-off factors and side lobes to fit more transponders in the same bandwidth, using wider 72 MHz transponders, adding a new 64-APSK modulation, and more modulation and coding options. These techniques could provide up to a 20% gain for direct broadcasting and up to 64% for professional links. The improvements will help deliver higher data rate ultra high definition and other bandwidth-intensive services over satellite.
A set-top box (STB) is a familiar device in the consumer electronics market. It is attached to a television to enhance its functions or the quality of its functions. On the other side, the STB is connected to an external source of signal, such as satellite, cable, terrestrial, or the internet. The STB processes the signal it receives and turns it into content, which is then displayed on the television screen or another display device. There are different types of STBs based on what kinds of signals they can receive and what kind of processing they can do. The most widely used STBs are DVB STBs, which receive DVB (Digital Video Broadcast) transmissions.
This document provides an overview of color spaces and high dynamic range (HDR) technologies. It begins with definitions of color gamut and chromaticity coordinates. It then discusses several key color spaces including Rec.709, Rec.2020, DCI-P3, ACES, and S-Gamut3. It also covers HDR formats like PQ, HLG, and log encoding. The document aims to explain the essential aspects of different color spaces and HDR technologies used for digital cinema and television production.
This document discusses digital television technology trends. It compares analog and digital television systems, describing improvements in quality for signals, images, sound, and number of channels in digital television. It also covers topics like high dynamic range, resolutions up to 8K, color gamuts, frame rates, and video coding standards. The document outlines roadmaps for ultra high definition television standards and deployments in countries like Korea and Japan.
The document discusses proposed extensions to the DVB-S2 satellite communication standard to increase data rates and spectral efficiency. The extensions include reducing transponder roll-off factors and side lobes to place transponders closer together, using wider bandwidth transponders, adding a new 64 APSK modulation, and improving forward error correction. These techniques could provide up to a 20% increase in data rates for direct broadcast services and up to 64% for professional links. The extensions aim to support higher data applications like ultra-high definition TV and high-speed internet services over satellite.
The document discusses proposed extensions to the DVB-S2 satellite communication standard to improve efficiency as demand increases for higher data rates like those needed for UHDTV. The extensions could provide up to a 20% increase for broadcast services and 64% for professional services. Key extensions include reducing roll-off factors and side lobes to fit more transponders in the same spectrum, using wider bandwidth transponders, adding 64 APSK modulation, and improving modulation and coding schemes. The extensions aim to enable higher data rates within existing satellite channels but wider bandwidth transponders may be best for professional not consumer services.
Similar to An Introduction to Video Principles-Part 1
This document provides an overview of high dynamic range (HDR) technology and workflows for HDR video production and mastering. It discusses HDR standards like SMPTE ST 2084 and ARIB STD-B67, camera log curves, luminance levels, and tools for setting up HDR monitoring, including waveform monitors. Specific topics covered include HDR graticules, setting luminance levels for highlights and grey points, and using zebra patterns and zoom modes to evaluate highlight levels in HDR images.
The document discusses high dynamic range (HDR) video technology including:
- Different HDR formats such as SMPTE ST 2084 (PQ) and ARIB STD-B67 / ITU-R BT.2100 (HLG)
- Code value ranges for 10-bit and 12-bit RGB and color difference signals in narrow and full ranges (a conversion sketch follows this list)
- Recommendations for using narrow versus full signal ranges for PQ and HLG
- Transcoding concepts when converting between PQ and HLG formats
- Considerations for including standard dynamic range (SDR) content in HDR programs
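For the 10-bit luma case referenced in the list above, narrow ("video") range spans codes 64 (black) to 940 (nominal peak), with codes outside that span reserved. A one-case conversion sketch (function name assumed):

```python
def luma_to_narrow_10bit(v):
    """Map normalized luma v in [0, 1] to 10-bit narrow-range code values."""
    return round(64 + 876 * v)   # 64 = black, 940 = nominal peak

print(luma_to_narrow_10bit(0.0), luma_to_narrow_10bit(1.0))  # 64 940
```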
The document provides an overview of key concepts in high definition television (HDTV) including:
- Standards and definitions for SDTV and HDTV
- Interlacing and de-interlacing techniques
- Video scaling, edge enhancement, and frame rate conversion
- Signal quality issues in HDTV production and broadcast
- Cables and connectors used for HDTV production
The document contains diagrams and explanations of topics like color bars, genlocking, sampling, interlacing, field order, and 3D video sampling structures. It compares progressive and interlaced scanning and discusses concepts such as the Nyquist frequency, aliasing, and field dominance.
Hue refers to the dominant wavelength of light, which determines the color as perceived by the observer. Saturation refers to the purity of the hue, or the amount of white light mixed with it. Luminance refers to the brightness or intensity of the color.
The document discusses radiometry and photometry, which deal with measuring light across the electromagnetic spectrum and in the visible spectrum respectively. It defines terms like luminous flux, luminous intensity, illuminance, and luminance.
It also covers topics like additive and subtractive color mixing, primary and secondary colors, color spaces, and video signal formats like RGB, YUV, and YCbCr, which are used to represent color images and video, along with human cone sensitivity.
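In Rec.709 video, the luminance just defined is a fixed weighted sum of the linear R, G, B components, the weights reflecting the eye's unequal sensitivity to the three primaries:

```python
def rec709_luminance(r, g, b):
    """ITU-R BT.709 luminance from linear-light RGB in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```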
This document provides an overview of sound and hearing, including:
1. It describes how the human ear works, from collecting sound waves through the outer ear and transmitting vibrations through the ossicles to the cochlea where hair cells detect different frequencies.
2. It discusses properties of sound like loudness, pitch, and timbre, and how they are perceived. Loudness depends on amplitude, pitch on frequency, and timbre on waveform complexity.
3. It explains characteristics of sound waves like wavelength, frequency, speed of sound, and the decibel scale used to measure sound intensity and pressure levels.
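The decibel scale in item 3 is a logarithmic ratio against the 20 micropascal threshold-of-hearing reference, which one line captures:

```python
import math

def spl_db(pressure_pa, p_ref=20e-6):
    """Sound pressure level relative to the 20 uPa hearing threshold."""
    return 20 * math.log10(pressure_pa / p_ref)

print(spl_db(1.0))   # ~94 dB SPL for a 1 Pa sound pressure
```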
Video Compression, Part 4 Section 1, Video Quality Assessment, by Dr. Mohieddin Moradi
This document provides an overview of video compression artifacts that can occur when video is compressed for streaming or storage. It discusses both spatial artifacts, such as blurring, blocking, ringing, and color bleeding, as well as temporal artifacts like flickering and mosquito noise. For each artifact, it describes the visual appearance and potential causes from factors like quantization during compression, motion compensation between frames, and chroma subsampling. The document aims to help understand how compression can degrade perceptual video quality and different types of artifacts that may be evaluated both objectively and subjectively.
Video Compression, Part 4 Section 2, Video Quality Assessment, by Dr. Mohieddin Moradi
This document provides information on conducting subjective video quality assessments. It discusses different subjective assessment methods like double stimulus impairment scale (DSIS) and single stimulus continuous quality evaluation (SSCQE). It describes test parameters like number of observers, viewing conditions, grading scales and how to present the results. Guidelines are provided for tasks like screening observers, conducting test sessions, introducing impairments and collecting opinion scores to evaluate video coding standards and compression artifacts.
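Whichever method is used, raw opinion scores are normally reduced to a mean opinion score with a 95% confidence interval in the spirit of ITU-R BT.500. A minimal sketch:

```python
import statistics

def mos_with_ci95(scores):
    """Mean opinion score and 95% confidence interval for one test condition."""
    n = len(scores)
    mos = statistics.mean(scores)
    ci = 1.96 * statistics.stdev(scores) / n ** 0.5
    return mos, ci

print(mos_with_ci95([4, 5, 3, 4, 4, 5, 4, 3]))
```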
Advanced control scheme of doubly fed induction generator for wind turbine us..., by IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). Their results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Low power architecture of logic gates using adiabatic techniques, by nooriasukmaningtyas
The growing significance of portable systems, and the need to limit power consumption in very dense ultra-large-scale-integration chips, has recently led to rapid and inventive progress in low-power design. The most effective technique for energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low-power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode-based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design; by improving the performance of basic gates, one can improve the performance of the whole system. This paper presents proposed low-power designs of OR/NOR, AND/NAND, and XOR/XNOR gates using the said approaches, and their results are analyzed for power dissipation, delay, power-delay product, and rise time, and compared with other adiabatic techniques as well as conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the DC-DB PFAL designs outperform the modified PFAL technique at 10 MHz, with improvements of 65% for the NOR gate, 7% for the NAND gate, and 34% for the XNOR gate.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Introduction: e-waste definition; sources of e-waste; hazardous substances in e-waste; effects of e-waste on environment and human health; need for e-waste management; e-waste handling rules; waste minimization techniques for managing e-waste; recycling of e-waste; disposal treatment methods of e-waste; mechanism of extraction of precious metal from leaching solution; global scenario of e-waste; e-waste in India; case studies.
ACEP Magazine, 4th edition, launched on 05.06.2024, by Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on lifetime achievement awards given by ACEP, and a technical article on concrete maintenance, repairs, and strengthening. The document highlights the activities of ACEP and provides a technical educational article for members.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
3. Outline
− Elements of High-Quality Image Production
− Human Visual System and Color Perception
− A Short History of Film
− Mechanism of CCD and CMOS Sensors
− Television System History
− Color Video Signal Formats
− The Color Bars Test Signal Specifications
− CIE Color Spaces and Color Gamut Specifications
− Analog to Digital Conversion and Color Sub-Sampling
5. Elements of High-Quality Image Production
Not only more pixels, but better pixels.

Total Quality of Experience (QoE or QoX) = f(Q1, Q2, Q3, …)

Q1 Spatial Resolution (HD, UHD)
Q2 Temporal Resolution (Frame Rate) (HFR)
Q3 Dynamic Range (SDR, HDR)
Q4 Color Gamut (BT.709, BT.2020)
Q5 Coding (Quantization, Bit Depth)
Q6 Compression Artifacts
…
6. Major Elements of High-Quality Image Production
− Spatial Resolution (Pixels): HD, FHD, UHD1, UHD2
− Temporal Resolution (Frame Rate): 24 fps, 30 fps, 60 fps, 120 fps, …
− Dynamic Range (Contrast): from 100 nits to HDR
− Color Space (Gamut): from BT.709 to BT.2020
− Quantization (Bit Depth): 8 bits, 10 bits, 12 bits, …
7. Q1: Spatial Resolution
− SD (PAL): 720 × 576 (0.414 MP)
− HDTV 720p: 1280 × 720 (0.922 MP)
− HDTV 1080: 1920 × 1080 (2.07 MP)
− Digital Cinema 2K (DCI, Digital Cinema Initiatives): 2048 × 1080 (2.21 MP)
− UHDTV 1: 3840 × 2160 (8.3 MP)
− Digital Cinema 4K (DCI): 4096 × 2160 (8.84 MP)
− UHDTV 2: 7680 × 4320 (33.18 MP)
− 8K: 8192 × 4320 (35.39 MP)
Higher resolution supports a wider viewing angle and a more immersive experience.
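The pixel counts above follow directly from width × height; a quick Python check (a minimal sketch, with the format names used as labels only):

```python
# Verify the megapixel figures quoted above: MP = width * height / 1e6.
formats = {
    "SD (PAL)": (720, 576),
    "HDTV 720p": (1280, 720),
    "HDTV 1080": (1920, 1080),
    "DCI 2K": (2048, 1080),
    "UHDTV 1": (3840, 2160),
    "DCI 4K": (4096, 2160),
    "UHDTV 2": (7680, 4320),
    "8K": (8192, 4320),
}
for name, (w, h) in formats.items():
    print(f"{name}: {w * h / 1e6:.2f} MP")
```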
11. Gamut (Q4: Wide Color Gamut)
− The gamut of a color space is the complete range of colors allowed for that color space.
− It is the range of colors allowed for a video signal.
− No video, film, or printing technology is able to reproduce all the colors that can be seen by the human eye.
− The outside edge of the chromaticity diagram defines the fully saturated (spectral) colours.
− Purple is “impossible” as a spectral colour: the line of purples has no monochromatic equivalent.
− Each corner of the gamut defines a primary colour.
Chromaticity coordinates (CIE 1931) of the Rec. 2020 RGB primaries and reference white, with the corresponding wavelengths of monochromatic light:

Parameter             | x      | y      | Wavelength
Red primary (R)       | 0.708  | 0.292  | 630 nm
Green primary (G)     | 0.170  | 0.797  | 532 nm
Blue primary (B)      | 0.131  | 0.046  | 467 nm
Reference white (D65) | 0.3127 | 0.3290 |

Opto-electronic transfer characteristics before non-linear pre-correction: assumed linear.
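The chromaticities in this table are all that is needed to build the RGB-to-XYZ conversion matrix for the color space. The sketch below uses the standard derivation (scale the primaries so that R = G = B = 1 reproduces the D65 white); it is an illustrative numpy implementation, not a normative one:

```python
import numpy as np

def xy_to_XYZ(x, y):
    # Chromaticity (x, y) to XYZ with Y normalized to 1.
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

# Rec. 2020 primaries and D65 white, from the table above.
R, G, B = xy_to_XYZ(0.708, 0.292), xy_to_XYZ(0.170, 0.797), xy_to_XYZ(0.131, 0.046)
W = xy_to_XYZ(0.3127, 0.3290)

P = np.column_stack([R, G, B])   # primaries as XYZ columns
S = np.linalg.solve(P, W)        # scales so that RGB = (1, 1, 1) maps to white
M = P * S                        # RGB -> XYZ matrix for Rec. 2020
print(np.round(M, 4))
```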
12. Q4: Wide Color Gamut (WCG)
– Deeper colors
– More realistic pictures
– More colorful
− Wide color space (ITU-R Rec. BT.2020): covers 75.8% of the CIE 1931 color space.
− Standard color space (ITU-R Rec. BT.709): covers 35.9% of the CIE 1931 color space.
(Figure: both gamuts shown on the CIE 1931 chromaticity diagram.)
16. Scenes Where HDR Performs Well
− Expanding the user’s expression: movies, CG/games, advertisement (signage, events), digital archives (museums).
− Conveying the atmosphere/reality: live music and concerts, sports, nature, night views.
17. Q3+Q4: High Dynamic Range (HDR) + Wide Color Gamut (WCG)
(Figure: the same scene rendered as SDR, HDR, and HDR+WCG.)
− HDR: more vivid, more details.
− HDR+WCG: more real, more colorful.
21. Brief Summary of ITU-R BT.709, BT.2020, and BT.2100
− ITU-R BT.709, BT.2020 and BT.2100 address transfer function, color space, matrix coefficients, and more.
− The following table is a summary comparison of those three documents.

Parameter             | ITU-R BT.709           | ITU-R BT.2020                | ITU-R BT.2100
Spatial Resolution    | HD                     | UHD, 8K                      | HD, UHD, 8K
Frame rates           | 24, 25, 30, 50, 60     | 24, 25, 30, 50, 60, 100, 120 | 24, 25, 30, 50, 60, 100, 120
Interlace/Progressive | Interlace, Progressive | Progressive                  | Progressive
Color Space           | BT.709                 | BT.2020                      | BT.2020
Dynamic Range         | SDR (BT.1886)          | SDR (BT.1886)                | HDR (PQ, HLG)
Bit Depth             | 8, 10                  | 10, 12                       | 10, 12
Color Representation  | RGB, YCBCR             | RGB, YCBCR                   | RGB, YCBCR, ICTCP
22. What’s Important in UHD
(Figure: survey ratings on a 0–10 scale for: Next Gen Audio, WCG, HDR, New EOTF, HFR (> 50 fps), Screen Size, 4K Resolution.)
23. Relative Bandwidth Demands of 4K, HDR, WCG, HFR
(Reference: HD SDR BT.709 8-bit. Figure: relative bandwidth required for 4K UHDTV, 120 fps HFR, 60 fps HFR, HDR, wide color gamut, and 10-bit depth.)
24. Devoncroft’s Big Broadcast Survey (BBS)
− The results have been presented dozens of times at conferences including Broadcast Asia, CES, IBC, NAB, NAB NY, and SVG.
− Devoncroft’s Big Broadcast Survey (BBS) is the largest study of broadcast and digital media end-users.
− The BBS has been conducted annually for more than a decade, with 6,000 – 10,000 media technology executives participating each year.
− The unrivalled richness of the BBS data set provides Devoncroft with unique insight into the factors that move markets, as well as the brands that are most likely to be successful over time.
• 2019 Big Broadcast Survey: more than 100 countries.
27. Most Important Technology Trend
− The chart is visualized as a weighted index, not as a measure of the number of people who said a given trend was most important to them (a statistical weighting is applied to the results, based on how research participants ranked them).
− It is a measure of what research participants say is commercially important to their businesses in the future, not what they are doing now or where they are spending money today.
28. Most Important Technology Trend
− The chart is visualized as a weighted index, not as a measure of the number of people who said a given trend was most important to them (a statistical weighting is applied to the results, based on how research participants ranked them).
− It is a measure of what research participants say is commercially important to their businesses in the future, not what they are doing now or where they are spending money today.

2020 BBS Broadcast Global Trend Index*:
1. Multi-platform content delivery (2)
2. IP networking & content delivery (1)
3. 4K / UHD (3)
4. 5G (6)
5. Remote production (13)
6. Cloud computing / Virtualization (7)
7. Artificial Intelligence / Machine Learning (4)
8. Move to automated workflows (5)
9. Improvements in video compression efficiency (10)
10. Cyber Security (12)
11. High Dynamic Range (HDR) (11)
12. Centralized operations (playout, transmission etc.) (18)
13. Next generation broadcasting (ATSC 3.0, DVB-T2 etc.) (15)
14. File-based / tapeless workflows (9)
15. Targeted / Programmatic advertising (16)
16. Video on demand / SVOD (17)
17. Transition to multi-channel / immersive audio (8)
18. Virtual Reality (14)
19. Transition to HDTV / 3Gbps (1080p) operations (19)
20. Outsourced operations (playout, transmission etc.) (20)
*2019 rankings shown in parentheses. Source: Devoncroft 2020 Big Broadcast Survey.
29. Most Important Technology Trend
− The evolution of the BBS Broadcast Industry Global Trend Index in each of the years 2011, 2015 and 2019 is shown in the table below.

30. Most Important Technology Trend
− The evolution of the BBS Broadcast Industry Global Trend Index in each of the years 2011, 2015 and 2020 is shown in the table below.
31. BBS Broadcast Industry Global Project Index
− Unlike industry trend data, which highlights what respondents are thinking or talking about doing in the future, this information provides direct feedback about what major capital projects are being implemented by broadcast technology end-users around the world, and provides useful insight into the expenditure plans of the industry.
− The result is the 2020 BBS Broadcast Industry Global Project Index, shown below, which measures the number of projects that BBS participants are currently implementing or have budgeted to implement.

32. BBS Broadcast Industry Global Project Index (continued)
33. IHS Markit
− IHS Markit combines information, analytics and expertise to provide solutions for business, finance and government.
− “We help our customers see why things happen and focus on what really matters so they can make confident decisions to improve efficiency, outpace competitors and drive growth.”
− Ex: 4K TV Penetration Trend

34. IHS Markit (continued)
− Ex: UHD Household Share Forecast
49. Human Visual System
− The macula is the functional center of the retina (about 5 mm in diameter).
− It gives us the ability to see “20/20” and provides the best color vision.
− The central macula is called the fovea (the fovea lies at the very center of the macular region).
− The fovea is a small region (1 or 2°) at the center of the visual field containing the highest density of cones (and no rods).
− The fovea is perhaps the most important part of the eye. Very often, vision is not lost until the fovea is affected by disease.
(Figure: anatomy of the eye: cornea, retina, sclera, pupil, choroid, ciliary body, suspensory ligament, iris, vitreous, and the macula, the central area of the retina containing the fovea.)
50. Human Visual System
The human eye has one lens (used to focus), an iris (used to adjust the light level), and a retina (used to sense the image).
The retina is made up of rod- and cone-shaped cells:
• About 120 million rods used for black & white vision (estimates range from 70 to 150 million rods in each eye).
• About 7 million cones used for colour vision (6 to 7 million cones in each eye).
The three cone types have peak wavelengths in the ranges 564–580 nm, 534–545 nm, and 420–440 nm, respectively, depending on the individual:
• S (short-wavelength cone): 420–440 nm (close to blue); about 2% of cones.
• M (medium-wavelength cone): 534–545 nm (green); about 33%.
• L (long-wavelength cone): 564–580 nm (close to red); about 65%.
(Figure: normalized sensitivity curves of the rod cells and the S, M, and L cones.)
51. Human Cone Sensitivity
− The highest point on each curve is called the “peak wavelength”, indicating the wavelength of radiation to which the cone is most sensitive.
(Figure: normalized human cone sensitivity: rod cells and the S, M, and L cones; S: 420–440 nm, ~2%; M: 534–545 nm, ~33%; L: 564–580 nm, ~65%.)
52. Human Visual System: Rods vs. Cones
Rods:
• There are 70 to 150 million rods in each eye.
• Contain photo-pigment.
• Respond to low energy; enhance sensitivity.
• Distributed over the retina surface, but concentrated outside the fovea.
• One type, sensitive to grayscale changes.
• Rods don’t discern fine details; they give a general picture of the field of view.
• Rod vision is called scotopic or dim-light vision.
Cones:
• There are 6 to 7 million cones in each eye.
• Contain photo-pigment.
• Respond to high energy; enhance perception.
• Concentrated in the fovea; exist sparsely in the rest of the retina.
• Three types, sensitive to different wavelengths.
• Each cone is connected to its own nerve ending, so humans can resolve fine details.
• Cone vision is called photopic or bright-light vision.
53. Human Visual System: Fovea
− The fovea is a small region (1 or 2°) at the center of the visual field containing the highest density of cones (and no rods).
• The centre of the image falls on the fovea.
– The fovea sees colour only.
• The optic nerve leaves the eye at the blind spot.
• The fovea is a small, dense region of receptors, containing only cones (no rods); it provides our visual acuity.
• Outside the fovea there are fewer receptors overall, with a larger proportion of rods.
56. Types of Visual Perception Possible
− As we move further from the fovea, vision becomes more limited.
− Colour vision is only possible in the central visual field.
(Figure: perception zones of the left eye.)
57. Vertical and Horizontal Fields of View
(Figure: binocular vision in the central overlap of the two eyes, monocular vision at the sides; visual limit of each eye ≈ 94°; normal viewing fields of the left (L) and right (R) eyes.)
58. Vertical and Horizontal Fields of View
Horizontal field of view
• The central field of vision for most people covers an angle of between 50° and 60° (objects are recognized).
• Within this angle, both eyes observe an object simultaneously.
• This creates a central field of greater magnitude than that possible by each eye separately.
• This central field of vision is termed the “binocular field”; within this field:
– images are sharp;
– depth perception occurs;
– colour discrimination is possible.
Vertical field of view
• The typical line of sight is considered horizontal, or 0°.
• A person’s natural or normal line of sight is a cone of view about 10° below the horizontal and, if sitting, approximately 15° below.
59. How Many Megapixels Is the Human Eye?
− According to scientist and photographer Dr. Roger Clark, the resolution of the human eye is 576 megapixels (www.curiosity.com, Nov 14, 2018). But what does this mean, really?
− A 576-megapixel resolution means that in order to create a screen with a picture so sharp and clear that you can't distinguish the individual pixels, you would have to pack 576 million pixels into an area the size of your field of view.
− To get to this number, Dr. Clark assumed optimal visual acuity across the field of view; that is, he assumed that your eyes are moving around the scene before you.
− But in a single snapshot-length glance, the resolution drops to a fraction of that: around 5–15 megapixels.
− That's because your eyes have a lot of flaws that wouldn't be acceptable in a camera.
• You only see high resolution in a very small area in the center of your vision, called the fovea.
• You have a blind spot where your optic nerve meets up with your retina.
− You move your eyes around a scene not only to take in more information but to correct for these imperfections in your visual system.
60. The Wrong Question
− Is the human eye really analogous to a camera?
− Since the human eye doesn’t see in pixels at all, it’s pretty hard to compare it to a digital display.
− Really, though, the megapixel resolution of your eyes is the wrong question.
− The eye isn't a camera lens, taking snapshots to save in your memory bank.
− It's more like a detective, collecting clues from your surrounding environment, then taking them back to the brain to put the pieces together and form a complete picture.
− There's certainly a screen resolution at which our eyes can no longer distinguish pixels (and according to some, it already exists), but when it comes to our daily visual experience, talking in megapixels is way too simple.
66. Hue, Saturation and Luminance
• Hue ⇒ a measure of the colour.
− Sometimes called “chroma phase”.
• Saturation ⇒ a measure of colour intensity.
− Sometimes simply called “color intensity”.
• Luminance (luminosity; intensity; gray level) ⇒ a measure of brightness.
− Sometimes simply called “brightness” or “lightness” (!?).
• Hue and saturation together ⇒ chrominance (chromaticity).
67. Hue, Saturation and Luminance: Hue
(Figure: spectral power distributions (number of photons vs. wavelength) whose mean wavelength shifts from blue through green to yellow; the mean wavelength determines the perceived hue.)
Hue is an attribute associated with the dominant wavelength in a mixture of light waves.
− Hue represents the dominant color as perceived by an observer.
− Thus, when we call an object red, orange, or yellow, we are referring to its hue.
68. Hue, Saturation and Luminance: Saturation
(Figure: spectral distributions of low, medium, and high variance; the narrower the distribution, the higher the saturation.)
Saturation refers to the relative purity or the amount of white light mixed with a hue.
− The pure spectrum colors are fully saturated.
− Colors such as pink (red and white) and lavender (violet and white) are less saturated, with the degree of saturation being inversely proportional to the amount of white light added.
69. Hue, Saturation and Luminance: Luminance
(Figure: bright vs. dark spectral distributions; the area under the curve corresponds to lightness.)
Luminance (luminosity; intensity; gray level) ⇒ a measure of brightness.
− It is a measure of the amount of energy that an observer perceives from a light source (visible spectrum).
− Sometimes simply called “brightness” or “lightness” (!?).
70. Hue is an attribute associated with the dominant wavelength in a mixture of light waves.
− Hue represents dominant color as perceived by an observer.
− Thus, when we call an object red, orange, or yellow, we are referring to its hue.
Saturation refers to the relative purity or the amount of white light mixed with a hue.
− The pure spectrum colors are fully saturated.
− Colors such as pink (red and white) and lavender (violet and white) are less saturated, with the degree of
saturation being inversely proportional to the amount of white light added.
Luminance (Luminosity) (Intensity (Gray Level)) is a measure of brightness
− It is a measure of the amount of energy that an observer perceives from a light source.
Hue and saturation taken together are called chrominance or chromaticity and, therefore, a color may be
characterized by its brightness and chromaticity.
Hue, Saturation and Luminance
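As a rough illustration of these attributes, the snippet below converts a pure red and a pink (red mixed with white) to the HSV model using Python's standard colorsys module. HSV is a related cylindrical model used here purely for illustration, not the broadcast chrominance representation:

```python
import colorsys

# Mixing white into red leaves the hue unchanged but lowers saturation.
for name, rgb in [("pure red", (1.0, 0.0, 0.0)), ("pink (red + white)", (1.0, 0.6, 0.6))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name}: hue={h * 360:.0f} deg, saturation={s:.2f}, value={v:.2f}")
```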
71. Brightness vs. Luminance
− Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W).
− Luminous flux, measured in lumens (lm), is a measure of the amount of energy that an observer perceives from a light source.
• For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminous flux would be almost zero.
− Brightness is a subjective descriptor that is practically impossible (difficult) to measure.
• It embodies the achromatic notion (idea) of intensity (gray level), and is one of the key factors in describing color sensation.
• In a video signal, brightness corresponds to voltage (volts).
72. Brightness vs. Luminance
− Cone vision is called photopic or bright-light vision.
− Objects that appear brightly colored in daylight appear as colorless forms in moonlight because only the rods are stimulated. This phenomenon is known as scotopic or dim-light vision.
− The range of light intensity levels to which the human visual system can adapt is enormous (on the order of 10^10) from the scotopic threshold to the glare limit.
− Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye.
− The long solid curve represents the range of intensities to which the visual system can adapt.
− In photopic vision alone, the range is about 10^6.
− The transition from scotopic to photopic vision is gradual, over the approximate range from 0.001 to 0.1 millilambert (−3 to −1 in the log scale), as the double branches of the adaptation curve in this range show.
− 1 lambert (L) = 1/π candela per square centimetre (0.3183 cd/cm²) = 10000/π cd/m².
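The logarithmic relationship mentioned above is often written in Weber–Fechner form; as a sketch (the gain k and reference intensity I_0 are empirical constants, assumed here rather than taken from the slides):

$$ B = k \,\log_{10}\!\left(\frac{I}{I_0}\right) $$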
73. Brightness Adaptation
− The key point in interpreting the impressive dynamic range depicted in the figure is that the visual system cannot operate over such a range simultaneously. Rather, it accomplishes this large variation by changing its overall sensitivity, a phenomenon known as brightness adaptation.
− The total range of distinct intensity levels the eye can discriminate simultaneously is rather small when compared with the total adaptation range.
− For a given set of conditions, the current sensitivity level of the visual system is called the brightness adaptation level, which may correspond, for example, to brightness Ba.
− The short intersecting curve represents the range of subjective brightness that the eye can perceive when adapted to this level.
• This range is rather restricted, having a level Bb at, and below which, all stimuli are perceived as indistinguishable blacks.
• The upper portion of the curve is not actually restricted but, if extended too far, loses its meaning, because much higher intensities would simply raise the adaptation level above Ba.
81. Additive vs. Subtractive Color Mixing
Subtractive color mixing
− The paint absorbs (subtracts out) wavelengths, and the color you see consists of the wavelengths that were reflected back to you (not absorbed).
Additive color mixing
− The wavelengths are added together, so the final color you see is the sum of the wavelengths.
(Figure: white light decomposed into red, green and blue under additive and subtractive mixing.)
83. Subtractive Primary Colours
Subtractive primary colours: magenta, yellow & cyan.
− A subtractive color model explains the mixing of a limited set of dyes, inks, or paint pigments to create a wider range of colors, each the result of partially or completely subtracting (that is, absorbing) some wavelengths of light and not others.
− The color that a surface displays depends on which parts of the visible spectrum are not absorbed and therefore remain visible.
85. Additive vs. Subtractive Color Primaries
− All colour images can be broken down into 3 primary colours.
− Subtractive primaries: magenta, yellow & cyan.
− Additive primaries: red, green & blue.
86. Secondary and Tertiary Colours
− Secondary additive colours: cyan, yellow, magenta.
− Secondary subtractive colours: red, green, blue.
• Secondary additive colours are primary subtractive colours and vice versa.
− Additive tertiary: white.
− Subtractive tertiary: black.
87. Using Subtractive and Additive Primaries
Using subtractive primaries:
• Colour printers have cyan, magenta & yellow pigments.
• Black is often included.
Using additive primaries:
• The colour primaries are red, green & blue.
• Film and drama set lighting uses additive primaries.
• Video uses additive primaries.
The camera splits the image into 3 primaries; the television builds the image from 3 primaries.
88. Using Subtractive Primaries
(Figure: an image separated into its C, M, and Y components, alongside its R, G, and B components.)
89. Colour Circle (Colour Wheel)
− The color circle (color wheel) originated with Sir Isaac Newton, who in the seventeenth century created its first form by joining the ends of the color spectrum.
− The color circle is a visual representation of colors arranged according to the chromatic relationships between them.
90. Colour Circle (Colour Wheel)
− Based on the color wheel, the proportion of any color can be increased by:
• raising the proportion of the two immediately adjacent colors;
• decreasing the amount of the opposite (or complementary) color in the image;
• decreasing the percentage of the two colors adjacent to the complement.
(Example: to increase magenta, add red and blue, remove green, or remove cyan and yellow.)
91. Colour Circle (Colour Wheel)
− Suppose, for instance, that there is too much magenta in an RGB image. It can be decreased:
• by removing both red and blue;
• by adding green.
92. Color Temperature (Recall)
– The spectral distribution of light emitted from a piece of carbon (a black body, which absorbs all radiation without transmission or reflection) is determined only by its temperature.
– When heated above a certain temperature, carbon will start glowing and emit a color spectrum particular to that temperature.
– This discovery led researchers to use the temperature of heated carbon as a reference to describe different spectra of light.
– This is called color temperature.
96. Emission and Reflectance Spectra
− For any given object, we can measure its reflectance (or emission) spectrum S(λ), and use that to precisely identify a color.
− If we can reproduce the spectrum, we can certainly reproduce the color!
− The sunlight reflected from a point on a lemon might have a reflectance spectrum S(λ) like the one in the figure.
(Figure: the illuminant's emission spectrum and the lemon's reflectance spectrum.)
98. Human Cone Sensitivity (recall)
− The highest point on each curve is called the “peak wavelength”, indicating the wavelength of radiation to which the cone is most sensitive.
(Figure: normalized human cone sensitivity: rod cells and the S, M, and L cones; S: 420–440 nm, ~2%; M: 534–545 nm, ~33%; L: 564–580 nm, ~65%.)
99. Normalized Excitation of the S, M and L Cones
Ex: cone excitation for a point on the lemon.
− By looking at the normalized areas under the curves, we can see how much the radiation reflected from the real lemon excites each of the cones.
− In this case, the normalized excitations of the S, M, and L cones are 0.02, 0.12, and 0.16 respectively.
(Figure: the lemon's reflectance spectrum overlaid on the cone sensitivity curves.)
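The normalized excitations above are, in effect, areas under the product of the stimulus spectrum and each cone's sensitivity curve. The sketch below illustrates that computation; the Gaussian sensitivity curves and the "lemon" reflectance used here are toy assumptions, not measured data:

```python
import numpy as np

lam = np.linspace(380, 780, 401)                  # wavelength axis, nm

def gauss(peak, width):                           # toy cone sensitivity curve
    return np.exp(-0.5 * ((lam - peak) / width) ** 2)

S_cone, M_cone, L_cone = gauss(430, 25), gauss(540, 35), gauss(570, 40)
lemon = np.clip((lam - 480) / 150, 0, 1)          # toy yellowish reflectance

for name, cone in [("S", S_cone), ("M", M_cone), ("L", L_cone)]:
    # Area under (stimulus x sensitivity), normalized by the cone's own area.
    excitation = np.trapz(lemon * cone, lam) / np.trapz(cone, lam)
    print(f"{name}-cone normalized excitation: {excitation:.2f}")
```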
100. Radiometry and Photometry
Radiometry
– The science of measuring light in any portion of the electromagnetic spectrum, including the infrared, visible, and ultraviolet regions.
– Covers wavelengths from 0.01 to 1000 micrometers (10 nm to 1 mm). (Note: micro = 10^-6, nano = 10^-9.)
Photometry
– Photometry is like radiometry, except that it weights everything by the sensitivity of the human eye.
– Deals only with the visible spectrum (the visible band), a wavelength range of about 380 to 780 nanometers.
– It does not deal with the perception of color itself, but rather with the perceived strength of various wavelengths.
102. Radiometry and Photometry

Radiant Flux (Radiant Power): the radiant energy emitted, reflected, transmitted or received by an object, per unit time. Defined over wavelengths from 0.01 to 1000 μm, covering the ultraviolet (UV), visible, and infrared (IR) regions. Unit: watt (W, or J/s).

Radiant Intensity: the radiant flux per unit solid angle; it can likewise apply to emitted, transmitted, reflected or received radiation. Unit: W/sr.

Luminous Flux (Luminous Power): a measure of the total amount of visible light emitted by a light source; the emitted electromagnetic waves are weighted according to the “luminosity function”, a model of the human eye’s sensitivity to various wavelengths. Unit: lumen (lm).

Luminous Intensity: the quantity of visible light emitted by a light source in a given direction per unit solid angle. Unit: candela (1 cd = 1 lm/sr).

Illuminance: the amount of light (luminous flux) falling on a surface. Units: lux (lumens per square meter, 1 lx = 1 lm/m²) or foot-candle (lumens per square foot, 1 fc = 1 lm/ft²).

Luminance: the luminous intensity reflected or emitted from an object per unit area in a specific direction (a measure of the flux emitted from, or reflected by, a relatively flat and uniform surface). Unit: candela per square meter (cd/m², or nit).
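One practical relationship among these units: illuminance from a point source falls off with the square of distance, E = I / d². A minimal sketch, assuming the surface directly faces the source and an illustrative 1000 cd intensity:

```python
# E [lux] = I [candela] / d^2 [m^2]; 1 lx is roughly 0.0929 fc.
def illuminance_lux(intensity_cd: float, distance_m: float) -> float:
    return intensity_cd / distance_m ** 2

I = 1000.0                                  # assumed luminous intensity, cd
for d in (1.0, 2.0, 4.0):
    E = illuminance_lux(I, d)
    print(f"d = {d} m: {E:.0f} lx ({E * 0.0929:.1f} fc)")
```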
104. Negative and Positive Pictures
(Figure: monochrome positive and negative pictures; color positive and negative pictures.)
105. Negative and Reversal Films
− Negative film (also called “print” film): the image is inverted during the scanning or optical printing process to get the correct colors.
− Reversal film: yields a positive image directly.
106. Negative and Reversal Films
− Negative film is printed onto photographic paper to create printed positive images, or converted to positive print film for projection (film development).
(Figure: a 2.40:1 negative image and the corresponding positive image on print film, with the optical soundtrack along the edge of the print.)
107. Negative and Reversal Films
Negative film: offers high exposure latitude, but does not handle underexposure (dark areas) very well.
• More natural and softer colors than positive films.
• Allows a much greater latitude of exposure and dynamic range.
• Usually has less contrast, but a wider dynamic range, than the final printed positive images.
• The contrast typically increases when it is printed (contrast may be adjusted at the time of scanning or in post-processing).
Reversal film: offers limited exposure latitude, and does not handle overexposure (highlight areas) well.
• Rich, saturated, vivid colors.
• Strong contrast.
• Fine grain (almost like digital images), giving higher resolution and better sharpness.
• Faster (easier) processing.
• Cheaper.
110. Film Gauge (Width of the Film)
Four gauges are commonly in use for camera films: Super 8, 16 mm, 35 mm, and 65 mm.
– 35 mm is the most popular for feature films, commercials and US television.
• It can be printed to 35 mm print film, or scanned or transferred on a telecine (film to video).
• Super 35 uses the space reserved for the soundtrack.
– 16 mm film is typically supplied in single-perforated format, except for use in high-speed cameras, which use double-perforated film.
• Super 16 uses the space reserved for the soundtrack.
• The Super 16 format is typically used for low- to medium-budget feature films, where it can be blown up to 35 mm release prints (for shooting on a lower budget compared to 35 mm film).
• The Super 16 format is also widely used for television production, where its aspect ratio fits the 16:9 wide-screen format well.
– Super 8 is available as both negative film and reversal film, supplied in self-contained cartridges.
– The 65 mm format is used as a camera film gauge for making prints on 70 mm print film for widescreen presentation, such as IMAX and OMNIMAX.
111. Soundtrack on Film
– A soundtrack on film is often identified by a continuous stripe running along the length of the film. It looks considerably different from the film picture frames.
– The stripe may be a reddish-brown color (a magnetic, or “mag,” soundtrack).
– It may also look like two strips that contain similar wavy forms (a variable-area optical soundtrack), or it may look like a gray strip of varying darkness (a variable-density optical soundtrack).
112. Review of 16 mm Film Standards
(Figure: 16 mm film compared with Super 16 mm film, which uses the area otherwise reserved for the optical soundtrack.)
113. Review of 16 mm Film Standards (continued)
(Figure: Super 16 mm film dimensions.)
114. Raster Comparison
− HD-CIF: 1920 × 1080, 24p/50 Hz/60 Hz (HD: High Definition; CIF: Common Intermediate Format), roughly “2k × 1k”.
− 2K film (open gate): 2048 × 1540, 24 fps (for American widescreen 1.85:1, about 1800 × 1000).
− PAL video (SD, Standard Definition): 720 × 576, 625/50.
− NTSC video (SD): 720 × 480, 525/60.
− Super 16 mm film, for comparison.
115. Review of 35 mm Film
− Academy ratio: the full picture shows the 1.37:1 aspect ratio; the dotted lines show the border of the very similar 1.33:1 (4:3) ratio.
− 1.85:1 (known as “widescreen” or “flat”), United States; very similar to 1.78:1 (16:9).
− 2.40:1 (“Scope” or “CinemaScope”), United States.
− 1.66:1, widescreen, Europe.
116. Review of 35 mm Film
(Figure: 35 mm print with the optical soundtrack included.)
118. Review of 35 mm Film: VistaVision (8-perf)
– VistaVision is a 35 mm horizontal format with an eight-perforation pulldown (across), typically used for high-quality background plates in special-effects work rather than for entire films.
– The camera aperture is approximately 1.5:1 (37.7 × 25.2 mm).
(Figure: the 2.40:1 frame is outlined in blue.)
119. Review of 35 mm Film
(Figure: 35 mm full frame (1.5:1) and 35 mm Academy ratio (1.375:1).)
120. 65 mm Film
– Images made on 65 mm film have a 2.2:1 aspect ratio.
– For projection, the original 65 mm film is printed on 70 mm film.
– The additional 5 mm of 70 mm film were for four magnetic strips holding six tracks of stereophonic sound; this was once necessary to accommodate the six magnetic soundtracks on the edges of the film.
– Today a double-system sound setup is used instead, with separate CDs carrying 6-track sound controlled by a timecode printed on the film.
(Figure: 70 mm film, i.e. 70 mm widescreen.)
121. 65 mm IMAX Film
IMAX and IMAX DOME (formerly known as OMNIMAX) productions use 65 and 70 mm film, but with a horizontal image and a 15-perforation pulldown (across) for very large-screen shows.
– IMAX DOME films are shot with the same cameras and lenses, but are projected onto a domed screen through a fisheye lens. The screen itself is tilted somewhat toward the audience, who sit in reclining chairs arranged on a steep slope.
– So, two 70 mm formats are in current use:
• 70 mm widescreen at 2.2:1
• IMAX 70 mm at 1.43:1
– Both are projected onto much larger screens than 35 mm formats.
(Figure: the IMAX 70 mm format (1.43:1) and 70 mm widescreen (2.2:1).)
122. How Many Pictures Are Needed in One Second? At least 48 pictures/second.
Persistence of vision
− The phenomenon of the eye by which an afterimage is thought to persist for approximately one twenty-fifth of a second on the retina.
Continuity limit
− At 24 pictures/second we have natural continuity for 90% of movements (but we have flicker).
Flicker limit
− Flicker occurs when there is a low refresh rate, allowing the brightness to drop for time intervals that are long enough to be noticed by a human eye (while one picture changes to the next, the screen is dark).
(Figure: brightness vs. time, with dark intervals every t = 1/48 s.)
124. Film Exposure and Projection
− 24 frames per second (fps): 1/24 of a second per frame, including film exposure and pull-down.
• Exposure time: 1/48 s.
• Pull-down time: 1/48 s.
(Figure: rotary film shutter.)
127. Film Exposure and Projection
• Film is projected using double exposure: each frame is exposed (shown) twice, which raises the flicker rate from 24 to 48 flashes per second.
(Figure: rotary film shutter.)
130. Film Digitalization and Recording: 4% Speed Change
• To show 24 fps film at 25 fps, the video is sped up by about 4% (104 min in the cinema becomes 100 min on TV); this is the 50 Hz alternative to the 3:2 pull-down used in 60 Hz systems.
• The speed difference is not noticeable on playback.
131. The 24p Video System
− The 24p system is the first isotropic (global) video production format; it runs at the film frame rate of 24 fps with progressive image capture.
− Originally, 24p was used in the non-linear editing of film-originated material.
− Today, 24p formats are increasingly used for:
• aesthetic reasons in image acquisition;
• delivering film-like motion characteristics.
− Some vendors advertise 24p products as a cheaper alternative to film acquisition.
133. Film Camera Gates vs. Digital Sensors (Actual Size)
(APS: Advanced Photo System (discontinued); H = high-definition, C = classic, P = panorama.)
135. Mechanism of the Human Eye
– Images (light) seen with our eyes are directed to and projected onto the eye’s retina (which consists of several million photosensitive cells).
– The retina reacts to light and converts it into a very small amount of electrical charge.
– These electrical charges are then sent to the brain through the optic nerve system.
(Figure: light enters the eye through the pupil.)
136. Image Sensors
– Image sensors have photo-sensors that work in a similar way to our retina’s photosensitive cells, converting light into a signal charge.
– However, the charge readout method is quite different!
138. One-Chip Imaging System
– Each pixel within the image sensor samples the intensity of just one primary color (red, green or blue). In order to provide full color images from each pixel of the imager, the two other primary colors must be created electronically.
– These missing color components are mathematically calculated, or interpolated, in the RGB color processor positioned after the image sensor.
– The easiest way to calculate a missing color component is to add the values of that color component from two surrounding pixels and divide by two. For example, for the blue component missing at pixel G22:
B22 = (B21 + B23) / 2
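A toy version of that averaging rule (the pixel labels follow the example above; this is a one-line illustration, not a full Bayer demosaic):

```python
# Interpolate the missing blue component at the green site G22
# from its horizontal blue neighbours: B22 = (B21 + B23) / 2.
raw = {"B21": 40, "B23": 60}        # assumed raw blue samples
B22 = (raw["B21"] + raw["B23"]) / 2
print(B22)                          # 50.0
```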
140. Three-Chip Imaging System
− The dichroic prism system provides more accurate color filtering than the color filter array of a one-chip system.
− Capturing the red, green, and blue signals with individual imagers generates purer color reproduction.
− Since the image sensing system captures three times more information than a one-chip system, it allows for:
• a much wider dynamic range;
• a higher horizontal resolution.
142. Image Sensor Size
– Image sensor size is measured diagonally across the imager’s photosensitive area, from corner to corner.
– A larger image sensor size generally translates into better image capture.
– This is because a larger photosensitive area can be used for each pixel.
The benefits of larger image sensors
1. Higher sensitivity
2. Less smear
3. Better signal-to-noise characteristics
4. Use of better lens optics
5. Wider dynamic range
144. Image Sensor Size
(APS: Advanced Photo System (discontinued); H = high-definition, C = classic, P = panorama.)
Crop Factor = Diagonal(35 mm full frame) / Diagonal(sensor)
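A minimal sketch of this formula, using the 35 mm full-frame diagonal as reference and approximate sensor dimensions (assumed values) to compute full-frame-equivalent focal lengths:

```python
import math

def diagonal(w_mm: float, h_mm: float) -> float:
    return math.hypot(w_mm, h_mm)

FULL_FRAME = diagonal(36.0, 24.0)                 # ~43.3 mm diagonal
sensors = {"2/3-inch": (8.8, 6.6), "APS-C": (23.6, 15.7)}   # assumed sizes
for name, (w, h) in sensors.items():
    crop = FULL_FRAME / diagonal(w, h)
    print(f"{name}: crop factor {crop:.2f}; "
          f"a 35 mm lens views like {35 * crop:.0f} mm on full frame")
```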
146. Image Sensor Size
– The term Full Frame (FF) is used by users of Digital Single-Lens Reflex (DSLR) cameras as shorthand for an image sensor format which is the same size as the 35 mm film format (36 mm × 24 mm).
147. Digital Single-Lens Reflex (DSLR) Cameras
1. Matte focusing screen: a screen onto which the light passing through the lens is projected.
2. Condensing lens: a lens that concentrates the incoming light.
3. Pentaprism: produces a correctly oriented, right-side-up image and projects it to the viewfinder eyepiece.
4. AF (autofocus) sensor: used to accomplish correct autofocus.
5. Viewfinder eyepiece: allows us to see what will be recorded on the sensor.
6. LCD screen: displays the photos stored on the memory card, the settings, and, in live-view mode, what will be recorded on the image sensor.
7. Image sensor.
8. AE (exposure) sensor: provides exposure information and adjusts the exposure settings after calculations under different situations.
9. Sub mirror: reflects the light passing through the semi-transparent area of the main mirror down to the autofocus (AF) sensor.
10. Main mirror: reflects incoming light into the viewfinder compartment. It must sit at an angle of exactly 45 degrees; a small semi-transparent area on it facilitates autofocus.
148. Image Sensor Size
• An old 2/3″ tube camera had a 4:3 active area of about 8.8 mm × 6.6 mm, giving an 11 mm diagonal.
• This 4:3, 11 mm diagonal is the size now used to denote a modern 2/3″ sensor.
(Figure: a Vidicon tube of 2/3 inch outer diameter has an active-area diagonal of roughly 2/3 × 2/3 inch ≈ 11 mm; a 1″ tube, roughly 1 × 2/3 inch ≈ 16 mm.)
149. Image Sensor Size
– It’s confusing!!
– But the same 2/3″ lenses designed for tube cameras in the 1950s can still be used today on a modern 2/3″ video camera, and will give the same field of view today as they did back then.
– This is why some manufacturers are now using the term “1 inch type”, as this is the active area equivalent to that of an old 1″-diameter Vidicon/Saticon/Plumbicon tube from the 1950s.
For comparison:
– 1/3″ → 6 mm diagonal
– 1/2″ → 8 mm diagonal
– 2/3″ → 11 mm diagonal
– 1″ → 16 mm diagonal
– 4/3″ → 22 mm diagonal
– A camera with a Super 35 mm sensor would be equivalent to approximately 35–40 mm.
– APS-C would be approximately 30 mm.
150. Focal Length and Depth of Field
(Figure 1: wider lenses for HD and UHD.)
153. Sensor Size and Depth of Field
– Using the same focal length (35 mm) and aperture (f/8), the larger the sensor, the larger the depth of field; the smaller the sensor, the narrower the depth of field.
(Figure 4: the different fields of view that result.)
154. Sensor Size and Depth of Field
– For a given focal length and aperture, and a specific subject and distance (i.e. the same object from the same distance), different sensor sizes produce different frame filling, or field of view.
– If we instead consider the same field of view, the depth of field will be narrower in cameras with larger sensors.
– To achieve the same field of view, a larger sensor requires the frame to be filled with the subject.
• Solutions: get closer to the subject, or use a longer focal length (zoom).
– Filling the frame with a subject of the same size from the same distance ⇒ less depth of field.
155. Camera Sensor Size vs. Megapixels
– Camera sensor size and resolution aren’t necessarily related to one another.
• A 20 MP phone camera and a 20 MP full-frame camera both have 20 million pixels and the same resolution.
⇒ However, they don’t have the same image quality.
– A larger sensor allows larger pixels relative to a smaller sensor with the same resolution.
⇒ The larger pixels on the full-frame camera are more efficient at gathering light.
⇒ They are not only more sensitive but have better dynamic range, allowing you to get tack-sharp photos.
156. Crop Factor (Camera Sensor Size) and Lens Focal Length Product
Crop factor of the sensor × focal length of the lens ⇒ the equivalent view as if you were using a 35 mm (full-frame) camera.
– The smaller sensor cuts down on the view provided by the 35 mm lens.
– This can be an advantage of smaller sensors when shooting a subject from afar.
– Ex: 2 × 200 mm = 1 × 400 mm
⇒ A 200 mm lens on a Micro 4/3 body (2.0× crop factor) has the reach of a 400 mm lens on a full-frame camera, and weighs quite a bit less.
157. CCD Image Sensor
– Charge transfer from the photo sensor to the vertical CCD.
– Like water draining from a dam.
158. CCD Image Sensor
– The same charge transfer, followed by charge-to-voltage conversion at the output.
159. CCD Image Sensor
– Charge is transferred by the CCD in a bucket-brigade fashion.
– CCD image sensors get their name from the vertical and horizontal shift registers, which are Charge-Coupled Devices that act as bucket brigades.
(Figure: charge packets passed from stage to stage.)
160. CCD and CMOS Image Sensors
CCD and CMOS sensors perform the same steps, but at different locations and in a different sequence.
161. CMOS Image Sensors
– CMOS sensors have an amplifier at each pixel.
– The charge is first converted to a voltage and amplified right at the pixel.
162. Analog Noise and Fixed Pattern Noise
Analog noise
– Where charge is transmitted in the form of an analog signal, the signal will pick up a certain degree of external noise during its travel. Noise increases in proportion to the travel distance.
Fixed pattern noise (FPN)
– CMOS sensors have an amplifier at each pixel.
– It would be unreasonable to expect all of these amplifiers to be exactly equivalent (due to the production process).
– This non-uniformity among amplifiers results in a type of interference known as fixed pattern noise.
– Unlike conventional video noise, which behaves randomly, fixed pattern noise creates a permanent, unwanted texture that can be especially visible in dark scenes.
– Fortunately, this problem can be corrected by incorporating CDS (correlated double sampling) circuits to cancel the noise and restore the original signal.
– The “reset switch” in each pixel also creates FPN.
(Figure: noise patterns for CCD (left) and CMOS (right).)
165. Analog Correlated Double Sampling
– Active-pixel CMOS sensors use a “reset switch” in each pixel to drain the accumulated charge of the previous video field, in preparation for the next video field.
– Unfortunately, the draining process is not perfect: some electrons always remain in the image sensing area.
– These electrons represent switching noise, which can become part of the video signal.
– Even worse, this noise is of the fixed-pattern type. Unlike conventional video noise, which behaves randomly, fixed pattern noise creates a permanent, unwanted texture that can be especially visible in dark scenes.
– Modern CMOS sensors combat fixed pattern noise with correlated double sampling (CDS).
– CMOS image sensors conduct charge-to-voltage conversion twice for every pixel, and both of these voltages are amplified.
– The column circuit subtracts the noise-only voltage from the signal-mixed-with-noise voltage to produce the output voltage.
⇒ Noise separation and cancellation.
166. Digital Correlated Double Sampling
− Digital CDS noise cancellation works by measuring the noise prior to conversion and then canceling it after the conversion:
• The pixel outputs the amplified noise voltage.
• The column ADC converts the noise voltage to digital.
• The pixel outputs the amplified signal-with-noise voltage.
• The column ADC converts the signal-with-noise voltage to digital.
• The column ADC subtracts the digital noise value from the digital signal-with-noise value to create the digital output value.
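The subtraction at the heart of CDS can be sketched in a few lines; the per-pixel offsets below stand in for fixed pattern noise (assumed values, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
fixed_offset = rng.normal(0.0, 5.0, size=8)  # per-pixel amplifier offsets (FPN)
signal = np.linspace(10, 80, 8)              # true pixel values

reset_sample = fixed_offset                  # noise-only measurement
signal_sample = signal + fixed_offset        # signal mixed with noise
output = signal_sample - reset_sample        # CDS: the offsets cancel exactly

print(np.allclose(output, signal))           # True
```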
167. Exmor™ Noise Reduction Technology
(Figure: conventional analog CDS compared with Exmor's digital CDS.)
– As a result, camcorders with Exmor technology offer lower noise than those that use conventional HD CMOS sensors.
– This is especially significant under low-light conditions, where Exmor-equipped cameras perform very well.
168. Conclusion
At the current state of development, CMOS and CCD sensors both deserve a place in broadcast and professional video cameras.
– CMOS is particularly outstanding where issues of power consumption, systemization and processing speed are most important.
– CCDs excel where the images will be subjected to the most critical evaluation.
– Recent CMOS sensors deliver:
• improved global shutter;
• low dark and spatial noise;
• good image quality in low-light conditions;
• higher quantum efficiency.
Together with their existing advantages in speed and cost, this makes CMOS sensors suitable for many vision applications.
169. Electronic Shutter
– When a shutter speed is selected with the electronic shutter (e.g., 1/500 second), only the electrons accumulated within this period are read out to the vertical register.
– All the electrons accumulated before this period (the gray triangle in the figure) are discarded to the CCD’s N-substrate, an area within the CCD used to dispose of such unnecessary electrons.
– Discarding electrons until the 1/500-second period commences means that only movement captured during the shutter period contributes to the image, effectively reducing the picture blur of fast-moving objects.
(Figure: charge accumulation across successive 1/500 s shutter periods.)
172. Field, Frame, Progressive, Interlace
− A continuous scan is called a progressive scan.
− Progressive scans tend to flicker at 25 fps.
− Television splits each frame into two scans:
• one for the odd lines and another for the even lines;
• each interlaced scan is called a field;
• therefore odd lines (odd field) + even lines (even field) = 1 frame.
− This is called an interlaced scan.
Interlace benefits:
I. The bandwidth needed for odd lines (odd field) + even lines (even field) equals the bandwidth needed for one progressive frame (e.g. 50i vs. 25p).
II. Interlaced scans flicker a lot less than progressive scans (e.g. 50i vs. 25p).
(Figure: 1st field (odd) + 2nd field (even) = one frame.)
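The odd/even split is easy to picture as array slicing; a minimal numpy sketch of splitting a frame into two fields and weaving them back together (toy data):

```python
import numpy as np

frame = np.arange(1, 7)[:, None] * np.ones((1, 4))   # toy frame of 6 lines
odd_field, even_field = frame[0::2], frame[1::2]     # lines 1,3,5 and 2,4,6

woven = np.empty_like(frame)
woven[0::2], woven[1::2] = odd_field, even_field     # reassemble the frame
print(np.array_equal(woven, frame))                  # True
```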
182. CRT Display
1. Electron guns
2. Electron beams
3. Focusing coils
4. Deflection coils
5. Anode connection
6. Shadow mask
7. Phosphor layer
8. Close-up of the phosphor-coated inner side of the screen
(Figure: the even field and the odd field combine into one frame.)
183. Progressive & Interlace CCD
During one readout cycle:
− A progressive CCD creates one picture frame (higher vertical resolution; twice the transfer rate of an interlace CCD; frame-rate charging, 1/25 sec; faster clocking of the horizontal shift register, since all lines are read out at once).
− An interlace CCD creates one interlace field (higher sensitivity; field-rate charging, 1/50 sec; default: field integration mode).
184. Frame Integration Mode for Creating Interlaced Video (50i)
High vertical resolution, high sensitivity, picture blur.
– Charge accumulates at the frame rate (1/25 sec).
– To create even fields, only the charges of the CCD’s even lines are read out.
– To create odd fields, only the charges of the CCD’s odd lines are read out.
185. Field Integration Mode for Creating Interlaced Video (50i)
Half the sensitivity, less vertical resolution, less picture blur.
– Charge accumulates at the field rate (1/50 sec).
– For an even field, lines B and C, D and E, and F and G are added together.
– For an odd field, lines A and B, C and D, and E and F are added together.
186. Field Integration Mode for Creating Interlaced Video (50i)
– The field integration method reduces blur by shortening the charge accumulation period to the field rate (e.g., 1/50 second for PAL video).
– Shortening the accumulation period and alternating the lines read out to create even and odd fields reduces the accumulated charge to one half of that of the frame integration method.
– This results in reducing the sensitivity by one-half.
– After the charges are transferred to the vertical register, the charges from two adjacent photo-sites are added together to represent one pixel of the interlaced scanning line (see the sketch after this slide).
– Both even and odd fields are created by alternating the photo-site pairs used to create a scanning line.
– This method provides less vertical resolution than the frame integration mode (two adjacent pixels are averaged in the vertical direction).
Field integration has become the default method for all interlace video cameras, to capture pictures without image blur ⇒ field-rate charging (1/50 sec).
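A minimal sketch of the photo-site pairing described above, with lines A through G as toy charge values (assumed numbers):

```python
import numpy as np

lines = np.array([10, 20, 30, 40, 50, 60, 70])  # photo-site lines A..G

odd_field = lines[0:-1:2] + lines[1::2]         # A+B, C+D, E+F
even_field = lines[1:-1:2] + lines[2::2]        # B+C, D+E, F+G
print(odd_field, even_field)
```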
187. Scanning Techniques Pros and Cons
Progressive scan (25p)
− Delivers higher spatial resolution for a given frame size (better detail).
• Has the same (temporal) look as film.
• Good for post-production and transfer to film.
• No motion tear.
Interlaced scan (50i)
− Delivers higher temporal resolution for a given frame size (better motion portrayal).
• Has the same (temporal) look as video.
• Shooting is easier.
• Post-production on video is easier.
• Interlacing causes motion tears and a ‘video’ look.
193. Interlaced Frame (50i) and Progressive Frame (25p)
− Progressive (25p): delivers higher spatial resolution for a given frame size (better detail).
− Interlace (50i): delivers higher temporal resolution for a given frame size (better motion portrayal).
194. Scanning Techniques Pros and Cons
− Odd and even lines are in different places when there is fast motion.
(Figure: odd field, even field, and the combined odd + even frame, for no motion, motion, and fast motion.)
197. How Many Pictures Are Needed in One Second? At least 48 pictures/second.
Persistence of vision
− The phenomenon of the eye by which an afterimage is thought to persist for approximately one twenty-fifth of a second on the retina.
Continuity limit
− At 24 pictures/second we have natural continuity for 90% of movements (but we have flicker).
Flicker limit
− Flicker occurs when there is a low refresh rate, allowing the brightness to drop for time intervals that are long enough to be noticed by a human eye (while one picture changes to the next, the screen is dark).
(Figure: brightness vs. time, with dark intervals every t = 1/50 s.)
198. Flicker
− Flicker and judder are terms used to describe visual interruptions between successive fields of a displayed image. They affect both film and TV.
− If the update rate is too low, persistence of vision is unable to give the illusion of continuous motion.
− Flicker is caused by:
• slow update of motion information;
• the refresh rate of the display device;
• phosphor persistence vs. motion blur.
(Figure: brightness vs. time, t = 1/50 s.)
203. Judder
Judder definition: to shake and vibrate rapidly and with force.
Judder in TV:
• Judder looks like a jerky movement that is not smooth.
• It appears as jumps, shivering (sliding) and jerkiness.
• Judder makes camera movement look stuttered, and is especially noticeable in panning shots.
Judder causes:
• Judder usually results from aliasing between the sampling rate (in recording), the display rate, and the scene motion.
• Basically, if the displacement across the frame is too great compared to the capture frame rate, judder will occur.
• Judder is an inconsistent frame timing (some frames stay on the screen longer than others).
204. Judder
Judder from frame drops
– Frame drops can be caused by the motion interpolation feature.
• If the movement is too fast and the TV does not know how to interpolate it, it will simply repeat the previous frame another time. This causes judder.
– Frame drops can be caused by an app that is too slow.
• On some older TVs, the native apps are not very fast, so some have problems keeping up with the streaming video and might drop frames from time to time. This is usually rare, though.
– Frame drops can be caused by packet loss in video streaming.
(Figure: frame timing with and without a dropped frame, t = 1/50 s.)
205. Judder
Judder from 3:2 Pulldown
– Occurs when content recorded on film (24 fps) is shown on a television with a 60 Hz refresh rate.
– Software in the TV or DVD player detects the incoming signal and fills in the 36 missing frames per second by repeating
frames that your eye has already seen.
– To ensure that there are consistently 60 frames per second, the first frame is displayed on the TV screen 3 times
and the second frame 2 times, alternating in this pattern.
– Because alternating frames are not repeated a consistent number of times, the picture on the television screen is
actually a little jittery (this is called judder).
– Most of us don't notice judder because a second goes by very quickly and we are used to viewing films on
television with a 3:2 pulldown.
205
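The cadence is easy to see in code. Below is a minimal sketch (not from the slides; the function name is illustrative) that expands 24 film frames into a 60 Hz display sequence using the 3:2 repeat pattern described above:

def pulldown_32(frames):
    """Expand 24 fps film frames to a 60 Hz display sequence (3:2 cadence)."""
    displayed = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2  # 1st frame shown 3 times, 2nd twice, ...
        displayed.extend([frame] * repeats)
    return displayed

# One second of film (24 frames) becomes 12 * (3 + 2) = 60 displayed frames:
assert len(pulldown_32(list(range(24)))) == 60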
206. Judder
Judder from Fast Panning
– In film, a classical rule says: the minimum time is 7 seconds for a pan that crosses one horizontal field of view (HFOV) (this is
the lens HFOV, not the entire scene). It does not guarantee absence of judder.
– This guideline was considered independent of lens and sensor, but in truth other parameters do influence the answer.
– The time for a pan is affected by the following (see the sketch after this slide):
• Number of recorded frames per second: +1 stop frame rate ⇒ −1 stop pan time
• Number of degrees to pan: +1 stop pan angle ⇒ +1 stop pan time
• Focal length of the lens (tele and wide lenses): −1 stop HFOV ⇒ +1 stop pan time
• Sensor resolution: +1 stop sensor resolution (2K to 4K) ⇒ −1 stop pan time
• Hiding factor of motion blur: a shutter angle wider than 180° (slower shutter speed)
⇒ less action freezing ⇒ less judder
206
[Figure: slower shutter (more light, blurs motion) vs. faster shutter (less light, freezes motion)]
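A minimal sketch of the stop arithmetic above, assuming each stop simply doubles or halves the 7-second baseline (the function and its parameter names are illustrative, not from the slides):

BASELINE_PAN_TIME_S = 7.0  # classical rule: 7 s to pan across one HFOV

def pan_time(frame_rate_stops=0, pan_angle_stops=0, hfov_stops=0, resolution_stops=0):
    """Apply the stop rules: +1 stop pan angle or -1 stop HFOV doubles the time;
    +1 stop frame rate or +1 stop sensor resolution halves it."""
    stops = pan_angle_stops - hfov_stops - frame_rate_stops - resolution_stops
    return BASELINE_PAN_TIME_S * 2 ** stops

# Doubling the frame rate (one stop, e.g. 24 to 48 fps) halves the minimum pan time:
print(pan_time(frame_rate_stops=1))  # 3.5 s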
209. 209
Human Visual Acuity
− Human visual acuity is the spatial resolving capacity of the human eye (as a function of viewing distance)
⇒ the ability of the eye to see fine detail.
− Visual acuity is limited by:
• diffraction
• optical aberrations
• photoreceptor density in the eye
− For two points to be spatially discriminated, a complete cycle has to be taken into account.
− This is twice the spatial resolution capability of the human eye.
[Figure: two black points separated by a white point of equal diameter form one cycle; with 20/20 vision at d = 20 feet, 𝛼 = 1 arc minute ≈ 0.017°]
210. Human Visual Acuity
20/20 Vision
− The goal of testing eyesight ⇒ being able to resolve lines in characters that are separated by 1/60 of a degree
(1/60° = 1 arc minute).
− Since this resolution is typically assessed using an eye chart at a distance of 20 feet (6 m), this level of performance is
defined as 20/20 vision (or 6/6 vision in the metric system).
− 20/50 means one can only resolve detail at 20 feet that someone with 20/20 vision could resolve from 50 feet away: a person
with 20/50 vision must be 20 feet away to clearly see something that a person with normal (20/20) vision can see clearly
from a distance of 50 feet.
210
211. 211
Maximum Resolving Power of Eye
[Figure: foveal cones spaced 2.5 µm apart; one arc minute corresponds to about 4 µm on the retina]
− To perceive two objects as distinct ⇒ at least one unstimulated cone must lie between two stimulated cones.
− The cone density is greatest in the center of the retina, where central visual acuity is highest.
− In the center of the retina the cones are spaced only 2.5 µm apart.
− Cone spacing, together with physical effects such as diffraction and optical aberrations, limits the average minimum
threshold of resolution, restricting the minimum visual angle to one minute of arc.
− One minute of arc is 1/60 of a degree, or approximately 4 µm on the retina, which is somewhat more than the width of a cone.
− This corresponds to the maximum resolving power of the retina.
212. Viewing Angle Limit
Viewing Angle Limit, Minimum Visual Angle, Minimum Angle of Resolution (MAR)
− The minimum angle at which the human eye can distinguish two isolated points ⇒ about 0.5 to 1 minute of arc for a healthy eye
⇒ 1 minute of arc (for normal vision and with appropriate brightness and contrast values)
− Ex: at a 3 m viewing distance, 𝛼 = 1 arc minute subtends about 1 mm.
212
[Figure: 𝛼 = 1 arc minute ≈ 0.017° (1° = 60′); about 1 mm subtended at 3 m]
213. Viewers tend to perceive images with good resolution as sharp,
detailed, and above all, free of visible pixel structure.
213
Maximum Resolving Power of the Retina and Pixel Pitch
If we stand 1 m from the display, a pixel pitch of about 0.3 mm is at the
limit of visibility: smaller pixels cannot be resolved by the eye.
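A minimal sketch of this calculation, assuming the 1-arc-minute acuity limit from the previous slides (the function name is illustrative):

import math

def max_invisible_pixel_pitch_mm(viewing_distance_m, acuity_arcmin=1.0):
    """Largest pixel pitch the eye cannot resolve at the given distance."""
    angle_rad = math.radians(acuity_arcmin / 60.0)
    return viewing_distance_m * math.tan(angle_rad) * 1000.0

print(round(max_invisible_pixel_pitch_mm(1.0), 2))  # ~0.29 mm at 1 m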
214. − The thickness of the scanning beam is equal to the width of each line.
− The distance of the viewer from the screen and the acuity of the human eye have to be considered.
− The optimum viewing distance is found to be about six times the picture height, i.e. D/H = 6.
− At this distance, the line structure should just no longer be visible, i.e. the limit of the resolving power of the eye is
reached.
Ex: TV line numbers in SDTV
β = 2 tan⁻¹(H / 2D)
For D/H = 6 → β = 9.5273°
Minimum distinguishable line number = β/α = 9.5273 / (1/60) ≈ 571.6 lines
214
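A minimal sketch of the line-count formula above (names are illustrative):

import math

def distinguishable_lines(d_over_h, acuity_arcmin=1.0):
    """Scan lines resolvable at distance D = d_over_h * H, for acuity alpha."""
    beta_deg = math.degrees(2 * math.atan(0.5 / d_over_h))
    return beta_deg / (acuity_arcmin / 60.0)

print(round(distinguishable_lines(6), 1))  # ~571.6 lines at D = 6H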
215. − Fundamental TV research was done at the Japan Broadcasting Corporation (NHK).
− It showed that viewers position themselves so that the smallest detail subtends an angle of one arc minute (the limit for normal
vision).
− Closer than this, you can see scan lines/pixels; farther away, the picture is too small.
− Taking this result as a starting point, it was easy to calculate the optimal viewing distance for any scanning standard.
215
[Figure: optimal viewing angle and viewing distance — SD (4:3): distance 6 screen heights, 13° viewing angle; HD (16:9, 1080 lines): distance 3 screen heights, 32°; 4K (16:9, 2160 lines): distance 1.5 screen heights, 58°; minimum visual angle 𝛼 = 1 arc minute ≈ 0.017°]
Optimal Viewing Angle and Viewing Distance
216.
Image system | Reference          | Aspect ratio | Pixel aspect ratio | Optimal horizontal viewing angle | Optimal viewing distance
720 × 483    | Rec. ITU-R BT.601  | 4:3  | 0.88 | 11° | 7 H
640 × 480    | VGA                | 4:3  | 1    | 11° | 7 H
720 × 576    | Rec. ITU-R BT.601  | 4:3  | 1.07 | 13° | 6 H
1024 × 768   | XGA                | 4:3  | 1    | 17° | 4.4 H
1280 × 720   | Rec. ITU-R BT.1543 | 16:9 | 1    | 21° | 4.8 H
1400 × 1050  | SXGA+              | 4:3  | 1    | 23° | 3.1 H
1920 × 1080  | Rec. ITU-R BT.709  | 16:9 | 1    | 32° | 3.1 H
3840 × 2160  | Rec. ITU-R BT.1769 | 16:9 | 1    | 58° | 1.5 H
7680 × 4320  | Rec. ITU-R BT.1769 | 16:9 | 1    | 96° | 0.75 H
Proper viewing distance (D): D = 0.5H / tan(x), where x is half the vertical viewing angle
DHD ≈ 3H, D4K ≈ 1.5H
Ex: 50-inch TV (H ≈ 0.625 m)
DHD = 0.625 × 3.1 = 1.937 m
D4K = 0.625 × 1.5 = 0.937 m
D8K = 0.625 × 0.75 = 0.468 m
216
Optimal Viewing Angle and Viewing Distance
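A minimal sketch of the rule behind the table's distance column, assuming each scan line should subtend 1 arc minute at the eye (names are illustrative):

import math

def optimal_distance_in_heights(active_lines, acuity_arcmin=1.0):
    """D/H such that one scan line subtends `acuity_arcmin` at the eye."""
    vertical_angle_rad = math.radians(active_lines * acuity_arcmin / 60.0)
    return 0.5 / math.tan(vertical_angle_rad / 2)

for lines in (576, 1080, 2160, 4320):
    print(lines, round(optimal_distance_in_heights(lines), 2))
# 576 -> 5.96 H, 1080 -> 3.16 H, 2160 -> 1.54 H, 4320 -> 0.69 H
# (compare the table's 6 H, 3.1 H, 1.5 H and 0.75 H; the small differences
# at wide angles come from the flat-screen geometry)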
219. 219
Horizontal Fields of View
Horizontal Viewing Field of the Eye
[Figure: monocular vision — visual limit of the left eye 94°, visual limit of the right eye 94°; binocular vision in the central overlap]
• The central field of vision for most people covers an angle of between 50° and 60°.
• Filling this angle helps the viewer feel more as though they are within a scene as opposed to looking at it inside a rectangle.
• In effect:
higher resolution enhances the “sense of detail”;
wider viewing angles enhance the sense of “being there”
⇒ Both are needed to enhance the sense of realism.
[Figure: normal viewing field, about 90° (R and L eyes)]
220. Horizontal Fields of View
220
− In some documents, the central field of view is considered to be more than 60 degrees.
221. 221
Proper Viewing Angle for Each Format
[Figure: wider viewing angle ⇒ more immersive]
Sense of Realism with Enhancement of Resolution and Viewing Angle
− The human eye has a total horizontal field of vision of about 180 degrees, but scenes are mainly perceived and remembered
within the central 90 degrees of that field (where objects are recognized).
− Larger 4K UHD displays fill about 60 degrees of the horizontal field of vision at the correct viewing distance, dominating
the central field of vision to provide a more realistic, natural and immersive viewing experience, compared to about
30 degrees for conventional HD, where much of the central field falls outside the TV screen.
222. − A larger viewing angle (a larger image or a closer viewing distance) ⇒ more resolvable pixels.
− HD displays are typically out-resolved by the eye and can appear pixelated.
− 4K resolution is required to produce maximally sharp and seemingly continuous pixels for a majority of viewers.
222
Viewing Angle and Resolvable Pixels
The Health and Nutrition Examination Survey of 1972 demonstrated that 72.8 percent of the civilian non-institutionalized population 4 to 74 years of
age in the United States had distance visual acuity of at least 20/20 in their better eye “with usual correction” (using glasses and other visual aids).
223. − The diagram depicts a typical large-screen theater with a 70-foot (21.3 m) screen width.
− Although the IMAX, GSCA and other large-screen specifications recommend a minimum viewing distance of one screen
width (a 53° viewing angle), the seating rows in the diagram extend out to a viewing angle of 45°.
− Even then, note how the majority of viewers can resolve more than 2K resolution from every seat in the theater.
223
Example of Viewing Angle and Resolvable Pixels
[Figure: a typical large-screen theater with a 70-foot (21.3 m) screen width]
224. Pixel Density
Retina distance: the point at which the human eye can no longer see individual pixels; it varies with the pixels per inch of the display.
– At about half the Full HD retina distance, Ultra HD keeps the focus on the image, not the pixels.
– Ultra HD enables up-close viewing without seeing the pixels.
Pixels Per Inch (PPI) = Width in Pixels / Width in Inches = Height in Pixels / Height in Inches
224
[Figure: retina distance for HD vs. UHD (4K); 1 foot = 30.48 cm]
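A minimal sketch tying PPI to the retina distance, again assuming the 1-arc-minute acuity limit (function names are illustrative):

import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch from the pixel counts and the diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

def retina_distance_inches(ppi_value, acuity_arcmin=1.0):
    """Distance beyond which one pixel subtends less than the acuity limit."""
    pixel_pitch_in = 1.0 / ppi_value
    return pixel_pitch_in / math.tan(math.radians(acuity_arcmin / 60.0))

d = ppi(3840, 2160, 50)                             # ~88 PPI for a 50-inch UHD set
print(round(retina_distance_inches(d) * 2.54, 1))   # ~99 cm, i.e. about 1 m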
226. 226
Vertical Resolution
[Figure: test patterns at 33.5, 6.5 and 1.5 cycles per image height]
227. – The horizontal resolution of a video device is its ability to reproduce picture detail along the horizontal
direction of the image.
– It is expressed in TV line numbers, such as 1000 TV lines.
– The human eye is much more sensitive to luminance information than to color, and accordingly, from the
early days of video, emphasis has been put on the improvement of luminance detail.
– The reason that horizontal resolution is discussed more often than vertical resolution is that
horizontal resolution is a parameter that can vary greatly from device to device.
Horizontal Resolution
227
228. Horizontal Resolution
– Horizontal resolution is expressed as the number of resolvable lines within a screen length equal to the
screen height
⇒ for a 16:9 screen, only nine-sixteenths of the picture width is counted.
228
229. Horizontal Resolution Measurement
Method 1:
− It is usually measured by shooting a resolution chart and viewing this on a picture monitor.
– Each black or white line is counted as one line.
229
[Figure: resolution chart; horizontal resolution is determined by reading the calibration markings]
230. Horizontal Resolution Measurement
Method 2:
− By feeding this signal to a waveform monitor, horizontal resolution can be measured as
the maximum number of vertical black and white lines for which the white lines still exceed a video level of 5%.
– Measurement of horizontal resolution must be performed with gamma, aperture and detail set to ‘on’
and masking set to ‘off’.
230
• In this pictorial example, a square
waveform exists at the scan line
equal to 600 TV lines on the scale.
• At 700 TV lines on the scale, the
waveform begins to ‘roll off’, defining
areas of grey and black with white
on the outside edges of the
wedge; therefore, the TV line value
of the camera can be said to be
600 TV lines.
231. Vertical Resolution
– Vertical resolution describes a device’s ability to reproduce picture detail in the vertical direction.
– The vertical resolution is determined solely by the scanning system, that is, the number of scanning lines and
whether it operates in interlace or progressive mode.
– However, there are two additional points to take into account:
1- The number of lines actually used for picture content (active lines)
2- The scanning mode of the video system (interlace or progressive)
– Since only half of the active lines are scanned in one field, one might expect the vertical resolution to be halved.
– However, the interlace mechanism makes the human eye perceive the two fields together as a complete frame.
– In a 625-line system, only 576 lines (active lines) are used for picture content, so the vertical resolution is
approximately 403 lines (576 × 0.7 ≈ 403).
231
The vertical resolution of all interlaced systems is about 70% of their active line count.
For progressive systems, the vertical resolution is exactly the same as the number of active lines.
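A minimal sketch of this rule (the 0.7 interlace factor is the figure given in this document; the function name is illustrative):

def vertical_resolution(active_lines, interlaced, interlace_factor=0.7):
    """Effective vertical resolution in TV lines."""
    return active_lines * interlace_factor if interlaced else active_lines

print(round(vertical_resolution(576, interlaced=True)))    # ~403 lines (625-line system)
print(round(vertical_resolution(1080, interlaced=False)))  # 1080 lines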
232. 232
Resolution Test Chart
[Figure: horizontal waveform of a resolution test chart; the green curve shows the response when both Detail Level and Vertical Detail are set to 0, the default value]
233. Standard Monochrome Signals
233
[Figure: CRT raster scan (odd lines) and the resulting signal over time t]
− The first commercial standards were 60 lines.
− The original ‘high definition’ was 405 lines, monochrome.
− Television is transmitted and recorded as frames.
• Similar to film.
− Each frame is scanned in the camera or camcorder.
• This is called a raster scan.
• A raster scan scans line by line from top to bottom.
• Each line is scanned from left to right.
− SD standards were 525 and 625 lines.
• Half the number of lines in each field.
• The signal is zero for black.
• The signal increases as the brightness increases.
234. Standard Monochrome Signals
234
[Figure: CRT raster (odd lines); one line of signal over time t, with the start and end of a line marked]
A line:
Horizontal blanking + Active line
• Horizontal blanking: the horizontal flyback interval
• Active line: active picture (vision line, TV line)
A field (frame):
Horizontal blanking + Active picture + Vertical blanking
• Active picture: active lines within the picture
• Vertical blanking: flyback lines that are not seen
Trace ⇒ Active line
Retrace ⇒ Horizontal flyback line, horizontal blanking (interval)
Vertical flyback line ⇒ Vertical blanking interval (field blanking)
237. Standard Monochrome Signals
[Figure: Y video signal vs. line number through the field blanking of a 625-line system — end of Field 2 / start of Field 1 around lines 621–625 and 1–25, end of Field 1 / start of Field 2 around lines 308–338; 0 V reference shown]
237
238. Synchronization Pulses (Sync Pulses)
238
− Horizontal sync in the horizontal blanking interval locks the picture horizontally.
− Vertical sync in the vertical blanking interval locks the picture vertically.
[Figure: camera-to-TV signal path with the H-sync and V-sync pulses marked]
239. Synchronization Pulses (Sync Pulses)
[Figure: the same field-blanking waveform with the horizontal synchronizing pulses (H-sync pulses) highlighted]
239
240. Synchronization Pulses (Sync Pulses)
[Figure: the same field-blanking waveform with both the horizontal synchronizing pulses (H-sync) and the vertical synchronizing pulse sequences (V-sync) highlighted]
240
242. Switching Window
[Figure: the same field-blanking waveform with the vertical-interval switching windows marked, at 35 µs and 25 µs]
Note: in NTSC, the switching window is on lines 10 and 273.
242
244. The Basic Television Signal — Example
[Figure: one scanned line of a scene, starting with the H-sync pulse, annotated as follows:]
• Short white areas of the line for the sails produce sharp white spikes in the signal.
• Trees and bushes with light and dark areas produce an undulating signal.
• The sky is bright and produces a high signal, almost as high as the white sails.
• Shadows in the trees produce a low signal.
• A very small bright area between the trees produces a very sharp spike in the signal.
• The part of the line with black shadows produces a low signal.
244
245. Composite Video Signal (Monochrome)
Composite Video Signal (CVS) = Video signal + Blanking + Sync pulses
[Figure: one line of composite video — horizontal blanking interval (12 µs) containing the front porch, the horizontal synchronizing pulse (H-sync, 4.7 µs) and the back porch, followed by the active or visible line interval (vision, 52 µs); white level 700 mV, blanking level 0 mV, H-sync tip −300 mV; the vertical synchronizing pulse sequence (V-sync) occupies the field blanking]
245
246. IRE (Institute of Radio Engineers)
− The Institute of Radio Engineers (IRE) was a professional organization which existed from 1912 until 1962.
− On January 1, 1963 it merged with the American Institute of Electrical Engineers to form the Institute of
Electrical and Electronics Engineers (IEEE).
Since the sync signal is exactly 40 IRE, the active video range (from black level to white) is exactly 100 IRE.
246
[Figure: the composite line with levels in IRE — active video from 0 to 100 IRE (0 to 700 mV), sync tip at −40 IRE (−300 mV); front porch, H-sync (4.7 µs), back porch, horizontal blanking interval (12 µs), active line (52 µs); one IRE unit = 7.14 mV (1 V p-p / 140 IRE)]
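A minimal sketch of the IRE scale described above, using the 1 V p-p / 140 IRE definition (so 1 IRE ≈ 7.14 mV; note the 700 mV white level in the figure corresponds to the slightly different 7 mV-per-IRE convention):

MV_PER_IRE = 1000.0 / 140.0  # ~7.14 mV per IRE

def ire_to_mv(ire):
    """Convert a level in IRE units to millivolts."""
    return ire * MV_PER_IRE

print(round(ire_to_mv(100)))  # white level ~714 mV
print(round(ire_to_mv(-40)))  # sync tip ~-286 mV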
247. VBS/BS Signal
– The VBS (Video Burst Sync) signal refers to a composite video signal in which the active video area
contains actual picture content or color bars.
– The BS (Burst Sync) signal does not contain picture content; its active video area is kept at setup
level.
247
248. Contrast vs. Brightness
248
[Figure: a video line (H-sync + vision/active line) as voltage V vs. time t — contrast is changed by amplification of the video signal; brightness is changed by adding a constant DC voltage, shifting the DC level]
249. 249
Deflection System
[Figure: CRT cross-section:]
1. Electron guns
2. Electron beams
3. Focusing coils
4. Deflection coils
5. Anode connection
6. Shadow mask
7. Phosphor layer
8. Close-up of the phosphor-coated inner side of the screen
252. End of Active Video (EAV) & Start of Active Video (SAV) in Digital SDTV
252
TRS header: 3FFh, 000h, 000h
[Figure: EAV marks the end of the previous line and SAV the start of the new line; lines 621–625 and 1–3 shown at the Field 2 / Field 1 boundary]
253. End of Active Video (EAV) & Start of Active Video (SAV)
253
TRS header: 3FFh, 000h, 000h
[Figure: NTSC waveform vs. SDI waveform of the same line, annotated as follows:]
NTSC waveform
• Black level (setup) at 7.5 IRE
• Color burst location (9 cycles)
• Horizontal timing reference in NTSC: midpoint of the leading edge of H-sync
• NTSC line start (vs. SDI line start)
SDI waveform
• Black level (setup) at 040h
• SDI data, bounded by EAV and SAV
• Horizontal timing reference in SDI
• Negative pulse caused by failing to black-clip the luminance
• H-ancillary period: embedded audio location (none shown)
254. Timing Reference Signal (TRS) Codes in Digital SDTV
254
TRS header: 3FFh, 000h, 000h, followed by the “xyz” word (forming the EAV or SAV sequence)
− The “xyz” word is a 10-bit word with the two least significant bits set to zero
to survive an 8-bit signal path. Contained within the standard-definition
“xyz” word are the flags F, V and H, which have the following values:
• Bit 8 – (F-bit): 0 for field one and 1 for field two
• Bit 7 – (V-bit): 1 in the vertical blanking interval; 0 during active video lines
• Bit 6 – (H-bit): 1 indicates the EAV sequence; 0 indicates the SAV sequence
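A minimal sketch of packing the F, V and H flags described above into a 10-bit “xyz” word. Assumptions beyond this slide: bit 9 is fixed at 1 (as in BT.656-style TRS words), and the protection bits (bits 5–2) defined by the full standard are left at zero here; the function name is illustrative.

def xyz_word(field_two, vblank, eav):
    """Build a (simplified) 10-bit TRS 'xyz' word from the F, V, H flags."""
    word = 1 << 9                          # bit 9: fixed one (assumption, see above)
    word |= (1 if field_two else 0) << 8   # F-bit: 0 = field one, 1 = field two
    word |= (1 if vblank else 0) << 7      # V-bit: 1 = vertical blanking
    word |= (1 if eav else 0) << 6         # H-bit: 1 = EAV, 0 = SAV
    return word                            # bits 1-0 stay zero (8-bit safe)

# EAV in field one, active video (F=0, V=0, H=1):
print(hex(xyz_word(field_two=False, vblank=False, eav=True)))  # 0x240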