The document discusses standards for serial digital interface (SDI) video signals. It provides information on:
- Early SDI standards including SMPTE 259M for SD-SDI at 270 Mbps and how they standardized a serial digital video connection.
- Video signal sampling structures and resolutions for SD, HD, and UHD formats.
- The development of higher data rate SDI standards up to 12G-SDI and 24G-SDI to support higher resolution video.
- Electrical parameters and cable distance limitations for different SDI data rates.
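The 270 Mbps figure quoted for SD-SDI falls directly out of the ITU-R BT.601 4:2:2 sampling structure. As a quick arithmetic check (a sketch added here, not taken from the source document):

```python
# SD-SDI (SMPTE 259M) serial rate from the BT.601 4:2:2 sampling structure.
Y_RATE = 13.5e6        # luma samples per second
C_RATE = 6.75e6        # samples per second for each of Cb and Cr
BITS_PER_SAMPLE = 10   # SDI carries 10-bit words

total_samples = Y_RATE + 2 * C_RATE          # 27 Msamples/s multiplexed
serial_rate = total_samples * BITS_PER_SAMPLE

print(serial_rate / 1e6)  # → 270.0 (Mbps)
```

The same multiplication explains the higher-rate interfaces: scaling the sample rate for HD raster sizes yields the nominal 1.485 Gb/s of HD-SDI, and so on up the family.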
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2 (Dr. Mohieddin Moradi)
This document discusses high definition video standards including SMPTE 274M, 292M, 372M and dual link SDI formats. It provides details on:
- The HD-SDI standards that define 1080p and 720p video formats and carriage through 1.5Gb/s serial digital interface.
- The timing reference signal codes used in HD-SDI to identify lines and perform error checking.
- How a 12-bit color depth can be achieved within the dual link standard by mapping the additional bits across both links.
- The benefits of 3Gb/s SDI and dual link formats for working at higher resolutions and color spaces prior to finishing.
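The timing reference signal mentioned above is the four-word EAV/SAV sequence (3FF 000 000 XYZ) that brackets each line. A minimal sketch of decoding the flag bits of the XYZ word, assuming the standard bit layout (bit 8 = F for field, bit 7 = V for vertical blanking, bit 6 = H distinguishing EAV from SAV); in HD-SDI the EAV is additionally followed by line-number and CRC words:

```python
# Decode the F/V/H flags from a 10-bit SDI timing-reference "XYZ" word.
# In the EAV/SAV sequence (3FF 000 000 XYZ), bit 9 is fixed at 1,
# bit 8 is F (field), bit 7 is V (vertical blanking) and bit 6 is H
# (1 = EAV, 0 = SAV); bits 5..2 carry Hamming-style protection bits.
def decode_xyz(word: int) -> dict:
    return {
        "F": (word >> 8) & 1,
        "V": (word >> 7) & 1,
        "H": (word >> 6) & 1,
    }

# 0x274 is the EAV word for an active picture line in field 1,
# 0x200 the corresponding SAV word.
print(decode_xyz(0x274))  # → {'F': 0, 'V': 0, 'H': 1}
print(decode_xyz(0x200))  # → {'F': 0, 'V': 0, 'H': 0}
```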
The document discusses various networking protocols and standards related to professional media over IP, including:
- SMPTE ST 2110 standards that define carriage of uncompressed video, audio, and data over IP networks as separate elementary streams.
- AES67, which enables high-performance audio-over-IP streaming interoperability between different IP audio networking products.
- Other relevant standards and protocols like SMPTE ST 2022, AIMS recommendations, Video Services Forum TR-03/04, RTP, SDP, PTP, and IGMP.
- Considerations for designing IP infrastructures for media networks, including capacity, connectivity, timing, control, and redundancy.
This document discusses key elements that contribute to high quality image production, including spatial resolution, frame rate, dynamic range, color gamut, bit depth, and compression artifacts. It examines these elements in the context of 4K and 8K broadcast cameras and their advantages over HD. Factors like wider viewing angles, increased perceived motion, and benefits for nature documentaries are cited as motivations for 8K. Technical details covered include lens flange back distance, flare, shading, chromatic aberration, and testing procedures. Overall quality is represented as a function of these various image quality factors.
This document discusses IP interfaces for video production and summarizes the benefits of IP-based systems compared to SDI. It provides examples of IP-enabled video switchers and control systems from Sony and Grass Valley. The rest of the document discusses standards organizations and specifications that enable IP interoperability such as SMPTE ST 2110, AES67, and AIMS. It also summarizes IP routing and processing platforms like Grass Valley's GV Node and control systems like Lawo's VSM.
The document provides an overview of key elements and trends in high-quality image production, including spatial resolution, temporal resolution, dynamic range, color gamut, quantization, and related technologies. It discusses technologies like HD, UHD, HDR and WCG and how they improve the total quality of experience. Images and charts are included to illustrate comparisons of technologies and results from industry surveys on trends and commercial projects.
Video Compression, Part 3-Section 2, Some Standard Video Codecs (Dr. Mohieddin Moradi)
This document discusses MPEG-2 Transport Streams and Packetized Elementary Streams. It describes how MPEG-2 Transport Streams use fixed-length 188-byte packets containing compressed video, audio, or data from one or more programs, identified by Packet IDs (PIDs). These packets can carry Packetized Elementary Stream (PES) packets, which contain compressed elementary streams with timestamps for synchronization. The document also discusses how Transport Streams allow synchronous multiplexing of multiple programs from independent time bases into a single stream.
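The packet layout summarized above can be illustrated with a minimal parser for the standard 4-byte transport stream header (a sketch added here, not code from the source document):

```python
# Parse the 4-byte header of an MPEG-2 Transport Stream packet (ISO/IEC 13818-1).
def parse_ts_header(packet: bytes) -> dict:
    assert len(packet) == 188 and packet[0] == 0x47, \
        "need a 188-byte packet starting with sync byte 0x47"
    return {
        "transport_error":    bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid":                ((packet[1] & 0x1F) << 8) | packet[2],  # 13-bit PID
        "continuity_counter": packet[3] & 0x0F,
    }

# A minimal null packet: sync byte, PID 0x1FFF, payload-only, stuffing bytes.
null_pkt = bytes([0x47, 0x1F, 0xFF, 0x10]) + b"\xff" * 184
print(parse_ts_header(null_pkt)["pid"])  # → 8191 (0x1FFF, the null PID)
```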
This document provides an overview of video standards and concepts related to standard definition television (SDTV) and high definition television (HDTV). It begins with definitions of key terms like interlacing, progressive scanning, and frame rates. It then covers standards for monochrome signals, including signal timings, synchronization pulses, and blanking intervals. Digital SDTV standards like line counts, field structures, and ancillary data space are also summarized. The document concludes with discussions of spatial resolution, optimal viewing distances, and different aspect ratios used in television.
This document provides an overview of analog and digital triax systems used for video transmission. It discusses key aspects of triax cables, such as their ability to carry multiple signals simultaneously through a single bundled cable. Analog triax systems transmit component signals on different carrier frequencies, while digital triax systems serialize the signals in digital form. The document also covers triax cable specifications, common connector types used in broadcasting, fiber optic cable types including single-mode and multi-mode, and common fiber connectors. Transmission distances and electrical properties of triax cables are discussed.
HDR, wide color gamut, and higher frame rates are new technologies that can improve image quality for ultra high definition televisions. They provide benefits like more vivid colors, deeper blacks, better shadow detail, and a more immersive viewing experience. However, supporting these new features requires significantly more data bandwidth compared to legacy standards. Future video standards will need to efficiently support higher resolutions, wider color, high dynamic range, and high frame rates to deliver next-generation picture quality while still allowing content to be economically distributed.
This document provides an overview of color video signals and color perception by the human visual system. It discusses:
1. The sensitivity of human cone cells to different wavelengths of light and how this determines color perception.
2. How color video signals like YUV, RGB, and composite video encode color and brightness information.
3. Standards for analog color television transmission including NTSC, PAL, and SECAM which differ in aspects like lines, frame rate, and color encoding.
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES (Dr. Mohieddin Moradi)
This document discusses elements of high-quality image production for television broadcasting such as spatial resolution, frame rate, dynamic range, color gamut, quantization, and total quality of experience. It outlines these elements and provides examples of their implementation in HD, UHD1, and UHD2 formats. Motivations for 8K and 4K broadcasting are discussed related to improved image quality, new applications, and bandwidth efficiency trends. Implementation examples of 4K and 8K broadcasting systems from Japan, Korea, Sweden, and the UK are also summarized.
This document provides an overview of high definition television (HDTV) standards and concepts such as color gamut, color bars test signals, colorimetry, chroma adjustment, and luminance adjustment. It discusses differences between standard definition (SDTV) and HDTV color bars, how wider color gamuts in HDTV allow for deeper colors, and how to use various elements of the color bars signal to properly adjust a display's color, brightness, contrast, and chroma. The document contains diagrams demonstrating color gamuts and examples of how objects appear within different gamuts.
This document provides information about 4K lens specifications and performance. It discusses key optical parameters for 4K lenses such as sharpness, chromatic aberration, depth of field, and resolution. The document explains how 4K lenses are designed to minimize chromatic aberration and enhance modulation transfer function to improve image quality. It also describes the benefits of 4K lenses for wide color gamut and high dynamic range imaging applications. These benefits include reduced color fringing, flare, and black level for increased dynamic range. Examples are provided comparing image quality between 4K and HD lenses. The document concludes with information about Canon's cinema lens lineup and technologies.
This document outlines elements of high-quality image production, including spatial and temporal resolution, dynamic range, color gamut, bit depth, and coding. It discusses color gamut conversion, gamma correction, HDR and SDR mastering, tone mapping, and backwards compatibility. The document also covers HDR metadata standards and different distribution scenarios for HDR content.
HEVC/H.265 is a video compression standard that provides around 50% better compression over H.264/AVC for the same level of video quality. It was finalized in 2013 by the joint collaboration of MPEG and ITU-T. Key features of HEVC include support for higher resolutions like 4K and 8K, improved parallel processing abilities, increased coding efficiency through larger block sizes and an expanded set of prediction modes.
Video Compression, Part 4-Section 1, Video Quality Assessment (Dr. Mohieddin Moradi)
This document provides an overview of video compression artifacts that can occur when video is compressed for streaming or storage. It discusses both spatial artifacts, such as blurring, blocking, ringing, and color bleeding, as well as temporal artifacts like flickering and mosquito noise. For each artifact, it describes the visual appearance and potential causes from factors like quantization during compression, motion compensation between frames, and chroma subsampling. The document aims to help understand how compression can degrade perceptual video quality and different types of artifacts that may be evaluated both objectively and subjectively.
This document discusses various optical and technical aspects of camera lenses, including:
1) It defines focal length as the distance between a lens and the point where light passing through converges, known as the focal point. Shorter focal lengths provide wide-angle views while longer focal lengths provide magnified close-up views.
2) F-number and f-stop are defined: the f-number expresses the maximum amount of light a lens can admit, while f-stops mark the light levels at smaller iris openings. Smaller f-numbers and f-stop values admit more light.
3) The relationship between aperture, focal length, and depth of field is explained: smaller apertures provide deeper depth of field, while larger apertures produce a shallower depth of field.
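The f-number relationship described in point 2 is simply the focal length divided by the entrance-pupil diameter, with each full stop scaling that ratio by a factor of sqrt(2). A small numerical sketch (illustrative only, not from the source document):

```python
import math

# f-number N = focal_length / aperture_diameter. Light reaching the
# sensor is proportional to 1/N**2, so multiplying N by sqrt(2)
# (one full stop) halves the light.
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    return focal_length_mm / aperture_diameter_mm

# A 50 mm lens with a 25 mm entrance pupil is an f/2 lens.
print(f_number(50, 25))  # → 2.0

# Ratio of light at f/2 versus one stop down (f/2 * sqrt(2) ≈ f/2.8).
one_stop_ratio = (1 / 2.0**2) / (1 / (2.0 * math.sqrt(2)) ** 2)
print(round(one_stop_ratio))  # → 2
```

The familiar marked series f/1, 1.4, 2, 2.8, 4, 5.6, 8, 11 is just successive powers of sqrt(2), rounded to conventional values.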
This document provides definitions and explanations of various optical terminology related to light passing through a lens, including:
- Dispersion, refraction, diffraction, reflection, focal point, focal length, principal point, image circle, aperture ratio, numerical aperture, optical axis, and more. It discusses concepts such as entrance pupil, exit pupil, angular aperture, and how they relate to lens performance. The document also covers topics like vignetting, the cosine law, and flare. Overall, it serves as a comprehensive reference for understanding optical and photographic lens terminology.
This document provides information about various camera settings and technologies for capturing clear images, including:
1. Clear Scan helps eliminate banding caused when a camera's frame rate does not match a CRT display's refresh rate.
2. Slow Shutter extends the camera's exposure time to produce blur effects or allow more light in low-light scenes.
3. Super Sampling uses a 1080p camera to produce sharper 720p images by maintaining higher frequency response.
4. Detail correction adds a spike-shaped detail signal to make edges appear sharper without degrading resolution. Settings like detail level and H/V ratio control the amount and balance of detail correction.
5. Other related topics are also covered.
Dr. Mohieddin Moradi provides an outline on high dynamic range (HDR) technology. The 3-page document covers various topics related to HDR including different HDR technologies, tone mapping, color representation, and HDR standards. It discusses concepts such as scene-referred vs display-referred conversions, and direct mapping vs tone mapping when converting between HDR and SDR formats. The document also examines potential side effects when mixing different conversion techniques in a production workflow.
This document outlines an educational course on audio and video over IP. The course covers IP networking fundamentals and standards including TCP/IP, OSI models, and SMPTE ST 2110. It also examines IP infrastructure, routing, timing issues, switching, compression techniques and case studies for broadcast facilities transitioning to IP. The document provides an in-depth outline of topics covered in each session, from IP basics to designing and integrating both hybrid and fully IP-based outside broadcast trucks. The goal is to educate on best practices for implementing audio and video over IP workflows and infrastructure.
This document provides information about quality control testing of audiovisual content. It discusses various quality control tests that can be performed, including tests for analogue frame synchronization errors, black bars, constant colour frames, flashing video, macroblocking, video deinterlacing artifacts, and digital tape dropouts. Examples are provided for how each test can be configured and what results might look like. The goal of the quality control tests is to help broadcasters optimize their automated quality control systems and cope with increasing amounts of digital content.
This document provides an overview of high dynamic range (HDR) technology and workflows for HDR video production and mastering. It discusses HDR standards like SMPTE ST 2084 and ARIB STD-B67, camera log curves, luminance levels, and tools for setting up HDR monitoring including waveform monitors. Specific topics covered include HDR graticules, setting luminance levels for highlights and grey points, and using zebra patterns and zoom modes to evaluate highlight levels in HDR images.
The document discusses high dynamic range (HDR) video technology including:
- Different HDR formats such as SMPTE ST 2084 (PQ) and ARIB STD-B67 / ITU-R BT.2100 (HLG)
- Code value ranges for 10-bit and 12-bit RGB and color difference signals in narrow and full ranges
- Recommendations for using narrow versus full signal ranges for PQ and HLG
- Transcoding concepts when converting between PQ and HLG formats
- Considerations for including standard dynamic range (SDR) content in HDR programs
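The narrow-range code values referred to above follow a fixed quantization rule. A sketch of the BT.2100-style narrow ("video") range mapping for luma/RGB signals, assuming the standard 219/16 scaling (added here for illustration):

```python
# BT.2100 "narrow" (video) range quantization for an n-bit luma/RGB signal:
# code = round((219*E + 16) * 2**(n-8)), with normalized signal E in [0, 1].
def narrow_range_code(E: float, bits: int = 10) -> int:
    return round((219 * E + 16) * 2 ** (bits - 8))

print(narrow_range_code(0.0))      # → 64   (10-bit black)
print(narrow_range_code(1.0))      # → 940  (10-bit nominal peak)
print(narrow_range_code(1.0, 12))  # → 3760 (12-bit nominal peak)
```

Full range, by contrast, uses the entire code space (0 to 2**n - 1), which is one reason the document's narrow-versus-full recommendations for PQ and HLG matter when signals are interchanged.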
Video Compression, Part 3-Section 1, Some Standard Video Codecs (Dr. Mohieddin Moradi)
- ISO/IEC JTC 1/SC 29 and ITU-T are the main organizations that develop video coding standards through working groups like MPEG and VCEG.
- Early standards include H.261 for video telephony and conferencing, and MPEG-1 for Video CD-quality video.
- Later standards like H.264/AVC, HEVC, and future VVC provide increasingly higher compression through use of block transforms, motion compensation, and entropy coding in a hybrid video codec framework.
- Key organizations periodically collaborate through joint teams like JVT and JCT-VC to develop standards like AVC and HEVC.
Video Compression Standards - History & Introduction (Champ Yen)
This document provides an overview of several video compression standards including MPEG-1/2, MPEG-4, H.264, and HEVC/H.265. It discusses the key concepts of video coding such as entropy coding, quantization, transformation, and intra- and inter-prediction. For each standard, it describes the main coding tools and improvements over previous standards, focusing on techniques for more efficient prediction and extraction of redundant spatial and temporal information while maintaining quality. The development of these standards has moved towards more fine-grained partitioning and new coding ideas and tools to reduce bitrates further.
1. The document discusses color temperature and how different light sources emit different color spectrums that video cameras must account for through color balancing. Color temperature is used as a reference to adjust the camera's color balance to match the light source.
2. After color temperature conversion optically or electronically, white balance is then used to precisely match the light source color temperature by adjusting the camera's video amplifiers.
3. Other topics covered include polarizers, neutral density filters, and technical aspects of video such as gamma correction and clipping levels.
The document introduces 3 Gbps SDI technology and product offerings from Gennum. Gennum's solution includes equalizers to compensate for cable attenuation, reclockers to reduce jitter, and transmitters and receivers with advanced video processing features. The document provides standards information and guidelines for implementing 3 Gbps SDI, including using surface-mount components and isolating power supplies to minimize noise and reflections.
The XFM50-MPCUHD-A is a multi-purpose converter module that supports UHDTV, 3G, HD, and SD formats. It provides up/down/cross conversion between these formats as well as color space conversion, scaling, cropping, and region of interest selection from UHDTV sources. The module handles various SDI interface configurations including single link, dual link, quad link, copper, and fiber, and processes UHDTV signals with data rates up to 12 Gbps.
The XFM50-MPCUHD-A is a multi-purpose converter module that supports UHDTV, 3G, HD, and SD formats. It provides up/down/cross conversion between these formats as well as color space conversion, scaling, cropping, and region of interest selection from UHDTV sources. The module handles various SDI interface configurations including single link, dual link, quad link, copper, and fiber and processes UHDTV signals with data rates up to 12Gbps.
The 6241HDxl is a 4x1 switcher that takes four 3G HD-SDI video inputs and outputs one signal. It provides reclocking and equalization of signals up to 3Gbps for long cable runs. Key features include support for SDI, HD-SDI and 3G HD-SDI standards, Kramer equalization technology, and compact size for rack mounting. It has 4 BNC inputs, 1 BNC output, automatic equalization, and options for rack mounting.
This document provides an overview of Multidyne's product line of video and fiber optic systems. It summarizes the company's 30+ year history developing fiber optic technology, key customers and projects. It also describes Multidyne's various product offerings for professional AV, broadcast, graphics, and military applications, including fiber transport systems, switchers, test equipment and tactical fiber assemblies. The document concludes by highlighting Multidyne's strengths in field-proven performance, quality, support and manufacturing in the US.
This document describes a real-time video and image processing system that combines a Spartan 3 FPGA and DSP array on a SigC641x card. The DSP array can include up to 8 Texas Instruments 1 GHz DSPs with access to various analog and digital video I/O. Video data is preprocessed by the FPGA and DSPs before being sent to the PCI bus or between DSPs. The system provides SD and HD video I/O and is in alpha testing, with production units expected in 3Q08.
RGB Broadcast Services Corp. is a Puerto Rico-based company that provides various services including broadcast, RF, signage, audiovisual, and hospitality solutions. They have completed projects for many Puerto Rican television and radio stations. Their services include digital signage, radio transmitters, microwaves links, and in-room entertainment systems for hotels.
The PMW-300 camcorder has three 1/2-inch Full-HD CMOS sensors that provide high resolution and sensitivity with low noise. It offers various recording formats up to 50 Mbps and incorporates a 3.5-inch LCD viewfinder. An optional wireless adapter allows streaming footage to tablets and remote transmission of clips. The camcorder has a retractable chest pad for comfortable operation and various interfaces, including two SDI outputs and HDMI.
This document provides details of a CCTV surveillance system for a pig farm in Thailand with 144 cameras. It includes a network diagram, equipment list, server configuration, and summaries of the installation at different locations. The key components are 144 IP cameras, servers, recording and management software, and 133 monitors installed at the Organization Administration of Suphanburi province. Details such as camera locations, server setup, and delivery of the project are outlined.
The PMW-300 is a new solid-state memory camcorder introduced by Sony for field video production and studio applications. It has three 1/2-inch CMOS sensors providing high picture quality and low noise. It can record for up to 4 hours on two 64GB memory cards in HD422 50Mbps mode. It has various recording formats and codecs as well as wireless capabilities when used with an optional adapter. It includes features such as a 14x or 16x zoom lens, retractable chest pad, and 3.5-inch LCD viewfinder for comfortable operation.
The PMW-300K1 XDCAM camcorder is equipped with three 1/2-inch Exmor™ full-HD CMOS sensors, capable of delivering high-quality images even in low-light environments.
MIPI DevCon 2021: Meeting the Needs of Next-Generation Displays with a High-P...MIPI Alliance
Presented by Alain Legault, Hardent Inc.; Joe Rodriguez, Rambus Inc.; and Justin Endo, Mixel, Inc.
Next-generation display applications have an insatiable appetite for bandwidth. Using a combination of VESA Display Stream Compression (DSC) and MIPI DSI-2℠ technology, designers can achieve display resolutions up to 8K without compromise to video quality, battery life or cost. This presentation discusses a fully integrated, off-the-shelf display IP subsystem solution, consisting of Mixel (MIPI C-PHY℠/D-PHY℠ combo), Rambus (MIPI DSI-2® controller) and Hardent (VESA DSC) IP, that can deliver this state-of-the-art performance in a power-efficient and compact footprint.
This document discusses the essential considerations for 4K/UHD video. It covers 12 topics: resolution, throughput and need for compression, common compression formats, display standards and minimum viewing distances, connectivity interfaces like SDI and HDMI, frame rates, color spaces, bit depth, camera optics and sensors, audio standards, an example media workflow chain, and objective quality measurement. It provides information on resolution, sampling, bit depth, frame rates and data rates for different formats. It also includes example test results and source sample information.
Presentazione Broadcast H.265 & H.264 Sematron Italia - Maggio 2016Sematron Italia S.r.l.
This document provides an agenda and overview for a presentation on H.265 and H.264 technologies from Sematron Italia. The agenda includes presentations from Paralinx, TeraDek, Soliton, and Vitec on their latest products, followed by a question and answer session and commercial proposal. Sematron Italia is introduced as a partner for leading companies in defense, telecommunications, satellite communications, aerospace and broadcast with over 30 years of experience. The document also provides overviews of Sematron Italia's divisions for microwave/RF components, satellite communications, and broadcast solutions.
M3L Inc Company Profile (August 19th, 2020 version)M3L Inc.
M3L Inc. is an experienced media-over-IP company that provides customizable IP cores and technical consulting services. They have expertise in SMPTE ST 2110, ST 2059, and other media over IP standards. Their portfolio includes IP cores for video, audio, clock recovery, and network functions that help customers develop state-of-the-art interfaces for broadcast and communications equipment. M3L has had success partnering with broadcast equipment manufacturers in Japan and has demonstrated their solutions at industry events.
International Fiber Systems 601B-R/1B Data SheetJMAC Supply
Buy the International Fiber Systems 601B-R/1B at JMAC Supply.
https://www.jmac.com/IFS_International_Fiber_Systems_601BR_1B_p/ifs-601br-1b.htm?=slideshare
Omid Technologies Inc. designs and manufactures broadcast products including encoders, decoders, adaptors, and set-top boxes. The MCE-4000 is an H.264/MPEG-4/MPEG-2 encoder that can encode HD and SD video over IP, satellite, or terrestrial networks. The MCD-2000 is a professional receiver and decoder that can decode MPEG-2 and H.264/MPEG-4 streams from satellite, terrestrial, ASI, or IP inputs. Both products offer remote web-based management and are compatible with Omid's HMS-6000 management system.
This document summarizes the features of the JVC DT-V17G1Z 17-inch multi-format LCD monitor. It supports 3G-SDI and dual link formats, has a wide viewing angle IPS LCD panel, and includes waveform and vectorscope tools for signal monitoring. The monitor also provides gamma selection, audio level metering for up to 12 channels, and closed captioning support.
15. ANSI/SMPTE 259M – A SMPTE standard describing a 10-bit serial digital interface operating at 143/270/360 Mb/s. The goal of
SMPTE 259M is to define a serial digital interface over coaxial cable, called SDI or SD-SDI.
ANSI/SMPTE 240M – Signal Parameters – 1125-Line High-Definition Production Systems. Defines the basic characteristics of analog video
signals associated with origination equipment operating in 1125 (1035 active) production systems at 60 Hz and 59.94 Hz field rates.
SMPTE 260M – Digital Representation and Bit-Parallel Interface – 1125/60 High-Definition Production System. Defines the digital representation
of 1125/60 high-definition signal parameters defined in analog form by ANSI/SMPTE 240M.
ANSI/SMPTE 274M – 1920 x 1080 Scanning and Analog and Parallel Digital Interfaces for Multiple Picture Rates. Defines a family of scanning
systems having an active picture area of 1920 pixels by 1080 lines and an aspect ratio of 16:9.
ANSI/SMPTE 292M – Bit-Serial Digital Interface for High-Definition Television Systems. Defines the bit-serial digital coaxial and fiber-optic
interface for high-definition component signals operating at 1.485 Gb/s and 1.485/1.001 Gb/s.
ANSI/SMPTE 295M – 1920x1080 50 Hz – Scanning and Interface
ANSI/SMPTE 296M – 1280 x 720 Scanning, Analog and Digital Representation and Analog Interface. Defines a family of progressive scan
formats having an active picture area of 1280 pixels by 720 lines and an aspect ratio of 16:9.
ANSI/SMPTE 372M – Dual Link 292M. Defines a method for carrying 1080i/p YCbCr formats and RGBA 1080i/p formats in either 10- or 12-bit
formats via two HD-SDI links.
ANSI/SMPTE 424M – 3 Gb/s Signal/Data Serial Interface. Defines a method for transporting 3 Gb/s serial digital signal over a coaxial interface.
ANSI/SMPTE 425M – 3 Gb/s Signal/Data Serial Interface – Source Image Format Mapping. Defines the method of transporting 1920x1080 and
2048x1080 picture formats over a single transport interface of 3 Gb/s.
SMPTE ST 2082 – 12G-SDI Bit-Serial Interfaces
SDI Standards
15
21. − Combined Y, R-Y and B-Y signals.
− Professional equipment normally uses a single 75-ohm BNC connector.
− Consumer and domestic equipment may use an RCA connector.
− The RCA connector is sometimes called a phono connector.
Analogue Composite Signal
21
23. [Figure: two plots of the Y video signal against line number for a 625-line signal, showing the field-blanking intervals at the field 1/field 2 and field 2/field 1 transitions (around lines 311–336 and 623–23 respectively).]
Horizontal and Vertical Blanking in Analogue Signal
23
24. Answered the industry’s call for a standard digital video connection.
CCIR-601, published in 1982:
– Specified the signal (the sampling structure for 525- and 625-line systems).
– A simple document, but it allowed the industry to standardise.
CCIR-656, published in 1986:
– Specified the interconnection (the serialisation process for standard-definition signals).
– Both a parallel and a serial transmission format are defined.
– CCIR-601 was revised alongside it.
CCIR-601 & CCIR-656 should be taken together:
– There is a certain amount of overlap between 601 & 656.
The CCIR was disbanded in favour of the ITU:
– CCIR-601 is now called ITU-R BT.601.
– Also ratified as SMPTE 259M (the SD-SDI signal).
CCIR Recommendations
24
25. [Table: pin-out of the 25-pin D-sub parallel interface connector — differential pairs (+/−) for the Clock, Data 0–7, and Spare A/B signals (the spares carry the two extra bits for 10-bit operation), plus GND pins and the cable shield on the connector shell.]
The CCIR 601/656 Connector
25
26. – Digitised component analogue video.
– 4:2:2 sampling structure.
– 720 pixels per active line (1440 samples).
– 13.5M pixels per second (27M samples).
– Special sync words.
– Partial commonality between the 525 and 625 systems.
– Parallel interface using a D25 connector and differential signals.
• CCIR-601 cables and connectors were bulky.
• Cables could not be very long.
• The broadcast industry wanted serial digital video.
– Using standard, commonly available BNC cables and connectors.
CCIR 601 & CCIR 656 Specifications
26
27. [Figure: parallel-to-serial converter with sync detector — parallel in, serial out.]
Sony designed two groundbreaking devices, the SBX-1601 and SBX-1602.
– SBX-1601: parallel-to-serial converter, used as the transmitter.
Sony SBX 1601 (SDI Encoder), Sony SBX 1602 (SDI Decoder)
27
28. [Figure: serial-to-parallel converter — serial in, parallel out.]
Sony SBX 1601 (SDI Encoder), Sony SBX 1602 (SDI Decoder)
Sony designed two groundbreaking devices, the SBX-1601 and SBX-1602.
– SBX-1602: serial-to-parallel converter, used as the receiver.
28
29. – Serial form of CCIR 601 and CCIR 656.
– Full 10 bit resolution.
– 270Mbps stream.
• Uses a 4:2:2 sampling structure.
– Each colour-difference signal carries half as many samples as the luma (black-and-white) signal.
• Connections up to 300 metres.
– 280 metres without error.
• Using good-quality BNC cables and connectors.
– Sometimes thinner versions of BNC are used.
• Component video connection.
– Digital connection keeps the samples separate.
Sony SBX 1601 (SDI Encoder), Sony SBX 1602 (SDI Decoder)
29
30. ANSI/SMPTE 259M (SD-SDI) (or ITU-R BT.656-4)
– A SMPTE standard describing a serial digital interface with 8- or 10-bit words, operating at 143/270/360
Mb/s.
– The goal of SMPTE 259M is to define a serial digital interface over coaxial cable, called SDI or SD-
SDI.
SMPTE 125M (or ITU-R BT.601-5)
– Component Video Signal 4:2:2 – Bit-Parallel Digital Interface
ITU-R BT.601
– Standard Definition sampling structure for 525 and 625
ITU-R BT.656
– Serialization process for standard definition signal.
– Interfaces for digital component video signals in 525-line and 625-line television systems operating at the
4:2:2 level of Recommendation ITU-R BT.601 (Part A)
ANSI: American National Standards Institute
SMPTE: Society of Motion Picture & Television Engineers
SDI Standards
30
39. − A number of other color-difference formats are in use for various applications.
− In particular, it is important to know that the coefficients currently in use for
composite PAL, SECAM, and NTSC encoding are different, as shown in the table.
Component Color Difference
39
40. − The RGB signals in the camera are gamma-corrected with the inverse function of the CRT.
− Gamma-corrected signals are denoted R', G', and B'; the prime mark (') indicates that a
correction factor has been applied to compensate for the transfer characteristics of the
pickup and display devices.
Gamma Correction
40
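As a numerical sketch of the idea (the 2.2 exponent below is illustrative, not a normative value; real camera transfer characteristics such as Rec.709's also include a linear segment near black):

```python
# Gamma pre-correction sketch: the camera applies the inverse of the
# display's power-law transfer so that camera + CRT is net linear.
# The 2.2 exponent is illustrative only.

GAMMA_CRT = 2.2

def gamma_correct(v: float) -> float:
    """Linear scene value (0..1) -> gamma-corrected signal V' = V^(1/gamma)."""
    return v ** (1.0 / GAMMA_CRT)

def crt_display(v_prime: float) -> float:
    """CRT light output L = V'^gamma."""
    return v_prime ** GAMMA_CRT

# Round trip: camera correction followed by CRT display is linear.
for v in (0.0, 0.18, 0.5, 1.0):
    assert abs(crt_display(gamma_correct(v)) - v) < 1e-9
```

Because the correction boosts dark values (0.18 maps to about 0.46), it also spends more of the signal range where human vision is most sensitive.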
41. Conversion of R'G'B' into Luma and Color-Difference
(Bandwidth-Efficient Method)
41
42. − The luma signal has a dynamic range of 0 to 700 mV.
− The color-difference signals, R'-Y' and B'-Y', may have different dynamic ranges
depending on the scaling factors for conversion to various component formats.
Conversion of R'G'B' into Luma and Color-Difference
42
44. − The analog component format denoted Y'P'bP'r is scaled so that both color-difference
values have a dynamic range of ±350 mV.
− This allows for simpler processing of the video signals.
Y', (R'-Y'), (B'-Y') Conversion to Y', P'b, P'r
44
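The conversion on this and the preceding slides can be sketched numerically. A minimal example using the BT.601 luma coefficients: the 1.772 and 1.402 divisors are exactly 2(1−Kb) and 2(1−Kr), which is what scales both difference signals to ±0.5 (±350 mV on a 700 mV scale):

```python
# R'G'B' (0..1) -> Y'PbPr using the BT.601 luma coefficients.
# Dividing by 1.772 and 1.402 scales B'-Y' and R'-Y' so that both
# color-difference channels span +/-0.5 (i.e. +/-350 mV of 700 mV).

KR, KG, KB = 0.299, 0.587, 0.114  # BT.601 luma coefficients

def rgb_to_ypbpr(r: float, g: float, b: float):
    y = KR * r + KG * g + KB * b
    pb = (b - y) / 1.772   # = (b - y) / (2 * (1 - KB))
    pr = (r - y) / 1.402   # = (r - y) / (2 * (1 - KR))
    return y, pb, pr

# Pure blue reaches the Pb extreme: (1 - 0.114) / 1.772 = +0.5
y, pb, pr = rgb_to_ypbpr(0.0, 0.0, 1.0)
assert abs(pb - 0.5) < 1e-9
```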
45. − Analog Y'P'bP'r values are offset to produce Y'C'bC'r values typically used within the
digital standards.
− The resulting video components are a Y’ or luma channel similar to a monochrome video
signal, and two color-difference channels, C'b and C'r that convey chroma information
with no brightness information, all suitably scaled for quantization into digital data.
Y', P'b, P'r Conversion to Y', C'b, C'r
45
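The offset and scaling into digital code values can be sketched for the common 8-bit narrow-range case (luma mapped to 16–235, color difference to 16–240 around a zero level at code 128); 10-bit systems use the same mapping with all values four times larger:

```python
# Analog Y'PbPr -> 8-bit Y'CbCr code values (narrow/"video" range).

def ypbpr_to_8bit(y: float, pb: float, pr: float):
    Y  = round(16 + 219 * y)     # y in 0..1      -> 16..235
    Cb = round(128 + 224 * pb)   # pb in -0.5..0.5 -> 16..240
    Cr = round(128 + 224 * pr)   # pr in -0.5..0.5 -> 16..240
    return Y, Cb, Cr

assert ypbpr_to_8bit(0.0, 0.0, 0.0) == (16, 128, 128)   # black
assert ypbpr_to_8bit(1.0, 0.0, 0.0) == (235, 128, 128)  # white
```

The headroom above 235/240 and footroom below 16 are reserved; in particular codes 0x00 and 0xFF are kept free for the timing reference signals.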
48. − A coprocessor is a processor used to supplement the functions of the primary processor
(the CPU). Here, a co-processor is used to add timing reference signals, AES/EBU-formatted
digital audio, and other ancillary data.
Processing and Serializing the Parallel Data Stream
Error detection code:
CRCC (Cyclic Redundancy Check Code)
48
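The error-detection step can be illustrated with a generic bitwise CRC. The polynomial below (CRC-CCITT) and the span of data it covers are illustrative only; the normative SD-SDI mechanism is the EDH checksum defined in SMPTE RP 165:

```python
# Illustrative bitwise CRC-16 over a sequence of 10-bit sample words.
# Polynomial and data coverage are for illustration, not per SMPTE RP 165.

POLY = 0x1021  # x^16 + x^12 + x^5 + 1 (CRC-CCITT)

def crc16(words, width=10):
    crc = 0
    for w in words:
        for i in range(width - 1, -1, -1):   # feed bits MSB first
            bit = (w >> i) & 1
            top = (crc >> 15) & 1
            crc = (crc << 1) & 0xFFFF
            if top ^ bit:
                crc ^= POLY
    return crc

line = [0x3FF, 0x000, 0x000, 0x200, 0x040] * 100
c = crc16(line)
corrupted = line.copy()
corrupted[7] ^= 0x004                        # flip one bit
assert crc16(corrupted) != c                 # single-bit errors are caught
```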
49. – At the receiver, energy at half the clock frequency is sensed in order to apply appropriate analog equalization to the incoming 270
Mb/s data signal.
– A new 270 MHz clock is recovered from the NRZI signal edges, and the equalized signal is sampled to determine its logic state.
– The deserializer unscrambles the data using an algorithm complementary to the encoder’s scrambling algorithm and outputs a
10-bit data stream at 27 Mwords/s.
– The embedded checksum is extracted by the receiver and compared with a new checksum produced from the received data;
any error is reported and an appropriate flag is added to the data stream.
SDI Receiver De-Serializes to Parallel
Unscrambles the data
49
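The scrambling and unscrambling referred to above use SDI's self-synchronising scrambler with generator polynomial x^9 + x^4 + 1, followed by NRZI coding (x + 1). A bit-level sketch of both directions:

```python
# Self-synchronizing scrambler (x^9 + x^4 + 1) followed by NRZI (x + 1),
# and the complementary descrambler, as on the SDI serial stream.

def scramble_nrzi(bits):
    reg, prev, out = 0, 0, []
    for b in bits:
        s = b ^ ((reg >> 8) & 1) ^ ((reg >> 3) & 1)  # x^9 + x^4 + 1
        reg = ((reg << 1) | s) & 0x1FF               # 9-bit shift register
        prev ^= s                                    # NRZI: transition on a 1
        out.append(prev)
    return out

def nrzi_descramble(bits):
    reg, prev, out = 0, 0, []
    for n in bits:
        s = n ^ prev                                 # NRZI decode: edge -> 1
        prev = n
        b = s ^ ((reg >> 8) & 1) ^ ((reg >> 3) & 1)  # inverse scrambler
        reg = ((reg << 1) | s) & 0x1FF               # register holds line bits
        out.append(b)
    return out

data = [1, 0, 1, 1, 0, 1, 0, 0] * 8
assert nrzi_descramble(scramble_nrzi(data)) == data
```

Because the descrambler's shift register is fed from the received line bits rather than its own output, it resynchronises within 9 bits of joining the stream, which is why no explicit scrambler reset has to be transmitted.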
50. – The 10-bit data is then demultiplexed into digital luma and chroma data streams,
converted to analog by three digital-to-analog converters, filtered to reconstruct the
discrete data levels back to smooth analog waveforms, and matrixed back to the original
R'G'B' for display.
Recovering Analog R'G'B' from Parallel Data
50
51. – ITU-R BT.601 is the sampling standard that evolved out of a joint SMPTE/EBU task force to
determine the parameters for digital component video for the 625/50 and 525/60 television
systems.
– This document specifies the sampling mechanism to be used for both 525 and 625 line signals. It
specifies orthogonal sampling at 13.5 MHz for analog luminance and 6.75 MHz for the two
analog color-difference signals.
– The sample values are digital luma Y' and digital color-difference C'b and C'r, which are scaled
versions of the analog gamma corrected B'-Y' and R'-Y'.
– 13.5 MHz was selected as the sampling frequency because the sub-multiple 2.25 MHz is a
factor common to both the 525 and 625 line systems.
601 Sampling
51
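The arithmetic behind the 13.5 MHz choice can be checked directly; the line rates below are the standard 625/50 and 525/59.94 values, and exact fractions are used to avoid rounding. This is an illustrative check, not text from the standard.

```python
from fractions import Fraction

# Why 13.5 MHz suits both systems: it yields an integer number of samples
# per total line for both the 625/50 and the 525/59.94 line rates.
FS_LUMA = Fraction(13_500_000)            # Hz, BT.601 luma sampling rate
LINE_RATE_625 = Fraction(15_625)          # Hz (625 lines x 25 frames/s)
LINE_RATE_525 = Fraction(4_500_000, 286)  # Hz, ~15734.27 (525 x ~29.97)

samples_625 = FS_LUMA / LINE_RATE_625     # 864 samples per total line
samples_525 = FS_LUMA / LINE_RATE_525     # 858 samples per total line
print(samples_625, samples_525)

# 2.25 MHz is the common sub-multiple: 13.5 MHz = 6 x 2.25 MHz
assert FS_LUMA % 2_250_000 == 0
```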
67. Header : 3FFh, 000h, 000h
− The “xyz” word is a 10-bit word with the two least significant bits set to zero
to survive an 8-bit signal path. Contained within the standard definition
“xyz” word are functions F, V, and H, which have the following values:
• Bit 8 – (F-bit): 0 for field one and 1 for field two
• Bit 7 – (V-bit): 1 in vertical blanking interval; 0 during active video lines
• Bit 6 – (H-bit): 1 indicates the EAV sequence; 0 indicates the SAV sequence
Timing Reference Signal (TRS) Codes
67
68. Notes:
1. The values shown are those recommended for 10–bit interfaces.
2. For compatibility with existing 8–bit interfaces, the values of bits D1 and D0 are not defined. These bits should be set to a fixed value and not left floating.
Timing Reference Signal (TRS) Codes
Protection bits for SAV and EAV
They provide double-error detection and single-error correction.
68
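A sketch of how the SD “xyz” word is assembled, with the Hamming protection bits computed from F, V, and H (P3 = V⊕H, P2 = F⊕H, P1 = F⊕V, P0 = F⊕V⊕H, per ITU-R BT.656); the two LSBs stay zero for 8-bit compatibility, as noted above.

```python
# Build the 10-bit SD "xyz" TRS word: bit 9 fixed at 1; bits 8..6 carry F,
# V, H; bits 5..2 are Hamming protection bits giving double-error detection
# and single-error correction; bits 1..0 are zero for 8-bit compatibility.
def xyz_word(f: int, v: int, h: int) -> int:
    p3, p2, p1, p0 = v ^ h, f ^ h, f ^ v, f ^ v ^ h
    return ((1 << 9) | (f << 8) | (v << 7) | (h << 6)
            | (p3 << 5) | (p2 << 4) | (p1 << 3) | (p0 << 2))

print(hex(xyz_word(0, 0, 1)))   # EAV, field 1, active video -> 0x274
print(hex(xyz_word(0, 0, 0)))   # SAV, field 1, active video -> 0x200
```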
74. Luminance (Y):
Number of luminance samples in each line: 864 (858)
Number of luminance samples in each active line: 720
Number of samples in each horizontal blanking: 144 (138)
R-Y (Cr):
Number of chrominance samples in each line: 432 (429)
Number of chrominance samples in each active line: 360
Number of chrominance samples in horizontal blanking: 72 (69)
B-Y (Cb):
Number of chrominance samples in each line: 432 (429)
Number of chrominance samples in each active line: 360
Number of chrominance samples in horizontal blanking: 72 (69)
Total number of samples in each line:
864+432+432=1728 (in NTSC 1716)
Total number of samples in each active line:
720+360+360=1440
Samples in PAL and NTSC (NTSC values in parentheses)
74
80. [Figure: the SDI raster across frames N and N+1 — active video for fields 1 and 2, the vertical
blanking interval (VBI), SAV and EAV words on every line, the vertical switch points, and the
HANC space carrying embedded audio.]
Ancillary (ANC) Data Space
80
81. − Room for eight AES 2-channel audio data streams
16 audio channels total
• Group 1 = Channels 1&2, 3&4
• Group 2 = Channels 5&6, 7&8
• Group 3 = Channels 9&10, 11&12
• Group 4 = Channels 13&14, 15&16
− Room for RP 165 EDH error detection signals
− Room for future data signals
− May be added or stripped by source/destination or external equipment
SDTV - ANC Data Types
81
83. – In the ideal situation
– The signal transitions align with the clock’s falling edges
– The sampling occurs at the clock’s rising edges
Data Sampling by Clock
83
– Each bit consists of two sections: the first section assumes a value that represents the bit value, and
the second section is always equal to a logical zero.
– In other words, every two symbols carrying information are separated by a redundant zero symbol.
RZ encoding is equivalent to an “AND” operation of the NRZ data with a clock of period Tb.
RZ data Encoding
84
– The “AND” operation in the time domain corresponds to convolution in the frequency domain:
the RZ spectrum is the NRZ data spectrum convolved with the periodic square clock spectrum.
– BW(RZ) = 2 × BW(NRZ), but RZ has energy at the bit rate:
– In contrast to NRZ data, RZ waveforms exhibit a spectral line at a frequency equal to the
bit rate, thereby simplifying the task of clock recovery.
RZ data Encoding
85
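The “AND” view of RZ encoding can be sketched in a few lines (illustrative only): each NRZ bit is held for the first half-symbol and forced to zero for the second.

```python
# RZ as "NRZ AND clock": split each bit period Tb into two half-symbols;
# the first carries the bit value, the second is always 0.
def rz_encode(bits):
    out = []
    for b in bits:
        out.extend([b, 0])   # AND of the held NRZ level with a 50% clock
    return out

print(rz_encode([1, 0, 1, 1]))   # -> [1, 0, 0, 0, 1, 0, 1, 0]
```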
88. Advantages
– Easy line-failure detection (like NRZ encoding).
– Polarity insensitivity. In NRZ encoding, high is a “1” and low is a “0,” so NRZ is polarity sensitive.
• For a transmission system it is convenient not to require a particular polarity of the signal at the receiver.
• In NRZI, a data transition represents each “1” and there is no transition for a data “0.” The result is that only
transitions need to be detected, so either polarity of the signal may be used.
– A signal of all “1”s, after encoding to NRZI, results in a square wave at one-half the clock frequency.
Disadvantage
– NRZI is not a DC-free code.
– If there are many consecutive “0”s, clock recovery may be impaired; in effect the PLL oscillator
runs “free” for excessively long periods because it has nothing to lock on to.
NRZI Data Encoding
88
89. Scrambling = making the bit stream look like noise
– Long strings of 0s after NRZI encoding can cause problems in clock recovery.
– Scrambling mixes the bits to overcome this problem; as a result, the DC term and the low-frequency content of the signal are removed.
[Figure: scrambler block diagram — shift register with generator polynomial x⁹ + x⁴ + 1, followed by NRZI coding (x + 1).]
Scrambling
89
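The scrambler and its complementary descrambler can be sketched as shift-register operations; the tap positions follow the x⁹ + x⁴ + 1 generator shown above, with NRZI as the final stage. This is a sketch with register details simplified from SMPTE 259M, not a spec-exact implementation.

```python
# Self-synchronizing scrambler (x^9 + x^4 + 1) plus NRZI (x + 1), and the
# complementary receiver operations. The descrambler shifts in the received
# (scrambled) bits, so both registers stay in step and the data round-trips.
def scramble(bits, state=0):
    out = []
    for d in bits:
        s = d ^ ((state >> 3) & 1) ^ ((state >> 8) & 1)  # taps: x^4, x^9
        out.append(s)
        state = ((state << 1) | s) & 0x1FF               # 9-bit register
    return out

def descramble(bits, state=0):
    out = []
    for s in bits:
        out.append(s ^ ((state >> 3) & 1) ^ ((state >> 8) & 1))
        state = ((state << 1) | s) & 0x1FF               # shift in received bit
    return out

def nrzi_encode(bits, level=0):
    out = []
    for b in bits:
        level ^= b            # a "1" toggles the line, a "0" holds it
        out.append(level)
    return out

def nrzi_decode(levels, prev=0):
    out = []
    for v in levels:
        out.append(v ^ prev)  # any transition decodes as "1"
        prev = v
    return out

data = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1] * 4
recovered = descramble(nrzi_decode(nrzi_encode(scramble(data))))
assert recovered == data      # lossless round trip
```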
91. − Scrambling can also produce long runs of 1s or 0s.
− This happens infrequently with video.
− It happens occasionally with EAV and SAV, which contain 2 or 3 words of all 0s.
− It can be encouraged to happen with certain input signals.
− Scrambler concern: specific patterns of 1s and 0s at the input can create strings of 0s at the output.
Scrambled NRZI Problem
91
92. − A Cyclic Redundancy Check (CRC) can be used to provide information to the operator or even sound
an external alarm if the data does not arrive intact.
− Digital monitors may provide both a display of CRC values and an alarm on any CRC errors.
− A unique CRC pair is present in each video line with a separate value for chroma and luma
components in HD, and may be optionally inserted into each field in SD.
− A CRC is calculated and inserted into the data signal for comparison with a newly calculated CRC at
the receiving end.
CRC (Cyclic Redundancy Check) Error Testing
92
93. The CRC for SD is inserted into the vertical interval, after the switch point.
− SMPTE RP165 defines the optional method for the detection and handling of data errors in SD.
− Full Field and Active Picture data are separately checked and a 16-bit CRC word generated once
per field.
• The Full Field (FF) check covers all data transmitted except in lines reserved for vertical interval
switching (lines 9-11 in 525, or lines 5-7 in 625 line standards).
• The Active Picture (AP) check covers only the active video data words, between but not
including SAV and EAV. Half-lines of active video are not included in the AP check.
The CRC for HD (SMPTE 292M and in SMPTE 425) is inserted following the EAV and line number words
− The CRC checking is performed on a line-by-line basis.
− The user can then monitor the number of errors they have received along the transmission path.
CRC (Cyclic Redundancy Check) Error Testing
93
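The CRC principle described above can be sketched as follows. The polynomial x¹⁸ + x⁵ + x⁴ + 1 is the one used for the HD-SDI line CRC; the bit ordering and seeding below are simplified, so treat this as a sketch of the send/compare mechanism rather than a spec-exact implementation. The word values are made up for illustration.

```python
# Sender computes a CRC over the data words; the receiver recomputes it and
# compares. Any single-bit error changes the CRC and is reported.
POLY_LOW = (1 << 5) | (1 << 4) | 1      # x^18 term is implicit in the shift

def crc18(words, width=10):
    crc = 0
    for w in words:
        for i in reversed(range(width)):          # MSB-first, simplified
            bit = (w >> i) & 1
            top = (crc >> 17) & 1
            crc = (crc << 1) & 0x3FFFF
            if bit ^ top:
                crc ^= POLY_LOW
    return crc

line_words = [0x2AC, 0x3B0, 0x123, 0x040]         # hypothetical 10-bit samples
tx_crc = crc18(line_words)                        # inserted after EAV in HD
assert crc18(line_words) == tx_crc                # intact line: CRCs match

corrupted = line_words.copy()
corrupted[2] ^= 0x010                             # single-bit transmission error
assert crc18(corrupted) != tx_crc                 # mismatch flags the error
```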
96. Quality of an SDI Signal Input
− The quality of a serial digital input is characterized by the following criteria:
− Design and dynamics of the input circuitry (impedance matching, frequency response and phase
linearity)
− Cable equalizer design (equalizing circuit matching cable loss characteristics)
− Robust and jitter resilient clock recovery
− For example, doubling the data rate from 1.5 Gbit/s to 3 Gbit/s has increased the difficulty of building
workable equipment and systems:
− At 3 Gbit/s, cable losses increase by 40%, connector discontinuities become twice as significant,
the signal bandwidth doubles, the crosstalk potential increases, and amplifier gain is harder to
achieve at the higher bandwidth.
96
97. Measurements in the physical domain of the SDI signal:
– Eye pattern
– Signal level
– Signal ripple
– Low–frequency signal level distortion
– DC offset
– Overshoot
– Rise and fall times
– Impedance
– Return loss
– Jitter.
Measurements in the data domain of the SDI signal:
– Number of active bits (eight or ten)
– Digital signal level
– Check for forbidden digital values
– Timing reference signal (TRS) verification
– Rise and fall times
– Bit–error ratio
– Cyclic redundancy check (CRC)
– Luminance/chrominance delay
– Picture position relative to the TRS
– Ancillary data verification
(including type and length identification)
– Color gamut verification
– Propagation delay in SDI equipment
SDI Signal Measurements
97
98. I. Jitter testing
II. Out-of-service testing (SDI check field)
III. In-service testing (CRC :Cyclic Redundancy Check)
IV. Eye-pattern testing
V. Cable-length stress testing
Digital System Testing
98
99. Unit Interval =1/Clock Frequency
Component Digital (SD-SDI)
1UI = 1/270MHz = 3.70 ns
Component Digital (HD-SDI)
1UI = 1/1.485GHz = 0.673 ns
Component Digital (3G-SDI)
1UI = 1/2.97 GHz = 0.336 ns
AES/EBU Digital Audio
1UI = 1/6.144MHz = 163 ns
Unit Interval (UI)
99
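The unit-interval values above follow directly from the clock rates; a quick check:

```python
# Unit interval = 1 / serial clock rate; values match the table above.
def unit_interval_ns(clock_hz: float) -> float:
    return 1e9 / clock_hz

for name, clk in [("SD-SDI", 270e6), ("HD-SDI", 1.485e9),
                  ("3G-SDI", 2.97e9), ("AES/EBU", 6.144e6)]:
    print(f"{name}: 1 UI = {unit_interval_ns(clk):.3f} ns")
```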
100. SDI signal with sinusoidal edge variation
(ideal positions shown in darker lines).
− Jitter is a significant, undesired issue with any communication link and can be defined as the temporal
deviation from a periodic signal with ideal duty cycle. This is known as Time Interval Error (TIE).
Jitter
Jitter Frequency and Jitter Amplitude
100
101. – In actual SDI signals, jitter will rarely have the simple sine wave characteristics. The jitter spectrum in
actual SDI signals generally contains a range of spectral components.
– In real systems, a wide variety of factors influence the timing of signal transitions. These different
sources introduce variations over a range of “frequencies and amplitudes”.
– The peak amounts that any particular edge leads or lags its ideal position may differ and there may
be long time intervals between edges with large peak-to-peak variation.
– Jitter waveform: the amount of variation in a signal’s transitions as a function of time
– Jitter spectrum: the frequency-domain representation of the time-domain jitter waveform
– In actual signals, the jitter waveform typically has a complex shape created by the combined effects
of various sources, and the jitter spectrum contains a wide range of spectral components at different
frequencies and amplitudes.
Jitter
Jitter Frequency and Jitter Amplitude
101
102. – The significant instants are the exact
moments when the transitioning signal
crosses a chosen amplitude threshold, which
may be referred to as the reference level or
decision threshold.
– Phase and amplitude jitter influence digital
systems and can cause bit errors.
Phase Jitter and Amplitude Jitter
102
108. Measure jitter spectrum as shown with a Real-Time Scope or TIA and subtract RJ background spectrum
Total Jitter (TJ) decomposes into:
– Random Jitter (RJ)
– Deterministic Jitter (DJ), which includes:
• Data-Dependent Jitter (DDJ): Inter-Symbol Interference (ISI) and Duty-Cycle Distortion (DCD)
• Periodic Jitter (PJ or SJ)
• Bounded Uncorrelated Jitter (crosstalk)
• Echo Jitter (ECJ)
Random Jitter (RJ) and Deterministic Jitter (DJ)
108
109. − The key to reducing jitter is to reduce deterministic jitter.
− Minimizing deterministic jitter brings the separated left and right RJ distributions together so that
the total distribution approaches a single ideal normal distribution.
Random Jitter (RJ) and Deterministic Jitter (DJ)
109
110. Random Jitter
− It has essentially no discernable pattern.
− It is best characterized by a Gaussian probability
distribution and statistical properties like mean and
variance.
− Random processes, e.g., thermal or shot noise, introduce
random jitter into an SDI signal.
− We typically use a Gaussian probability distribution to
model this jitter behavior, and we can use the standard
deviation of this distribution (equivalent to the RMS value)
as a measure of the jitter amplitude.
− However, the peak-to-peak jitter amplitude and the RMS
jitter amplitude are not the same.
Random Jitter (RJ) and Deterministic Jitter (DJ)
110
111. Random Jitter
− In particular, the peak-to-peak amplitude value depends on the observation time.
− On average, we would expect that a peak-to-peak amplitude measurement made over a long
observation time would have a larger value than a peak-to-peak amplitude measurement
made over a short observation time.
− The “tails” of a Gaussian distribution can reach arbitrarily large amplitudes. Hence, by observing
over a sufficiently large time interval, we could theoretically measure arbitrarily large peak-to-
peak jitter amplitude.
− We describe this property by saying that random jitter has “unbounded” peak-to-peak
amplitude.
− Thus, we can say that over any region of interest, random jitter in actual SDI signals has
unbounded peak-to-peak amplitude.
Random Jitter (RJ) and Deterministic Jitter (DJ)
111
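The unbounded peak-to-peak behavior can be illustrated with simulated Gaussian jitter (a sketch with made-up numbers, not a measurement): the RMS stays near the distribution’s standard deviation, while the peak-to-peak value keeps growing with the observation length because the Gaussian tails are unbounded.

```python
# RMS of Gaussian (random) jitter is stable, but its peak-to-peak value
# depends on observation time. TIE values are in arbitrary units.
import random
import statistics

random.seed(42)                                   # deterministic sketch
tie = [random.gauss(0.0, 1.0) for _ in range(100_000)]

short, full = tie[:100], tie                      # short vs long observation
rms_short = statistics.pstdev(short)
rms_full = statistics.pstdev(full)
pp_short = max(short) - min(short)
pp_full = max(full) - min(full)

print(f"RMS:  short {rms_short:.2f}  full {rms_full:.2f}")   # both near 1.0
print(f"p-p:  short {pp_short:.2f}  full {pp_full:.2f}")     # full run larger
```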
112. Deterministic Jitter
− It is more predictable (determinable) and is often characterized by some definable periodic or
repeatable pattern with a determinable peak-to-peak extent.
− A wide range of sources can introduce deterministic jitter into an SDI signal.
− For example:
• Noise in a switching power supply can introduce periodic deterministic jitter.
• The frequency response of cables or devices can introduce data-dependent jitter that is
correlated to the bit sequence in the SDI signal.
• Differences in the rise and fall times of transition can introduce duty-cycle dependent jitter.
Random Jitter (RJ) and Deterministic Jitter (DJ)
112
113. Deterministic Jitter
− In addition to these general sources of deterministic jitter, SDI signals can contain deterministic
jitter correlated with video properties.
− For example:
• The line and field structure of video data can introduce a periodic deterministic jitter that we will
call raster dependent jitter.
• Converting the 10-bit words used in digital video to and from a serial bit sequence can introduce
high frequency data-dependent jitter at 1/10 the clock rate, typically called word-correlated
jitter.
Random Jitter (RJ) and Deterministic Jitter (DJ)
113
114. Deterministic Jitter
− Deterministic jitter attains some maximum peak-to-peak amplitude within a determinable time
interval.
− Increasing the observation time beyond this time interval will not increase the peak-to-peak
jitter amplitude measurement.
− Unlike random jitter, repeatable deterministic jitter has a determinable upper bound on its peak-
to-peak jitter amplitude.
− Even if deterministic jitter has infrequent long-term determinable behavior, this jitter can be
adequately modeled with a predictable pattern that has bounded peak-to-peak amplitude.
Thus, for all practical purposes, deterministic jitter has bounded peak-to-peak amplitude and
random jitter has unbounded peak-to-peak jitter amplitude.
Random Jitter (RJ) and Deterministic Jitter (DJ)
114
122. – Measurement instruments create Eye diagrams by
superimposing short segments of the serial data signal.
– The finite rise and fall times of these transitions create the
characteristic ‘X’ patterns in the Eye diagram.
– Since the data transport stream contains components
that change between high and low at rates of 270 Mb/s
for SD, up to 1.485 Gb/s for some HD formats, the ones
and zeros will be overlaid for display on a video
waveform monitor.
– This is an advantage since we can now see the
cumulative data over many words, to determine any
errors or distortions that might intrude on the eye opening
and make recovery of the data high or low by the
receiver difficult.
Eye Pattern
122
124. Eye Pattern
− To make the Eye diagram, the instrument aligns the segments using a reference clock
signal.
− Typically this reference clock is extracted from the data signal, but may be a separate
reference clock signal (It can be externally supplied, e.g., through the trigger input on an
oscilloscope, or extracted within the measurement instrument).
− If the transitions in the input signal align with the edges in this reference clock they will lie
on top of each other in the Eye diagram.
− Any transitions that vary from the nominal positions determined by this reference clock will
appear in different locations.
124
125. Eye Pattern
− The time interval between the crossover points in the Eye equals the unit interval.
− In the ideal case, the decoding process samples the signal at the mid-point between the crossover
points and the decision threshold or decision level corresponds to the widest part of the Eye opening.
125
126. Eye Pattern
− For signals with a small amount of jitter, the edges in the aligned segments occur in nearly the same
location.
− As the amplitude of the jitter increases, more transitions move into the open space between
crossover points, i.e. the Eye starts to “close”.
− Overall, a signal that forms a large, wide-open Eye is less likely to produce decoding errors than a
signal that forms a small or closed Eye.
126
127. – Jitter within the SDI signal will change the time when a transition occurs and cause a widening of the
overall transition point.
– This jitter can cause a narrowing or closing of the eye display and make the determination of the
decision threshold more difficult.
Eye Pattern
127
128. – To extract the digital content from an SDI signal, video equipment samples the SDI signal at the midpoint
of the time intervals containing data bits and converts these sampled levels to the corresponding bit
values.
– To determine whether a sampled signal voltage corresponds to a “high” or “low” signal level, decoders
compare the sampled voltage against a particular voltage level called the decision threshold or decision
level.
– SDI receivers generally use fixed decision thresholds in the decoding process.
– For optimal performance, signal levels must keep the same relative relationship to this fixed voltage level.
– A shift in the signal relative to the decision threshold reduces the noise margin for one of the signal levels,
which can lead to decoding errors.
Eye Pattern
128
129. − Real SDI signals have some amount of jitter in their edges.
− Jitter of sufficiently large amplitude will cause sampling errors.
− The impact of incorrect sampling, if jitter is greater than 0.5 UI, is incorrect data.
Decoding Errors
129
130. – In the decoding process, SDI receivers use
a reference clock to determine when to
sample the input SDI signal.
– Ideally, the transitions in the input SDI
signal occur at appropriate clock edges
and sampling occurs at the midpoint of
the unit interval.
– In the ideal situation, the signal transitions
align with the clock’s falling edges and
sampling occurs at the clock’s rising
edges.
Decoding Errors
130
131. [Figure: eye diagram showing the ideal sampling point between the “0” and “1” levels; the sampling position determines the BER (bit error rate).]
Decoding Errors and Ideal Sampling Position
131
132. [Figure: deviations from the ideal sampling position and ideal reference point — timing skew of the sampling position, jitter, and voltage offset.]
Decoding Errors and Ideal Sampling Position
132
133. – To generate the clock waveform we employ
a VCO, and to define its frequency and
phase we phase-lock the VCO to the input
data using a DFF operating as a phase
detector (PD).
– The low-pass filter (LPF) suppresses ripple on
the oscillator control line.
– Also, to retime the data, we add another DFF
that is clocked by the VCO output.
– The recovered clock, CKout, drives the D input
of the phase detector and the clock input of
the retimer. a) The role of a CDR circuit in retiming data
b) An example of CDR implementation.
Clock and Data Recovery (CDR)
Phase Detector (PD)
133
134. − Low-frequency jitter below the clock-recovery bandwidth → the clock will follow timing
variations in the input signal.
Jitter Tracking in Clock Recovery
• The signal transitions align with the clock’s falling edges.
• The sampling occurs at the clock’s rising edges.
134
135. Small Jitter Amplitude
− High Frequency Jitter → The clock can not follow timing variations in the input signal
− Small Jitter Amplitude → Correct samples
Jitter Tracking in Clock Recovery
135
136. Large Jitter Amplitude
Jitter Tracking in Clock Recovery
− High Frequency Jitter → The clock can not follow timing variations in the input signal
− Large Jitter Amplitude → Incorrect samples
136
138. – The recovered clock will follow timing variations in the input signal that fall within the bandwidth of the
clock recovery process.
– The timing variations in the SDI signal introduce variations in the transitions of the recovered clock.
– So, actual reference clocks used in decoding are not jitter free.
Jitter in Clock and Decoding errors
Jitter frequency is within the bandwidth of the clock recovery process.
138
139. – The recovered clock does not track variations in signal transitions if the frequency of the variation lies
above the bandwidth of the clock extraction/recovery process.
– At these higher frequencies, the position of signal transitions can vary relative to the edges of the
recovered clock and these variations can create decoding errors (if jitter amplitude was bigger than
specified value).
Jitter frequency is above the bandwidth of the clock recovery process.
If jitter amplitude in input data was bigger than specified value then, create decoding errors
Jitter in Clock and Decoding errors
139
140. – The recovered signal can be as perfect as the original if the data is detected with a jitter-free clock.
– In a communications system with forward error correction, accurate data recovery can be made with
the eye nearly closed.
– As noise and jitter in the signal increase through the transmission channel, certainly the best decision
point is in the center of the eye.
Data Recovery
140
141. – Jitter refers to short-term time interval error, i.e. spectral components above some low-frequency
threshold (for timing jitter, > 10 Hz).
(Difficult for the receiver to adjust to, so it can be a problem)
– Wander refers to long-term time interval error (≤ 10 Hz).
(Receivers can generally track these long-term variations, so it is normally not a problem)
– The recovered clock will generally track spectral components below the clock recovery bandwidth,
but will not track spectral components above this bandwidth.
– Hence, the impact of jitter on decoding SDI signal depends on both the jitter’s amplitude and its
frequency components. This has led to a frequency-based classification of jitter.
Jitter and Wander
141
142. Timing Jitter:
− The changes related to an ideal
time reference.
− It is preferable to use the original
reference clock, but it is not
usually available, so a heavily
averaged oscillator in the
measurement instrument can be
used.
Alignment Jitter/Relative Jitter:
− The changes related to a
reference derived from the signal
itself.
Timing and Alignment Jitter Measurement
The receiver PLL cannot track this jitter
(PLL out of Band Jitter)
142
143. “Bandwidths of the clock recovery”
The receiver PLL can track this jitter
The receiver PLL cannot track this jitter
Timing and Alignment Jitter
143
Timing Jitter is used to determine general health of system
Alignment jitter can cause system problems
144. – The variation in time of the significant instants (such as zero crossings) of a digital signal relative to a clock with no
jitter.
– The standards set threshold at 10 Hz and refer to spectral components above this frequency as timing jitter.
Measurement based on very stable clock in test device
(It is preferable to use the original reference clock, but it is not usually available, so a heavily averaged oscillator in
the measurement instrument can be used)
Used to determine general health of system
Timing Jitter
In SDTV (SD-SDI): f1 = 10 Hz, f3 = 1 kHz, f4 = PLL upper limit (27 kHz)
144
145. – The variation in time of the significant instants (such as zero crossings) of a digital signal relative to a hypothetical
clock recovered from the signal itself.
– Alignment jitter refers to components in the jitter spectrum above a specified frequency threshold related to typical
bandwidths of the clock recovery processes (1kHz (SD), 100kHz (HD)).
– Alignment jitter shows signal-to-latch clock timing margin degradation.
Measured based on clock recovered from the signal itself
Indicates jitter that can cause system problems (Jitter that the receiver PLL cannot track it)
Bandwidths of the
clock recovery
Alignment Jitter or Relative Jitter
In SDTV (SD-SDI): f1 = 10 Hz, f3 = 1 kHz, f4 = PLL upper limit (27 kHz)
The digital systems will work beyond alignment jitter specification, but will fail at some point.
145
146. Timing and Alignment Jitter Measurement
HD-SDI
– Since video equipment can track wander
and low frequency timing jitter, these spectral
components often have less impact on signal
decoding.
– The digital systems will work beyond
alignment jitter specification, but will fail at
some point.
– In general, video equipment does not track
alignment jitter, though some equipment may
track some low frequency alignment jitter.
– High amplitude alignment jitter generally
introduces decoding errors.
[Figure: HD-SDI jitter spectrum — timing jitter below the clock-recovery bandwidth, alignment jitter above it; the receiver PLL cannot track jitter above this bandwidth.]
146
147. Eye Pattern Measurement Specification
– For SD-SDI signals, SMPTE 259M specifies that measurements of source output signal characteristics shall be
made across a resistive load connected by a “short coaxial cable.”
– For HD-SDI signals, SMPTE 292M specifies a “1-m coaxial cable.”
– Hence, the standards only specify jitter performance near the source output as measured over a short
cable.
147
148. Eye Pattern Measurement Specification
– For SDI signal receivers, the standards place some requirements on the SDI inputs, including impedance and
return loss.
– They do not, however, define any performance limits on the jitter input tolerance of an SDI receiver.
– Also, the standards do not define performance limits on jitter transfer in system elements.
148
155. – Since video equipment can track wander and low
frequency timing jitter, these spectral components often
have less impact on signal decoding.
– Low-frequency variations can have a significant impact in other areas: e.g., digital-to-analog
conversion stages use this recovered clock, or a sub-multiple of this clock.
– Since this clock tracks the low frequency jitter in the input
SDI signal, its edges vary from their ideal positions. This
jitter in the clock signal can introduce errors, e.g., non-
linearity in D-to-A conversion.
Wander and Low Frequency Timing Jitter Effect on Clock
155
156. Recall of Linearity in Video ADC
If INL is large, distortion and color unevenness occur in gradated areas of the image. If DNL is large, vertical noise occurs in gradated areas of the image.
156
157. − Has been a problem at D/A conversion points
− Must be able to see analog version of signal
Wander and Low Frequency Timing Jitter Effect on Clock
157
Jitter Display with 10 Hz Filter (0.2 UI) | Jitter Display with 100 Hz Filter (0.12 UI)
Jitter Display with 1 kHz Filter (0.12 UI) | Jitter Display with 100 kHz Filter (0.07 UI)
Jitter Display with Different Filter Selections
158
159. − There are several ways in which jitter may be measured on a single waveform.
− It is important to understand how these measurements relate to each other and what they reveal.
Period Jitter (JPER)
• Time difference between measured period and ideal period (rising edge to next adjacent rising
edge)
Cycle to Cycle Jitter (JCC)
• Time difference between two adjacent clock periods
• Important for budgeting on-chip digital circuits cycle time
• It shows the instantaneous dynamics a clock recovery PLL might be subjected to.
Time Interval Error (TIE) (or Accumulated Jitter (JAC))
• Time difference between the measured clock and an ideal trigger clock
• The jitter measurement most relevant to high-speed link systems
Jitter measurements
159
160. – The period jitter (P1, P2 and P3) measures the period of each clock cycle.
– The cycle-cycle jitter (C2 and C3) measures how much the clock period changes between any two
adjacent cycles.
• It can be found by applying a first-order difference operation to the period jitter. (The ideal edge
locations of the reference clock are not required for the above measurements.)
– The TIE measures how far each active edge of the clock varies from its ideal position (the ideal edges
must be known or estimated).
Jitter measurements
160
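The three measures can be sketched from a list of measured edge times (the values here are invented for illustration): period jitter compares each period to the ideal, cycle-to-cycle jitter is the first difference of the periods, and TIE compares each edge to its ideal position.

```python
# Period jitter, cycle-to-cycle jitter, and TIE from measured edge times.
T_IDEAL = 10.0                            # ns, ideal clock period
edges = [0.0, 10.2, 19.9, 30.3, 40.0]     # made-up measured rising edges, ns

periods = [b - a for a, b in zip(edges, edges[1:])]
period_jitter = [p - T_IDEAL for p in periods]             # vs ideal period
c2c_jitter = [b - a for a, b in zip(periods, periods[1:])] # first difference
tie = [t - i * T_IDEAL for i, t in enumerate(edges)]       # vs ideal edges

print(period_jitter)   # approximately [0.2, -0.3, 0.4, -0.3]
print(c2c_jitter)      # approximately [-0.5, 0.7, -0.7]
print(tie)             # approximately [0.0, 0.2, -0.1, 0.3, 0.0]
```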
162. Jitter Tolerance refers to how much jitter, as a function of the jitter frequency, can be tolerated by a
system.
– All receiving equipment needs to be tolerant to a specified level of jitter at different frequencies.
– Test and measurement equipment should be tolerant to a higher level of jitter.
Rx Jitter Tolerance
f2 = f3/(A1/A2)
f4 = 148.5 MHz (i.e., 1/10th of the clock rate)
A1 = 8 UI (168 ps), timing jitter
A2 = 0.3 UI (25.2 ps), alignment jitter
162
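The corner frequency f2 follows from the relation above, f2 = f3/(A1/A2); plugging in the quoted values reproduces the 3.75 kHz breakpoint shown on the tolerance template.

```python
# Jitter tolerance template corner: f2 = f3 / (A1 / A2), using the values
# quoted on the slide.
f3 = 100e3   # Hz, clock-recovery corner frequency
A1 = 8.0     # UI, timing-jitter tolerance floor
A2 = 0.3     # UI, alignment-jitter tolerance floor

f2 = f3 / (A1 / A2)
print(f"f2 = {f2:.0f} Hz")   # the 3.75 kHz corner in the template
```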
163. Jitter Tolerance refers to how much jitter, as a function of the jitter frequency, can be tolerated by a
system.
– All receiving equipment needs to be tolerant to a specified level of jitter at different frequencies.
– Test and measurement equipment should be tolerant to a higher level of jitter.
Rx Jitter Tolerance
Timing Jitter
Alignment Jitter
10 Hz | 3.75 kHz | 100 kHz | 1.2 GHz
SMPTE RP 184 and RP 192 jitter tolerance template (SMPTE ST 2082-1 for 12G-SDI interfaces)
f2 = f3/(A1/A2)
f4 = 148.5 MHz (i.e., 1/10th of the clock rate)
A1 = 8 UI (168 ps), timing jitter
A2 = 0.3 UI (25.2 ps), alignment jitter
163
164. Rx Jitter Tolerance
– A receiver with low jitter input tolerance can generate errors in decoding a signal that forms a wide-
open Eye diagram
– A receiver with high jitter input tolerance may correctly decode a signal that forms a closed Eye
diagram.
164
165. Jitter Transfer refers to the jitter on the output of equipment that is the result of jitter applied on
the equipment’s input.
– The jitter transfer function is the ratio of output jitter to applied input jitter as a function of
frequency.
Jitter Transfer
Jitter transfer (dB) = 10 log₁₀ (Output Jitter / Input Jitter)
165
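Expressed in dB, the transfer ratio can be computed directly. Note that some references use 20 log₁₀ for amplitude ratios; the 10 log₁₀ form here follows the slide. The example values are invented for illustration.

```python
# Jitter transfer in dB: 0 dB means output jitter equals input jitter;
# negative values mean attenuation, positive values mean jitter peaking.
import math

def jitter_transfer_db(output_jitter_ui: float, input_jitter_ui: float) -> float:
    return 10 * math.log10(output_jitter_ui / input_jitter_ui)

print(jitter_transfer_db(0.2, 0.2))   # 0.0 dB: jitter passed unchanged
print(jitter_transfer_db(0.1, 0.2))   # about -3 dB: jitter attenuated
```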
166. – Some video equipment, e.g., a distribution amplifier, produces an SDI output from an SDI signal applied
at an input.
– This kind of jitter can also be the result of jitter from an analogue locking reference such as black & burst.
– Typically, jitter in the input SDI signal does not directly translate to jitter in the corresponding output.
– In particular, clock recovery
• can filter out high frequency jitter
• may amplify some jitter in the input signal
Jitter Transfer
[Figure: a digital video distribution amplifier (DVDA) with an input SDI signal, an analog reference, and an output SDI signal.]
166
167. – Jitter Transfer can be measured practically by generating a known level of jitter to the input of the
device under test, for each frequency from f1 to f3 and then measuring the jitter at the output of the
device under test.
– A non-stressing test pattern, such as color bars, should be used to avoid introducing data dependent
jitter into the measurements.
– The device under test and the jitter generator should not be connected to any external locking
reference as this may introduce Output Jitter.
Jitter Transfer Measurement
167
168. Output Jitter
It is the total jitter measured on the output of the equipment itself and includes:
I. Intrinsic Jitter introduced by the equipment’s own output circuitry
II. Jitter Transfer inherited from a signal connected to the equipment’s input.
– A device’s Output Jitter can be measured by directly monitoring the output of the device using a
jitter analyzer.
– The device’s Intrinsic Jitter can be measured when there is no input or locking reference connected.
Intrinsic Jitter and Output Jitter
168
169. – In electronics, slew rate is defined as the change of
voltage or current, or any other electrical quantity, per
unit of time.
– A slow slew rate increases the level of jitter (shown by the thicker trace of the Eye pattern display)
and therefore increases the risk of bit errors due to a narrower data sample area.
– In practice, the higher the signal slew rate, the larger the sample area, which reduces the risk of
bit errors.
Fast Slew Rate
Slow Slew Rate
Slew Rate and Jitter
169
170. − ITU-R BT.1120-7 requires the transmission loss to be ≤ 30 dB at ½ clock frequency for 3 Gbit/s
operation.
− The current generation cable equalizers are capable of 35 to 40 dB gain at this frequency.
− Some losses
• cable loss (has a √f characteristic)
• losses through connectors
• losses through patch panels
• etc.
− By planning installations with regard to possible losses, so that the overall link loss does not exceed the recommended value of 30 dB at 3 Gbit/s, there should be sufficient safety margin to ensure reliable operation.
Cable Length & Equalization
170
171. − Coaxial cable has signal losses that increase with frequency, much like a low pass filter.
− Some of the losses are due to
1. the resistance of the wire
2. skin effect
− while other losses are caused by
3. dielectric absorption in the insulation.
− The loss curve is approximated by the following formula where L is the loss in dB per unit of cable
length:
− where “f” is the frequency in MHz and A, C and D are constants that depend upon the type of cable and unit length: D is the resistive (skin-effect) loss constant and C is the dielectric loss constant
(MIL-C-17, Attenuation and Power Handling in coaxial cables).
Cable Length & Equalization
𝑳 = 𝑨 + 𝑪𝒇 + 𝑫√𝒇
171
172. − D is the resistive loss constant, C is the dielectric loss constant, and f is the frequency in MHz.
− In many cases, the A and C components are ignored, resulting in the common approximation of cable losses being proportional to √𝒇.
− This means, if the frequency is multiplied by four, the attenuation of the cable, expressed in dB,
doubles.
− This can be a good rule of thumb, but the other terms are still a part of the losses and may be
important.
Cable Length & Equalization
𝑳 = 𝑨 + 𝑪𝒇 + 𝑫√𝒇
𝑳 ∝ √𝒇
172
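The approximation above can be checked numerically. The sketch below evaluates L = A + C·f + D·√f; the constants are made-up placeholders for illustration, not data for any real cable.

```python
import math

def cable_loss_db(f_mhz, a, c, d):
    """Approximate coax loss in dB per unit length:
    A (constant) + C*f (dielectric) + D*sqrt(f) (resistive / skin effect).
    The constants passed in below are illustrative placeholders."""
    return a + c * f_mhz + d * math.sqrt(f_mhz)

# Rule of thumb: with A and C ignored, multiplying the frequency by four
# doubles the attenuation in dB (L proportional to sqrt(f)).
loss_750 = cable_loss_db(750.0, a=0.0, c=0.0, d=0.9)    # d is a placeholder
loss_3000 = cable_loss_db(3000.0, a=0.0, c=0.0, d=0.9)  # loss_3000 == 2 * loss_750
```

At high data rates the neglected C·f dielectric term grows linearly with frequency, which is why the rule of thumb eventually breaks down.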
173. − To enable proper recovery of the serial data, a system (from the transmitter to detector),
must have a frequency response that is nearly flat to at least “half the clock rate”.
− It is also important that the roll-off be reasonably gentle out to three times the clock rate.
Cable Length & Equalization
[Figure: SDI transmitter → cable → SDI receiver; the system frequency response is nearly flat to 𝒇𝒄𝒍𝒐𝒄𝒌/𝟐 with a reasonably gentle roll-off out to 𝟑𝒇𝒄𝒍𝒐𝒄𝒌]
173
174. − Belden 1694A cable, for example, is specified
to have a frequency dependent insertion loss
of 26 dB per 100 m at 1.5 GHz.
− This amount of loss requires equalization in
order for the cable to work successfully for
serial digital video.
− Since the length of a cable is not always
predictable, adaptive equalizers have
become a practical solution (automatically
adjust the equalization to match the
apparent cable losses)
Cable Length & Equalization
174
175. SMPTE ST 292 and SMPTE ST 424
− For example, the maximum cable lengths listed by Belden are for a 20dB equaliser in the
receiver.
− In real systems, this distance will increase or decrease depending upon the receiver
characteristics.
− If a better receiver is available, the maximum cable lengths will be greater.
Cable Length & Equalization
[Figure: SDI transmitter → cable → SDI receiver]
175
176. In the receiver, energy at half-clock frequency is sensed to apply an appropriate analog equalization to the incoming data signal.
Equalizer (Signal Recovery) in SDI Signal
Eye display with closed eye Equalized eye display of same signal
176
After equalization, the edge energy of the equalized signal is kept at a constant level which is representative of the original edge energy at the transmitter.
177. In the receiver, energy at half-clock frequency is sensed to apply an appropriate analog equalization to the incoming data signal.
Equalizer (Signal Recovery) in SDI Signal
177
178. − Adding lengths of cable results in attenuation of the amplitude, and frequency-dependent losses along the cable produce longer rise and fall times of the signal (the eye opening closes).
− However, this signal can still be decoded correctly because the equalizer, an adaptive filter with low jitter and low power consumption, is able to recover the data stream.
Equalizer (Signal Recovery) in SDI Signal
178
179. Intersymbol interference (ISI)
− Consider a 1 UI output pulse applied to a buffer:
If rise/fall time >> 1 UI, then the output pulse is attenuated and pulse width decreases.
[Figure: a 1 UI input pulse emerges from the band-limited buffer attenuated and narrower than 1 UI]
Equalizer (Signal Recovery) in SDI Signal
179
180. Consider 2 different bit sequences: 0 0 1 and 1 0 1
t = ISI
Steady-state not reached
at end of 2nd bit
2 output sequences superimposed for
eye construction
ISI is characterized by a double edge
in the eye diagram.
Equalizer (Signal Recovery) in SDI Signal
Intersymbol interference (ISI)
− The frequency-dependent cable attenuation “spreads” transitions in SDI signals
180
181. Intersymbol interference (ISI)
− It occurs when the spreading of transitions in earlier bits affects transitions in later bits.
− It is caused by
• channel loss
• dispersion
• reflections
– These effects cause transitions to vary from their ideal shapes and locations.
– Specifically, it produces predictable and repeatable jitter whose magnitude depends on the
• frequency responses of devices
• Channels
• data patterns in the signal
– Hence, ISI produces deterministic, data-dependent jitter.
– In particular, cable attenuation greater than 1 dB can introduce significant intersymbol interference.
Equalizer (Signal Recovery) in SDI Signal
181
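The history-dependent level differences behind ISI can be reproduced with a toy simulation: a one-pole low-pass filter stands in for the band-limited cable, and the level reached at the end of the final bit depends on the earlier bits. The filter coefficient and samples-per-bit are arbitrary choices for illustration.

```python
def lowpass(bits, samples_per_bit=8, alpha=0.1):
    """Drive a one-pole low-pass filter (a crude cable model) with an
    NRZ bit sequence and return the final filtered sample value."""
    y = 0.0
    for bit in bits:
        for _ in range(samples_per_bit):
            y += alpha * (float(bit) - y)   # first-order step response
    return y

# Two sequences ending in the same bit reach different levels at the
# sampling instant because of residue from earlier bits: that is ISI,
# and superimposing the two traces produces the double edge in the eye.
end_001 = lowpass([0, 0, 1])
end_101 = lowpass([1, 0, 1])
assert abs(end_001 - end_101) > 0.05  # deterministic, data-dependent offset
```

Because the offset is fixed by the data pattern and the channel response, the resulting jitter is deterministic and repeatable, as the slide states.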
182. The typical frequency responses of
a 300m cable and equalizer
− The equalizer is able to recover the signal so that the equalized eye is open.
− To avoid data errors due to ISI, receivers typically have cable equalizers that compensate for the √𝑓 attenuation characteristic of the cable (cable losses being proportional to √𝒇).
− The standards do not specify particular cable types, but require that coaxial connections have the √𝑓 loss characteristic needed for the correct operation of cable equalizers.
Equalizer (Signal Recovery) in SDI Signal
182
183. – If the equalizer within the instrument is able to recover signal, the equalized eye display
should be open.
– However, it should be remembered that not all receivers use the same designs and there
is a possibility that some device may still not be able to recover the signal.
– If eye is partially or fully closed then the receiver is going to have to work harder to recover
the clock and data.
– In this case there is more potential for data errors to occur in the receiver.
Equalizer (Signal Recovery) in SDI Signal
183
184. – Cable equalization algorithms need many edges in the signal to determine and maintain the frequency-dependent gain that compensates for the √𝑓 loss characteristic of the cable.
– Long intervals of constant signal level stress the equalizer and reclocker and can lead to decoding errors or synchronization problems.
– Further, AC-coupling can reduce noise margins in decoding if the input signal remains at the
same voltage level for a significant percentage of time.
Equalizer (Signal Recovery) in SDI Signal
It must have a sufficient transition
density for equalizing and reclocking.
184
185. – The edge energy of the equalized signal is monitored
by a detector circuit which produces an error signal
corresponding to the difference between the “desired
edge energy” and the “actual edge energy”.
– This error signal is integrated by both an internal and an
external AGC filter capacitor providing a steady
control voltage for the gain stage.
– As the frequency response of the gain stage is
automatically varied by the application of negative
feedback, the edge energy of the equalized signal is
kept at a constant level which is representative of the
original edge energy at the transmitter.
SD SDI Adaptive Cable Equalizer
The “energy at half-clock frequency” is sensed to apply an appropriate analog equalization to the incoming data signal.
185
Equalizer (Signal Recovery) in SDI Signal
186. Reclocker and alignment jitter reduction
− The reclocker recovers the embedded clock from the digital video signal and retimes the incoming video data.
– The reclocker uses the recovered clock to regenerate the SDI signal.
– Since the recovered clock does not track alignment jitter well, reclocking can substantially reduce
alignment jitter.
Reclocking SDI Signal
186
187. Reclocker and wander/low-frequency timing jitter
– Reclocking “may not significantly reduce” wander or low-frequency timing jitter since the recovered clock tracks these variations.
– Hence, low-frequency variations can build through a video system.
– Amplitudes can eventually grow beyond the tracking capability of clock recovery processes.
– At this point, decoding errors will appear and the clock recovery hardware might not remain locked to the
input signal.
– So, the clock recovery also affects the way jitter and wander accumulate in a video system.
Reclocking SDI Signal
187
188. LMH0303
It drives SDI Signal to Cable.
• Loss-of-signal (LOS) detector
• The cable detect feature senses near-end
termination to determine if a cable is
correctly attached to the output BNC.
• The output amplitude is adjustable ±10% in
5mV steps through the SM Bus configuration
(System Management Bus).
• Input interfacing (accepts either differential
or single-ended input)
• Output interfacing (current mode outputs)
• Output slew rate control
• Cable fault detection (no cable is connected
to the output (near end))
SDI Cable Driver
The LMH0303 drives 75-Ω transmission lines (Belden 1694A,
Belden 8281, or equivalent) at data rates up to 2.97 Gbps.
188
189. – (1) Defined by mid-amplitude point of the signal.
– (2) Measured across a 75 Ω resistive load connected through a 1 m coaxial cable.
– (3) In the frequency range of 5 MHz to fc/2. (fc: serial clock frequency)
– (4) In the frequency range of fc/2 to fc.
– (5) Determined between the 20% and 80% amplitude points and measured across a 75 Ω resistive load.
– Overshoot of the rising and falling edges of the waveform shall not exceed 10% of the amplitude.
– (6) 1 UI corresponds to 1/fc. Specification of jitter and jitter measurements methods shall comply with Recommendation ITU-R BT.1363 – Jitter specifications and methods for
jitter measurement of bit-serial signals conforming to Recommendations ITU-R BT.656, ITU-R BT.799 and ITU-R BT.1120.
– Output amplitude excursions due to signals with a significant dc component occurring for a horizontal line (pathological signals) shall not exceed 50 mV above or below the
average peak-peak signal envelope. (In effect, this specification defines a minimum output coupling time constant.)
Line driver characteristics (source) (HD)
189
191. Equipment that does not modify the serial data
• Routing switcher
• Patch panel
• Digital delay
Equipment that should not modify the active picture data
• Frame synchronizer with Proc-amp controls at unity
• Embedded audio mux and Demux
Equipment that may modify the active picture area
• Digital VTR or disk recorder or production switcher
Types of Equipment in terms of SDI Signal Modification
191
192. – If any digital data word changes value between the serial transmitter and receiver there is an error.
– Data errors can produce sparkle effects in the picture, line drop outs or even frozen images.
– At this point, the receiver is having problems extracting the clock and data from this SDI signal.
– Types of Errors:
Active Picture Error
• An error in the data representing the active picture as defined by the appropriate standard.
Full Field Error
• An error in the data representing all lines except those affected by a standard vertical interval switch.
(lines 9-11 in 525, or lines 5-7 in 625 line standards).
Bit Error Rate (BER)
• BER requires measurement over many bits, and is often measured over long periods of time.
• Typical video errors occur in bursts, maybe hundreds of errors in a burst.
Errored Second
• Errored Seconds is a count of seconds-with-errors.
• One errored second in a program gives more information about the type of problem than a single bit error rate.
Types of Errors
192
193. [Figure: eye-diagram sampling point between the “0” and “1” levels, with BER plotted against sampling time]
– The BER is a useful measure of system performance in situations where the SNR at the
receiver is at such a level that random errors occur.
Bit - Error Ratio (BER)
193
194. The ratio of the number of incorrect bits received to the total number of bits received.
− As an example, consider the serial digital interface (SDI), which has a data rate of 270 Mbit/s. If there were one error per frame (assuming 625/25, i.e. 10.8 × 10⁶ bits per frame), the BER would be about 9.3 × 10⁻⁸, i.e. roughly 10⁻⁷.
− To illustrate the significance of BER values, a table relates mean time intervals between errors to approximate BER values in the case of the SDI.
Bit - Error Ratio (BER)
194
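The arithmetic in the example above can be checked directly; the 25 frames-per-second figure is an assumption corresponding to a 625/25 system.

```python
def ber_one_error_per_frame(bit_rate_bps, frames_per_second):
    """BER for exactly one errored bit per video frame."""
    bits_per_frame = bit_rate_bps / frames_per_second
    return 1.0 / bits_per_frame

def mean_time_between_errors_s(bit_rate_bps, ber):
    """Mean interval between errors for random errors at a given BER."""
    return 1.0 / (bit_rate_bps * ber)

ber = ber_one_error_per_frame(270e6, 25)   # 625/25 assumed
# ber is about 9.3e-8, i.e. roughly 1e-7: one error every frame period
```

The second helper is the relation a table of "mean time between errors vs. BER" is built from: at 270 Mbit/s, a BER of 10⁻¹³ corresponds to errors hours apart, while 10⁻⁷ gives errors many times per second.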
195. – Scrambling is used in the SDI to reduce the DC component of the transmitted signal and ensure that
the signal reaching the receiver has a sufficiently large number of zero–crossings to permit reliable
clock recovery.
– An inherent feature of the descrambler:
A single bit error → will cause an error in two data words (samples) → with 50% probability, the error in
one of the words will be in either the most–significant bit or in the second–most significant bit.
– Therefore, an error–rate of 1 error per frame will be noticeable by a reasonably patient observer.
– The fact that the error is noticeable is sufficient to make it unacceptable (in purist engineering terms,
at least, if not subjectively), but it is even more unacceptable because of the indications it gives
about the operation of the SDI system.
Bit - Error Ratio (BER)
195
196. − In most cases, scrambling and NRZI encoding ensure that SDI signals have many transitions.
− Typical SDI signals do not have long intervals of constant voltage that stress clock recovery,
equalization, or decoding processes.
− However, particular word patterns in digital video content can produce SDI signals with long constant-
voltage intervals.
− If the shift register used in the scrambling process has a particular state and the scrambler receives one
of several special input bit sequences, the resulting SDI signal after NRZI encoding will have one of the
patterns shown.
Out-of-service testing (SDI check field)
Stress Testing for Clock Recovery and Cable Equalization
196
197. – The SDI Check Field is also known as “pathological
signal”
– It has a maximum amount of low-frequency energy in
two separate signals.
– One signal tests equalizer operation and the other
tests phase-locked loop operation.
– Video equipment designers can use these signals to
“stress test” clock recovery and equalization
processes and to verify the correct operation of
clamping or DC-restoration circuits that compensate
for AC-coupling effects.
Out-of-service testing (SDI check field)
Stress Testing for Clock Recovery and Cable Equalization
197
198. Out-of-service testing (SDI check field)
Stress Testing for Clock Recovery and Cable Equalization
• The upper part of the image creates data
words with long runs of zeros in the serial data
stream.
• This results in a DC step in the SDI signal.
• Using an equalizer with insufficient dynamic range could lead to image distortions.
198
199. Out-of-service testing (SDI check field)
Stress Testing for Clock Recovery and Cable Equalization
• The data words in the lower image of the
check field provide only very few reference
edges for the clock recovery.
• Insufficient design of the input PLL will result in
image distortions.
199
200. An eye display in field mode shows the DC offset (DC glitch)
inserted into the signal path by the SDI Check Field test signal.
Out-of-service testing (SDI check field)
Stress Testing for Clock Recovery and Cable Equalization
200
201. – If errors occur they can manifest themselves as
• transient dropouts in the video signal or
• worse yet, a complete loss of synchronization
− If the SDI-check field fails to be transmitted successfully then tearing of the picture will be
observed on a display.
Out-of-service testing (SDI check field)
Stress Testing for Clock Recovery and Cable Equalization
201
202. The longest sequence of 0s in the active video signal will
occur if the value 80.0hex is followed by 01.Xhex
Creation of the longest possible sequence of
“0”s in active video data words.
Another SDI Check Field
202
206. − To determine whether a sampled signal voltage corresponds to a “high” or “low” signal level,
decoders compare the sampled voltage against a particular voltage level called the decision
threshold or decision level.
− Optimally chosen decision threshold
“will equally protect against errors generated by noise on either signal level”.
− If each signal level has the same amount of noise, the optimal decision threshold equals the average
of the two signal voltage levels.
Decoding Decision Threshold
206
Same amount of noise
The optimal decision threshold
=
The average of the two signal voltage levels.
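The claim that the optimal threshold sits midway between the two levels (for equal noise) can be verified numerically with a Gaussian-noise error model; the ±0.4 V levels and the noise sigma below are assumed example values, not figures from a standard.

```python
import math

def error_prob(threshold, v_low=-0.4, v_high=0.4, sigma=0.1):
    """Decoding error probability with equal Gaussian noise on both levels:
    average of P(noise pushes a low above T) and P(noise pushes a high below T)."""
    p_low = 0.5 * math.erfc((threshold - v_low) / (sigma * math.sqrt(2.0)))
    p_high = 0.5 * math.erfc((v_high - threshold) / (sigma * math.sqrt(2.0)))
    return 0.5 * (p_low + p_high)

# Scan candidate thresholds: the minimum error probability lands at the
# average of the two signal levels (0 V for -0.4 V and +0.4 V).
candidates = [t / 1000.0 for t in range(-300, 301)]
best = min(candidates, key=error_prob)
```

If one level carried more noise than the other (unequal sigmas), the minimum would move away from the midpoint toward the quieter level, which is the caveat raised on a later slide.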
207. − SDI receivers generally use fixed decision thresholds in the decoding process.
− For optimal performance, signal levels (level high and low) must keep the same relative relationship
to this fixed voltage level.
− A shift in the signal relative to the decision threshold reduces the noise margin for one of the signal levels, which can lead to decoding errors (e.g. a DC shift in the SDI signal).
Decoding Decision Threshold
207
The Fixed decision thresholds
in the decoding process
High signal level
Low signal level
Average signal level
208. − DC offsets in the input SDI signal could lead to decoding errors.
− SDI receivers typically have AC-coupled inputs that remove DC-offsets in the input SDI signal.
− So a constant average voltage will exist in the AC-coupled signal (because possible variable DC term is
removed from SDI Signal).
− In many implementations, this average signal level equals zero volts, although biasing circuitry in the
receiver could set the average signal level of the AC-coupled signal to a non-zero value.
Decoding Decision Threshold and AC Coupling Effects
Example Cable Equalizer Connections
208
[Figure: AC-coupling removes DC offsets and shifts the signal to maintain an average signal level of zero volts]
209. − SDI receivers generally use fixed decision thresholds in the decoding process.
− Typically, the fixed decision threshold used by the receiver equals the average voltage of the AC-coupled signal.
Is the average signal voltage of the AC-coupled signal always an optimal value for the fixed decision threshold?
The optimal decision threshold may differ from the average signal voltage if one signal level can have
more noise than the other.
209
AC-coupling sets the “fixed decision threshold for transition detection” to the average signal voltage of the AC-coupled signal.
Fixed Decision Threshold
=
AC-couple Signal Average Voltage
Decoding Decision Threshold and AC Coupling Effects
210. − The amount of shift depends on the coupling time constant.
• For example, with a coupling time constant of 10 µsec, an equalizer stress pattern will shift almost 78% closer to the fixed decision level over one-half of an HD video line.
• For example, with a coupling time constant of 75 µsec, the stress pattern will shift by less than 33%
over an entire HD video line.
− The value of the AC coupling capacitors should be large enough to support the pathological patterns
present in the SDI waveforms.
− For SDI applications, the AC coupling capacitors are typically in the 1 μF to 10 μF range.
While AC-coupling filters out DC offsets in the input SDI signal, it can also shift the signal
levels in the AC-coupled signal relative to a fixed decision threshold.
210
Decoding Decision Threshold and AC Coupling Effects
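The two time-constant examples above follow from the single-pole AC-coupling droop formula shift = 1 − exp(−t/τ). The HD line duration of ≈29.63 µs is an assumption corresponding to a 2200-sample line at a 74.25 MHz sample clock.

```python
import math

def ac_coupling_shift(hold_time_us, tau_us):
    """Fraction by which a held signal level droops toward the fixed
    decision threshold through a single-pole AC coupling (RC droop)."""
    return 1.0 - math.exp(-hold_time_us / tau_us)

HD_LINE_US = 2200 / 74.25   # ~29.63 us: 2200 samples at 74.25 MHz (assumed)

s1 = ac_coupling_shift(HD_LINE_US / 2, tau_us=10.0)   # ~0.77: "almost 78%"
s2 = ac_coupling_shift(HD_LINE_US, tau_us=75.0)       # ~0.33: "less than 33%"
```

This is why the coupling capacitors must be sized generously (the 1 µF to 10 µF range quoted above): a larger time constant keeps the droop small over even a pathological full-line hold.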
211. Example 1: non-stressing test pattern
− The input SDI signal does not contain any long intervals at the same voltage level.
− This is the situation for an implementation of AC-coupling that maintains the average signal level in the AC-coupled signal at zero volts.
− The decoding process also uses zero volts as the fixed decision threshold.
− Here both signal levels can have the same noise.
− The fixed decision threshold falls at an optimal position midway between the two levels.
Fixed Decision Threshold = Average Signal Voltage
211
A segment of an AC-coupled signal derived from an input SDI signal
Decoding Decision Threshold and AC Coupling Effects
212. Example 2: stress testing input
− The input SDI signal stays at the low signal level for long periods of time.
− The signal remains at the low signal level 95% of the time.
− To maintain an average signal level of zero volts, the low signal level in the AC-coupled signal must equal −0.05 × Vpp, while the high signal level must equal +0.95 × Vpp.
An example of the equalizer stress patterns.
212
Decoding Decision Threshold and AC Coupling Effects
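The −0.05/+0.95 figures follow directly from forcing the duty-weighted average to zero. The sketch below solves for the two levels given the fraction of time spent at the low level (here 95%).

```python
def ac_coupled_levels(low_fraction, vpp=1.0):
    """Signal levels (as fractions of Vpp) after AC-coupling forces the
    time-weighted average to zero:
        low * p + high * (1 - p) = 0   and   high - low = Vpp."""
    low = -(1.0 - low_fraction) * vpp
    high = low_fraction * vpp
    return low, high

low, high = ac_coupled_levels(0.95)
# low == -0.05 * Vpp, high == +0.95 * Vpp; the weighted average is zero,
# leaving almost no noise margin between the low level and the threshold.
```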
213. Example 2: stress testing input
− The low signal level is very close to the fixed decision threshold for decoding → eliminates the noise
margins for this signal level → will lead to decoding errors (in the next stage).
− In effect, the AC-coupling has generated intersymbol interference. The values of earlier bits (long
strings of ‘0’ bit values after scrambling) have impacted the decoding of later bits.
A segment of an AC-coupled signal derived from an input SDI signal
213
Decoding Decision Threshold and AC Coupling Effects
214. − Due to scrambling and NRZI encoding, SDI signals are symmetric, i.e. they spend nearly the same
amount of time at each signal level.
− More specifically, typical SDI signals are symmetric when signal levels are averaged over many unit
intervals.
− Shorter-term, SDI signals can have several periods of constant signal level, with pathological SDI
signals as the extreme case.
Decoding Decision Threshold and Clamp or DC-Restore
To compensate for this AC-coupling shift, SDI receivers typically clamp or DC-restore the AC-coupled signal to maintain the relationship between the signal levels (±400 mV) and the fixed decision threshold.
[Figure: SDI receiver chain — input SDI signal → AC-coupling (DC-offset removal, shift) → DC-restore (clamp) → equalizer → reclocker]
214
215. – The Equalizer Filter block is a multi-stage adaptive filter.
If Bypass is high, the equalizer filter is disabled.
– The DC Restoration/Level Control block incorporates a
self-biasing DC restoration circuit to fully DC restore the
signals. If Bypass is high, this function is disabled.
– The signals before and after the DC Restoration/Level
Control block are used to generate the Automatic
Equalization Control (AEC) signal. This control signal sets
the gain and bandwidth of the equalizer filter.
– The Carrier Detect/Mute block generates the carrier
detect signal and controls the mute function of the
output. The Output Driver produces SDO and NOT SDO.
DC Restore or Clamping
1 µF capacitor
The “energy at half-clock frequency” is
sensed to apply an appropriate analog
equalization to the incoming data signal.
215
≈ 𝑺𝑫𝑰 𝑺𝒊𝒈𝒏𝒂𝒍 𝒂𝒕 𝑻𝒓𝒂𝒏𝒔𝒎𝒊𝒕𝒕𝒆𝒓
216. Conclusion
•AC-coupling sets the “decision threshold for transition detection” to the average signal voltage of the AC-coupled signal.
•While AC-coupling filters out DC offsets in the input SDI signal, it can also shift the signal levels in the AC-coupled signal relative to a fixed decision threshold.
•Shifts due to AC coupling can reduce noise margins in decoding if the input signal remains at the same voltage level for a significant percentage of time (as with SMPTE pathological test patterns).
•SDI receivers typically clamp or DC-restore the AC-coupled signal to maintain the relationship between the signal levels and the fixed decision threshold.
•The equalized signal is DC restored, effectively restoring the logic threshold of the equalized signal to its correct level (±400 mV) independent of shifts due to AC coupling (compensating for AC-coupling effects).
216
217. Even in SDI signals with frequent transitions, AC-coupling can introduce a shift in the signal relative to the
fixed decision level.
− For example, if the signal has fast rise times and slow fall times, it will spend more time in the high signal state (in general, whenever the rise and fall times of signal transitions differ significantly).
− AC-coupling will then shift the high signal level closer to the fixed decision threshold, reducing noise
margin.
AC Coupling Effects in Asymmetric Signals
High Signal Time
Low Signal Time
AC-coupling
217
218. − Typically, SDI signals have symmetric rise and fall times, but asymmetric line drivers and optical signal
sources (lasers) can introduce non-symmetric transitions.
− While significant, these source asymmetries do not have especially large impacts on signal rise and
fall times.
− In particular, cable attenuation will generally have a much larger impact on signal transition times.
− Without appropriate compensation or other adjustments, asymmetries in SDI signals can reduce noise
margins with respect to the decision threshold used in decoding and can lead to decoding errors.
− These same asymmetric conditions can also impact jitter measurements.
AC Coupling Effects in Asymmetric Signals
218
219. Some typical approaches for determining optimal decision level for transition detection
– The standards do not give any guidance regarding the decision threshold for the transition detection
stage in the jitter measurement process.
• The 50% point in transition
• The Eye crossover points below the 50% point in transition
• The average signal voltage of the AC-coupled signal.
Transition Detection Issues
219
[Figure: three candidate decision levels — the Eye crossover points (maximum Eye width), the 50% point in the transition, and the average signal voltage of the AC-coupled signal]
220. Transition Detection Issues
220
Optimal decision threshold in symmetric signals for transition detection
– To measure jitter, the measurement process needs to determine the point in time when an actual
signal transition occurs.
– For transitions with equal rise and fall times, this optimal decision level equals the 50% point in the
transition.
Optimum decision threshold for
transition detection at the 50% point.
50% point in the transition 50% point in the transition
221. Transition Detection Issues
221
[Figure: with a non-optimal decision threshold, detected transitions fall at non-ideal positions instead of their ideal positions]
Non-optimal decision threshold effect on transition
detection in measurement instruments
– To measure jitter, the measurement process needs to
determine the point in time when an actual signal transition
occurs.
– For transitions with equal rise and fall times, this optimal
decision level equals the 50% point in the transition.
• Using a non-optimal decision threshold in transition detection,
the detected transitions vary from their ideal positions.
• This non-optimal transition detection process has introduced a
deterministic jitter component called duty-cycle dependent
jitter.
• The time between two edges would be less than the
appropriate multiple of the unit interval.
222. Non-symmetric signal transitions effect on transition detection in measurement instruments
– The standards do not give any guidance regarding the decision threshold for the transition detection
stage in the jitter measurement process.
– The standards allow a significant difference in rise and fall times.
– If the transition detection stage in a jitter measurement process used a decision level equal to the 50%
point in the transition, the results would include a significant amount of duty-cycle dependent jitter.
– A jitter measurement method that could align (set) the decision threshold with the Eye crossover points
(maximum Eye width) would give a smaller result (minimal jitter).
Transition Detection Issues
222
(maximum Eye width)
Fixed decision threshold equal to the Eye crossover points
Decision level equals the
50% point in the transition.
50% point 50% point
Eye crossover points
223. Non-symmetric signal transitions effect on transition detection in measurement instruments (cont.)
– The standards do not have specifications on compensating for AC-coupling effects or
accommodating non-symmetric signal transitions.
– An acceptable SDI signal with a slow rise time and a fast fall time shows that the 50% point in the transition does not always equal the optimal decision level for transition detection.
– The Eye crossover points appear well below the 50% point in the transition (a jitter measurement method that could align the decision threshold with the Eye crossover points (maximum Eye width) would give a smaller result (minimal jitter)).
Transition Detection Issues
223
(maximum Eye width)
Fixed decision threshold equal to the Eye crossover points
Decision level equals the
50% point in the transition.
50% point 50% point
Eye crossover points
224. AC-coupling effect on transition detection in measurement instruments
– Most measurement instruments have AC-coupled inputs and set the decision threshold for transition
detection to the average signal voltage of the AC-coupled signal.
– In most cases, this approach results in near-optimal transition detection because typical SDI signals
are symmetric “in the long term”.
– In the short term, typical SDI signals can spend several unit intervals at the same signal level.
• Generally, measurement instruments adequately compensate for AC-coupling effects related to
this short-term behavior.
– In the long term (durations equal to many unit intervals) the signal spends nearly the same amount of
time at each voltage level.
• The average signal voltage over these durations lies close to optimal position for transition
detection at the midpoint of the Eye height (does not introduce duty-cycle dependent jitter).
Transition Detection Issues
224
225. AC-coupling effect on transition detection in measurement instruments (cont.)
– In particular, long constant-voltage intervals in an SDI signal can shift the signal relative to a fixed
decision threshold.
– In this case, the transition detection stage in the jitter measurement introduces duty-cycle dependent
jitter.
– Equalizer stress patterns in pathological signals can cause significant shifts in the AC-coupled signal.
– In this case, duty-cycle dependent jitter introduced in the transition detection stage could increase
the peak-to-peak jitter amplitude measurement.
Transition Detection Issues
225
226. − Scrambling can also produce long runs of 1s or 0s.
− This happens infrequently in video.
− It happens occasionally with EAV and SAV, which contain 2 or 3 words of all 0s.
− It can be encouraged to happen with certain input signals.
− Scrambler concern: specific patterns of “1”s and “0”s at the input can create strings of 0s at the output.
Scrambled NRZI Problem
226
227. – For SD formats, the CRC value is inserted into the vertical interval, after the switch point.
– SMPTE RP165 defines the optional method for the detection and handling of data errors in standard
definition video formats (EDH: Error Detection Handling).
– Full Field and Active Picture data are separately checked and a 16-bit CRC word generated once
per field.
1. The Full Field (FF) check covers all data transmitted except in lines reserved for vertical interval
switching (lines 9-11 in 525, or lines 5-7 in 625 line standards).
2. The Active Picture (AP) check covers only the active video data words, between but not
including SAV and EAV. Half-lines of active video are not included in the AP check (Just
complete lines are used).
Digital monitors may provide both a display of EDH CRC values and an alarm on AP or FF CRC errors.
In-Service Testing (CRC: Cyclic Redundancy Check)
227
228. Error Detection and Handling (EDH) in Digital Television
– SMPTE RP165 defines the optional method for the
detection and handling of data errors in standard
definition video formats (EDH Error Detection
Handling).
– The basic principle is similar to that of computer file transfer (CRC).
– EDH is not used with high definition video, as the
HD serial digital interface includes a mandatory
embedded CRC for each line.
In-Service Testing (CRC: Cyclic Redundancy Check)
228
229. In HD formats, CRCs for luma and chroma follow EAV and line count ancillary data words.
– The CRC for high-definition formats is defined in SMPTE 292M to follow the EAV and line number words.
– So CRC checking is on a line by line basis for Y-CRC and C-CRC.
Ideally, the instrument will show zero errors, indicating an error-free transmission path.
If the number of errors starts to increase, the user should pay attention to the trend.
As errors increase to one every hour or every minute, the system is getting closer to the
digital cliff.
If errors occur every minute or every second, the system is at the edge of the digital cliff and
significant CRC errors will be seen in the display.
(Diagram: HD-SDI end-of-line word sequence, with C and Y words interleaved — EAV: 3FF, 000, 000, XYZ; line number: LN0, LN1; line CRCs: CR0, CR1 for each of the C and Y streams; followed by the Cb/Y/Cr data words.)
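The per-line Y-CRC and C-CRC follow the same idea as the EDH check but with an 18-bit register; SMPTE 292 specifies the generator x^18 + x^5 + x^4 + 1. The bit order, initial value, and exact span of words covered are assumptions of this sketch, not a bit-exact implementation.

```python
def crc18_line(words):
    """18-bit per-line CRC of HD-SDI (SMPTE 292), generator
    x^18 + x^5 + x^4 + 1, computed independently for the Y and C
    streams. Bit order and initial value are assumed here."""
    crc = 0
    for w in words:
        for i in range(10):                 # 10-bit words, LSB first (assumed)
            bit = (w >> i) & 1
            msb = (crc >> 17) & 1
            crc = (crc << 1) & 0x3FFFF
            if msb ^ bit:
                crc ^= 0x00031              # x^5 + x^4 + 1 (x^18 implicit)
    return crc
```

Because the check is recomputed on every line, a monitoring instrument can localize errors to individual lines rather than whole fields.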
In-Service Testing (CRC: Cyclic Redundancy Check)
229
230. Eye-Pattern Testing
To make the Eye diagram, the instrument aligns the segments using a reference clock signal.
− Typically this reference clock is extracted from the data signal, but may be a separate reference
clock signal.
− If the transitions in the input signal align with the edges in this reference clock they will lie on top of
each other in the Eye diagram.
− Any transitions that vary from the nominal positions determined by this reference clock will appear in
different locations.
− If the instrument uses a recovered clock to form the Eye diagram, the reference clock will track jitter
below the loop bandwidth of this clock recovery process in the measurement instrument.
Thus, the Eye diagram will only show jitter components with
frequencies above this bandwidth threshold, called the “Eye
Clock Recovery Bandwidth” of the measurement instrument.
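The filtering effect described above can be modelled as a high-pass on the jitter (phase) waveform: jitter below the loop bandwidth is tracked by the recovered clock and disappears from the Eye. A minimal first-order sketch follows; real clock-recovery loops are higher order, and the function name is hypothetical.

```python
import math

def visible_jitter(phase, f_c, f_s):
    """First-order high-pass applied to sampled jitter (phase) values:
    a crude model of how a recovered reference clock tracks -- and so
    hides -- jitter below the Eye Clock Recovery Bandwidth f_c.
    f_s is the sample rate of the phase record."""
    if not phase:
        return []
    rc = 1.0 / (2.0 * math.pi * f_c)
    dt = 1.0 / f_s
    alpha = rc / (rc + dt)       # standard one-pole high-pass coefficient
    out = [phase[0]]
    for n in range(1, len(phase)):
        out.append(alpha * (out[-1] + phase[n] - phase[n - 1]))
    return out
```

Feeding a slowly varying phase record through this model drives the output toward zero (the Eye would not show it), while rapidly varying phase passes nearly unattenuated.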
230
231. Eye-Pattern Testing
The Eye diagram will only show jitter components with frequencies above
the “Eye Clock Recovery Bandwidth” of the measurement instrument.
(Diagram: jitter spectrum of the input SDI signal versus frequency, marking the Eye Clock Recovery Bandwidth of the instrument and the clock recovery bandwidth in the receiver; a downstream device such as a recorder has a different clock-extraction circuit from the measurement instrument.)
231
232. Eye-Pattern Testing
Qualitative assessment of signal jitter using Eye diagrams
− The size of the Eye opening correlates reasonably well with the potential for decoding errors.
− If the input signal forms a large, “wide-open” Eye, the decoding process will most likely sample the signal before the
transition to the next bit.
(Diagram: the case where the clock recovery bandwidth in the receiver equals the Eye Clock Recovery Bandwidth, so the Eye display shows the same jitter the receiver must tolerate.)
232
233. − The signal may contain jitter frequencies below the Eye Clock Recovery Bandwidth that impact the decoding process
but do not appear in the Eye diagram.
− The decoding process may generate errors even though the Eye diagram has a large Eye opening.
Eye-Pattern Testing
Qualitative assessment of signal jitter using Eye diagrams
(Diagram: the case where the clock recovery bandwidth in the receiver is less than the Eye Clock Recovery Bandwidth, so jitter hidden by the Eye display can still affect decoding.)
233
234. − The Eye diagram may show jitter that does not impact the decoding process.
− The receiver may decode the signal without errors even though the Eye diagram has a small Eye opening or is
completely closed.
Eye-Pattern Testing
Qualitative assessment of signal jitter using Eye diagrams
(Diagram: the case where the clock recovery bandwidth in the receiver is greater than the Eye Clock Recovery Bandwidth, so the receiver can track jitter that still appears in the Eye display.)
234
235. Other factors also influence the qualitative assessment of signal jitter using Eye diagrams.
− If receivers introduce a significant amount of internal jitter or do not consistently sample near the middle of the unit
interval, they may generate more decoding errors than suggested by the size of the Eye opening.
− In using an Eye diagram to assess the potential for data errors, engineers need to consider the combined effects of the:
• receiver’s clock recovery
• equalization
• decoding processes
− In other words, they need to consider the receiver’s jitter input tolerance.
Eye-Pattern Testing
Qualitative assessment of signal jitter using Eye diagrams
(Diagram: a receiver with low input jitter tolerance can generate decoding errors even from a wide-open Eye diagram, while a receiver with high input jitter tolerance may decode correctly even from a closed Eye diagram.)
235
236. The cable equalization used in receivers will restore the signal’s transitions and “re-open” the Eye.
− With adequate equalization, the ISI (inter-symbol interference) from cable attenuation will not significantly impact the decoding
process.
− Without adequate equalization, the data-dependent jitter introduced by cable effects can lead to
decoding errors.
− While equalization can compensate for cable effects, the equalized signal can still contain signal
jitter or amplitude noise that reduces or closes the Eye opening.
− To qualitatively assess the remaining potential for decoding errors after equalization, engineers can
use an Equalized Eye Diagram constructed from the equalized version of the input signal.
Eye-Pattern Testing
Qualitative assessment of signal jitter using Eye diagrams
(Diagram: the SDI signal displayed as a non-equalized Eye diagram and, after equalization (e.g. in a WFM 7200), as an Equalized Eye diagram.)
236
237. Eye diagrams can also show AC-coupling effects.
− Signal level shifts due to AC-coupling cause a corresponding shift in the superimposed segments that form the Eye
diagram.
− This can occur even if the measurement instrument forming the Eye-diagram has a DC-coupled input.
− Other equipment in the system may have AC-coupled inputs, causing shifts in the SDI signal before it reaches the
measurement instrument.
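The level shift can be illustrated numerically: ideal AC-coupling removes the mean of the waveform, so a stream whose ones density departs from 50% is displaced relative to a fixed decision threshold. A toy sketch; the function name and the idealized coupling are assumptions.

```python
def ac_coupled_levels(levels):
    """Subtract the mean, as an ideal AC-coupling capacitor would.
    `levels` are nominal voltages (e.g. 0.0 / 0.8 V for 800 mV SDI).
    A stream with unequal ones density ends up asymmetric about 0 V,
    eating into the noise margin at one of the two levels."""
    mean = sum(levels) / len(levels)
    return [v - mean for v in levels]
```

With a balanced stream the two levels sit symmetrically about the threshold; with a ones-heavy stream the high level is pulled toward it.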
Eye-Pattern Testing
Qualitative assessment of signal jitter using Eye diagrams
(Diagram: the same SDI signal displayed as an Eye diagram with a DC-coupled input and with an AC-coupled input.)
237
239. Long cable
• Decrease in amplitude
• Decrease in frequency response
• Eye opening narrows
• Rise/fall time increases
Termination
• Incorrect termination causes overshoot and undershoot
Shift in Eye Crossing
• Shifts the 50% point of the eye opening
• Caused by unequal rise or fall times
Eye Pattern Distortions
239
• Signal amplitude is important because of:
1- Its relationship to noise tolerance.
2- The receiver estimates the cable loss from the half-clock-frequency energy remaining in the arriving signal.
Incorrect amplitude at the sending side could therefore result in incorrect equalization being applied at the
receiving end, causing signal distortions.
• Incorrect rise time could cause signal distortions such as ringing and overshoot or, if too slow, could reduce
the time available for sampling within the eye.
• Overshoot is most likely caused by impedance discontinuities or poor return loss at the receiving or sending
terminations.
Eye Pattern Distortions
(Diagram: an SDI signal with incorrect amplitude leads to incorrect equalization, which causes signal distortions.)
240
241. Improper Termination and Return Loss
– Improper termination means that not all of the signal energy is absorbed by the receiving termination or device.
– The residual energy is reflected back along the cable, creating a standing wave.
– These reflections produce ringing within the signal, and the user will observe overshoots and
undershoots on the eye.
– Note that this termination error by itself would not cause a problem in the signal being received.
However, added cumulatively to other errors along the signal path, it will narrow the eye
opening more quickly and decrease the receiver’s ability to recover the clock and data from the
signal.
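The relationship between a termination mismatch and the reflected energy can be quantified with the standard return-loss formula, RL = −20·log10|Γ| with Γ = (ZL − Z0)/(ZL + Z0). A sketch for purely resistive terminations on a 75 Ω line; the function name is illustrative.

```python
import math

def return_loss_db(z_load, z0=75.0):
    """Return loss of a termination on a 75-ohm SDI line:
    RL = -20 * log10(|Gamma|), Gamma = (ZL - Z0) / (ZL + Z0).
    Purely resistive impedances are assumed in this sketch."""
    gamma = (z_load - z0) / (z_load + z0)
    if gamma == 0.0:
        return float('inf')   # perfect termination: nothing reflected
    return -20.0 * math.log10(abs(gamma))
```

An 80 Ω termination, for example, still gives roughly 30 dB of return loss, while a nearly open input reflects almost everything (return loss near 0 dB) — which is why a single modest mismatch is tolerable but cumulative mismatches are not.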
Eye Pattern Distortions
241
242. Unterminated eye display
Internal and External Termination
– The termination within an HD system is more critical because of the high clock rate of the signal.
– The HD inputs are usually terminated internally, and because of that, in-service eye-pattern testing
will not test the transmission path (cable) feeding other devices.
– Out-of-service transmission path testing is done by substituting a test signal generator for the
source, and a waveform monitor with eye-pattern display in place of the normal receiving
device.
Eye Pattern Distortions
242
243. Cable Loss
– It tends to reduce the visibility of reflections, especially at HD data rates and above.
– A low value of return loss is more likely to cause problems when short cable lengths are used.
– With long cable lengths, the effect of a mismatch is reduced owing to the greater cable attenuation.
Eye Pattern Distortions
(Diagram: cable attenuation reduces the visibility of reflections.)
243
244. Non-symmetrical eye (inequality between the transitions)
– The eye display typically has the cross point of the transition in the middle of the eye display at the 50%
point.
– If the rise time and fall time of the signal transitions are unequal, the crossing point will move away from the
50% point, depending on the degree of inequality between the transitions.
– AC-coupling within a device will shift the high signal level closer to the fixed decision threshold, reducing
noise margin.
Non-symmetrical eye display
Eye Pattern Distortions
244
Typically, SDI signals have symmetric rise and fall times.
– Without appropriate compensation or other adjustments, asymmetries in SDI signals can reduce noise
margins with respect to the decision threshold used in decoding and can lead to decoding errors.
– Asymmetric line drivers (cable drivers) and optical signal sources (lasers) can introduce non-
symmetric transitions. While significant, these source asymmetries do not have especially large
impacts on signal rise and fall times.
– In particular, cable attenuation will generally have a much larger impact on signal rise and fall times.
Eye Pattern Distortions
Non-symmetrical eye display
245
– Low-frequency jitter causes the whole eye to move left and right on the waveform monitor.
– High-frequency jitter causes the eye edges to move left and right on the waveform monitor.
– A larger opening indicates better receiver sensitivity and a greater tolerance for noise and jitter.
– A wide top, base, and transition region indicates reduced receiver sensitivity.
– As long as the noise and jitter do not exceed the threshold of the detection circuits (that is, the eye is
sufficiently open), the data will be perfectly reconstructed.
Eye Pattern Distortions
246
Things to consider during installation
– Choose appropriate cable
– Treat the cable with respect
– Allow cable margin
– Apply a stressing (pathological) signal
Things to consider for operational monitoring
– Monitor the EDH of the SD-SDI signal
– Monitor the CRCs of the HD-SDI signal
– Set alarm thresholds for the physical layer
– Use Eye and Jitter displays
Keeping an Eye on the SDI System
247
248. – Unlike analog systems that tend to degrade gracefully, digital systems tend to work without fault until
they crash.
– To date, there are no in-service tests that will measure the headroom of the SDI signal.
– Out-of-service stress tests are required to evaluate system operation.
• Stress testing consists of changing one or more parameters of the digital signal until failure occurs.
• The amount of change required to produce a failure is a measure of the headroom.
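Cable-length stress can be related to headroom with a simple skin-effect loss model: coax attenuation grows roughly as k·√f·length. The constant k and the 30 dB equalizer budget used below are placeholder assumptions of this sketch, not datasheet or specification values.

```python
import math

def attenuation_db(length_m, f_hz, k=2.5e-6):
    """Skin-effect coax loss model: attenuation (dB) ~ k * sqrt(f) * length.
    k is a cable constant; 2.5e-6 is only a placeholder of the right
    order of magnitude, not a datasheet figure."""
    return k * math.sqrt(f_hz) * length_m

def max_length_m(budget_db, f_hz, k=2.5e-6):
    """Longest run that stays within an assumed receiver equalization
    budget (e.g. ~30 dB at half the clock frequency)."""
    return budget_db / (k * math.sqrt(f_hz))
```

The gap between the installed run and `max_length_m` at half the clock frequency is one way to express the headroom that cable-length stress testing measures empirically.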
Stress Testing
248
249. – The most intuitive way to stress the system is to add cable until the onset of errors.
– Experimental results indicate that cable-length testing, in particular when used in conjunction with the
SDI check field signals, is the most meaningful stress test because it represents real operation.
– Stress testing the receiver’s ability to handle amplitude changes and added jitter are useful in
evaluating and accepting equipment, but not too meaningful in system operation.
– In the testing of in-studio transmission links and operational equipment, a single error is defined as one
data word whose digital value changes between the signal source and the measuring receiver.
Cable-Length Stress Testing
249
251. − Unlike analog systems that tend to degrade gracefully, digital systems tend to work
without fault until they crash.
(Diagram: picture quality versus signal strength/quality — quality holds constant, then drops abruptly at the cliff.)
Cliff Effect
251
252. − When the upper energy bands disappear, the HD stream
no longer resembles a bit stream and looks more like a sine
wave.
− When this happens, the signal is no longer recoverable.
− Do you know how much high-frequency energy is left?
− The only way to check the headroom of a digital signal
used to be with expensive test equipment.
− The 4sight offers an inexpensive way to check how close
your signals are to the error headroom cliff.
− The HRM-1500 is a portable handheld device that quickly
displays the spectral health of an HD signal.
HD Signal Error Headroom Meter
HRM-1500, $1500