The document provides an overview of video coding techniques used in video compression standards. It discusses how video compression exploits both the spatial and temporal redundancy in video signals. Key techniques covered include motion-compensated prediction, where a current frame is predicted from previous frames using motion vectors, and block-based motion estimation to determine the motion vectors. The document also outlines the generic architecture of video compression systems, which apply representation, quantization, and binary encoding steps to remove redundancy from video signals.
The document provides an overview of the status of MPEG-4 developments and the AIC Initiative. It discusses the goals, history, and architecture of MPEG-4, which aims to code audio-visual objects and scenes to enable interactivity. MPEG-4 extends existing architectures like MPEG-2 and IP to new environments through tools like an interactive scene description and support for new content types and delivery formats. Profiles and levels are defined to suit different applications. Carriage of MPEG-4 over MPEG-2 and IP is also addressed.
This document provides an overview of key concepts in multimedia systems including digital video formats, properties of video such as frame rate and aspect ratio, video compression techniques, and video production equipment and processes. It covers analog vs digital video, interlacing vs progressive scanning, common video file formats like AVI, MOV, and MPG, and how to transfer video from a camcorder to a computer.
This document discusses different digital video technologies including desktop video formats, software and hardware codecs, DVD output, and video editing software systems. It covers popular formats like QuickTime, Video for Windows, MPEG, and RealPlayer. It also discusses hardware like DVD players, encoder/decoder cards, and semi-professional digital video editing solutions that allow capturing, editing and outputting video to tape or file.
This document discusses the transition from tape-based to file-based workflows in audiovisual production. It covers the history from linear tape-based workflows to emerging digital file-based workflows. Key benefits of file-based workflows include the ability to store rich metadata with media files throughout the production process, faster editing and sharing of content, and more flexible archiving and retrieval of content and associated metadata. Standards organizations are helping facilitate file-based workflows by establishing standards for file formats and metadata. Specific Sony products like XDCAM are highlighted as examples of technologies supporting file-based acquisition, editing and distribution.
This document analyzes an interactive menu for the film Greenstreet. The menu features faded film clips in the background with a red title over green grass. Animation, visual effects, color rendering, and movement are used, but there is no rotation. Techniques such as blur, sharpening, and opacity are applied to the clips. The menu is formatted as H.264/MPEG-4 AVC video, which offers high quality at small file sizes but requires more encoding time and processing power. The analysis concludes the course unit on motion graphics and video compositing.
Digital video can be recorded and edited on a computer. It is stored using file formats like AVI, MOV, MPEG, and FLV which determine compatibility and file size. Digital video is composed of individual frames that have a rate, size, and color depth. Video editing software allows cutting, combining, and adding effects to video clips. Captured digital video can be used in multimedia products like presentations, websites, and games.
This document provides an overview of video formats, which involve containers and codecs. The container describes the file structure and can contain different codecs. The codec is how the video is encoded and determines quality. Popular containers include MP4, MOV, and AVI, while common codecs are H.264, MPEG-4, and DivX. Choosing a format depends on factors like file size, quality, frame rate, bitrate, and resolution, as well as how the video will be transmitted or shared.
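To make the file-size factor concrete, here is a minimal back-of-the-envelope sketch (the function name is illustrative) relating average bitrate and duration to output size; container overhead is ignored.

```python
def estimated_file_size_mb(bitrate_kbps: float, duration_s: float) -> float:
    """Approximate file size from average bitrate and duration.

    Size (bits) = bitrate (bits/s) * duration (s); container overhead ignored.
    """
    bits = bitrate_kbps * 1000 * duration_s
    return bits / 8 / 1_000_000  # bits -> bytes -> megabytes

# A 10-minute clip encoded at an average of 5,000 kbps:
print(f"{estimated_file_size_mb(5000, 600):.0f} MB")  # ~375 MB
```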
Digital video has replaced analog video as the preferred method for making and delivering video content in multimedia. Video files can be extremely large, so compression techniques like MPEG and JPEG are used to reduce file sizes. There are two types of compression: lossless, which preserves quality, and lossy, which eliminates some data to provide greater compression ratios at the cost of quality. Digital video editing software allows for adding effects, transitions, titles and synchronizing video and audio.
Hardware Implementation of Genetic Algorithm Based Digital Colour Image Watermarking (IDES Editor)
This document describes a hardware implementation of a genetic-algorithm-based digital color image watermarking system. The system embeds a watermark image into the luminance (Y) channel of a host color image after converting the host from RGB to YUV color space. A genetic algorithm determines the intensity values in the host image at which the watermark bits can be embedded invisibly. The proposed design is implemented as a custom integrated circuit for real-time watermarking of images as they are captured by a digital camera. Synthesis results show that the design meets a 5 ns clock period (200 MHz) and consumes a maximum of 73.84 mW when implemented on an Altera Cyclone II FPGA.
This document discusses digital video codecs and compression. It begins by defining pixel resolutions for standard definition, high definition, and digital cinema. It then covers CMOS image sensors used for HD, 2K and 4K capture and explains intra-frame and inter-frame compression. The document provides an example of the Apple ProRes 422 codec and analyzes its key attributes. It also discusses interlaced vs progressive scanning, picture impairments from compression, digital cinema standards, and predicts that advances in compression will continue to be needed to handle higher resolutions and frame rates.
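As a rough illustration of why compression is unavoidable at these resolutions, the following sketch (assuming 8-bit 4:4:4 RGB, i.e. 24 bits per pixel) computes the uncompressed data rate:

```python
def raw_video_rate_mbps(width: int, height: int, fps: float,
                        bits_per_pixel: int = 24) -> float:
    """Uncompressed data rate in megabits per second (8-bit 4:4:4 RGB by default)."""
    return width * height * fps * bits_per_pixel / 1e6

# 1080p60 needs roughly 3 Gbit/s uncompressed, hence the need for compression.
print(f"{raw_video_rate_mbps(1920, 1080, 60):.0f} Mbit/s")  # ~2986
```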
The document discusses various analog and digital video interfaces. It describes common analog video interfaces like composite video, S-video, component video and RGB analog video. It then covers digital video interfaces such as HDMI, DVI, FireWire, S/PDIF and SDI. For each interface, it provides details on technical standards, maximum supported resolutions and example uses.
High-speed Distributed Video Transcoding for Multiple Rates... (Videoguy)
This paper describes a distributed video transcoding system that can simultaneously transcode an MPEG-2 video file into various video coding formats with different rates. The transcoder divides the MPEG-2 file into small segments along the time axis and transcodes them in parallel on multiple PCs. Efficient video segment handling methods are proposed that minimize the inter-processor communication overhead and eliminate temporal discontinuities from the re-encoded video.
Generic Video Adaptation Framework Towards Content- and Context-Awareness in... (Alpen-Adria-Universität)
This document proposes a generic video adaptation framework that is content- and context-aware. It introduces a unified adaptation format based on H.264 features to enable codec reusability and various adaptation algorithms. The framework includes format adapters between decoders, an adaptation pool, and encoders to convert between their formats. It aims to allow any type of adaptation like bitrate or resolution changes across any codec generically while maintaining quality and real-time constraints.
This document provides an overview of satellite communications fundamentals. It discusses how satellites provide capabilities not available through terrestrial systems like mobility and coverage for remote areas. While satellites are not generally cost-effective compared to landlines and fiber, they enable services like broadcasting, telecommunications, internet access, and more. The document covers different types of satellite orbits including geostationary and low earth orbits. It also examines satellite components, configurations, frequency reuse, earth station antennas, and considerations like signal delays.
Vector quantization maps high-dimensional vectors to codewords from a finite codebook. Each codeword defines a Voronoi region containing vectors closest to that codeword. The Lloyd and LBG algorithms are commonly used to optimize the codebook for a given dataset by iteratively clustering vectors and recomputing codeword averages. Tree-structured vector quantization reduces comparisons by recursively partitioning the codebook space, at the cost of potential distortion increases. The rate-distortion performance of vector quantization generally exceeds scalar quantization due to its ability to model correlations in vector datasets.
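A minimal sketch of the Lloyd iteration described above, assuming Euclidean distortion and random-sample initialization (the full LBG variant additionally uses codebook splitting, omitted here for brevity; the function name is illustrative):

```python
import numpy as np

def lloyd_vq(data: np.ndarray, codebook_size: int, iters: int = 20,
             seed: int = 0) -> np.ndarray:
    """Train a VQ codebook with the Lloyd iteration (k-means style).

    data: (n_vectors, dim) training set; returns (codebook_size, dim) codebook.
    """
    rng = np.random.default_rng(seed)
    # Initialize codewords from randomly chosen training vectors.
    codebook = data[rng.choice(len(data), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Nearest-codeword assignment: each vector falls in one Voronoi region.
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        # Centroid update: recompute each codeword as its region's mean.
        for k in range(codebook_size):
            members = data[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# Quantize 2-D vectors with a 4-word codebook.
samples = np.random.default_rng(1).normal(size=(1000, 2))
cb = lloyd_vq(samples, 4)
```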
RSA is a widely used public-key cryptosystem. It works by generating a public and private key pair: the public key is used for encryption and for verifying digital signatures, while the private key is used for decryption and for creating signatures. Key generation involves finding two prime numbers p and q, computing the modulus n as their product, and using these values to calculate the public and private exponents e and d respectively.
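For illustration, a textbook-RSA sketch of this key-generation and encryption flow with toy primes; real deployments use padded RSA with keys of 2048 bits or more:

```python
from math import gcd

def make_textbook_rsa(p: int, q: int, e: int = 17):
    """Tiny, insecure textbook RSA for illustration only."""
    n = p * q                    # modulus
    phi = (p - 1) * (q - 1)      # Euler's totient of n
    assert gcd(e, phi) == 1      # e must be invertible mod phi
    d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)
    return (e, n), (d, n)

public, private = make_textbook_rsa(p=61, q=53)  # toy primes
m = 42
c = pow(m, public[0], public[1])            # encrypt: c = m^e mod n
assert pow(c, private[0], private[1]) == m  # decrypt: m = c^d mod n
```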
This document discusses digital set-top boxes (STBs) and related standards. It explains that DVB standards for digital TV broadcasting via different transmission media all use identical source coding and service multiplexing based on MPEG-2, while using optimized channel coding for each medium. STBs are needed until integrated digital TVs are cheaper. The document discusses how open architectures and interoperability across networks can help reduce STB costs. It provides an overview of typical STB components and architecture.
This document provides an overview of Codan's 6700/6900 series block up converter (BUC) systems and components. It describes the BUC, low-noise block converter (LNB), and redundancy systems. It also covers installation, operation, and maintenance of the systems. The document contains information on frequency bands, conversion plans, interfaces, cable connections, controls, indicator lights, commands, fault diagnosis, compliance, and definitions.
The document discusses DCT/IDCT concepts and applications. It provides an introduction to Spartan-II FPGAs and their extensive features. It then explains that DCT and IDCT are widely used in video and audio compression. It describes the DCT and IDCT functions and concepts, including how they are used to compress images and audio into frequency coefficients to reduce file sizes. It also provides examples of one-dimensional and two-dimensional DCT/IDCT equations and applications such as DVD players, HDTV, graphics cards, and medical imaging systems.
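For reference, the one-dimensional DCT-II used throughout these applications can be written in its orthonormal form as follows:

```latex
% 1-D DCT-II (orthonormal form) of a length-N signal x_0, ..., x_{N-1}:
X_k = \alpha_k \sum_{n=0}^{N-1} x_n
      \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right],
\qquad
\alpha_k =
\begin{cases}
\sqrt{1/N}, & k = 0\\
\sqrt{2/N}, & 1 \le k \le N-1
\end{cases}
```

The two-dimensional DCT used on images applies this transform along rows and then along columns; the IDCT inverts it with the same coefficients.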
This document provides an introduction to digital television. It discusses analog TV standards and the conversion to digital with ITU-R BT.601 and BT.709 standards defining digital video formats. It also describes MPEG transport streams, the DVB system for content delivery over satellite, cable and terrestrial networks, and conditional access systems. Packetized elementary streams (PES) and program specific information (PSI) tables are also introduced.
This document discusses the basics of BISS scrambling. It describes BISS mode 1, which uses a session word, and BISS mode E, which encrypts the session word using an identifier and encryption algorithm. BISS mode E provides an additional layer of protection for transmitting the session word. The document also covers calculating the encrypted session word, using buried and injected identifiers, and how to operate scramblers in the different BISS modes.
The Event Logger monitors and logs Digital Program Insertion (DPI) messages to verify correct transmission of signals via satellite. It watches for configured GPI state changes that indicate an expected DPI message. If the message is received on time, it is logged as a matched event. If not received on time, it is flagged as missed. The Event Logger also decodes DPI messages to help diagnose issues, and is compatible with various encoding systems. It has 6 ASI inputs and 108 GPI sensors, and logs data in real time for both live monitoring and archiving.
This document provides an overview of service information (SI) in digital video broadcasting (DVB) systems, including sections like the network information section (NIT), service description section (SDT), bouquet association section (BAT), program association section (PAT), conditional access section (CAT), transport stream description section (TSDT), event information section (EIT), and running status section (RST). It includes syntax diagrams and details for each section, such as table IDs, section lengths, descriptors, and other fields. It also provides the PID and refresh interval requirements for each table type.
The document compares video compression standards MPEG-4 and H.264. It discusses key aspects of each, including profiles, levels, uses, and future applications. MPEG-4 introduced object-based coding, while H.264 provides around 50% better compression than MPEG-4 Part 2 (Visual) at similar quality levels. Both standards are widely used for video streaming, television broadcasting, and storage applications like Blu-ray discs. Ongoing development aims to improve support for high-definition video formats.
This document compares video compression standards MPEG-4 and H.264. It provides an overview of both standards, including their development histories and profiles. MPEG-4 was the first standard to support object-based video coding and compression of different media types. H.264 provides significantly better compression than prior standards like MPEG-2 at the cost of higher computational complexity. Both standards are widely used today for applications ranging from mobile and internet video to television broadcasting and digital cinema.
The document discusses video streaming and video communication applications. It outlines different types of video applications including video storage, videoconferencing, digital TV, and video streaming over the internet. It then describes properties of video communication applications such as broadcast, multicast, point-to-point, real-time encoding, static or dynamic channels, and quality of service support. Finally, it discusses variable bitrate versus constant bitrate coding and how bit allocation affects quality.
This document summarizes key techniques used in video compression codecs. It discusses still image compression techniques like block transforms, quantization, and variable length coding that video codecs build upon. It then covers motion estimation and compensation, which take advantage of similarities between frames to greatly improve compression ratio. The document outlines processing requirements for techniques like block transforms, motion estimation, and motion compensation, noting they require substantial compute resources and memory bandwidth.
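A minimal sketch of the intra-frame path mentioned above: transform an 8x8 block, uniformly quantize the coefficients (the lossy step that a variable-length coder would then entropy-code), and reconstruct. Function name and step size are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block: np.ndarray, q_step: float = 16.0) -> np.ndarray:
    """Transform-code one 8x8 block: DCT -> uniform quantization -> inverse."""
    coeffs = dctn(block, norm="ortho")             # concentrate energy at low frequencies
    quantized = np.round(coeffs / q_step)          # lossy step: discard precision
    return idctn(quantized * q_step, norm="ortho") # reconstruct approximation

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
recon = code_block(block)
print(f"max error: {np.abs(block - recon).max():.1f}")  # small relative to 0-255
```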
This document compares video compression standards MPEG-4 and H.264. It discusses key factors for video compression like spatial and temporal sampling. It provides an overview of MPEG-4 including object-based coding, profiles and levels. H.264 is introduced as a standard that provides 50% bit rate savings over MPEG-2. Profiles and levels are explained for both standards. Common uses of each are listed, along with future development options.
The document discusses different types of video compression standards, including MPEG, H.261, H.263, and JPEG. It explains key concepts in video compression like frame rate, color resolution, spatial resolution, and image quality. MPEG standards like MPEG-1, MPEG-2, MPEG-4, and MPEG-7 are defined for compressing video and audio at different bit rates. Techniques like spatial and temporal redundancy reduction are used to compress individual frames and sequences of consecutive frames. Compression reduces file sizes, but lossy methods discard some of the original data.
This document discusses various topics related to data compression including compression techniques, audio compression, video compression, and standards like MPEG and JPEG. It covers lossless versus lossy compression, explaining that lossy compression can achieve much higher levels of compression but results in some loss of quality, while lossless compression maintains the original quality. The advantages of data compression include reducing file sizes, saving storage space and bandwidth.
Video encoding uses various techniques to compress video files in a lossy manner. It involves representing color information using RGB or YCbCr color spaces, sampling and quantizing signals to convert them to digital form, using the Fourier transform to analyze signal frequencies, windowing to divide signals for transform analysis, inter-frame encoding to remove redundancy between frames, and intra-frame encoding to remove redundancy within frames. Key compression techniques include motion compensation between inter-coded frames and periodic insertion of intra-coded frames.
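As a concrete example of the color representation step, here is a sketch of the full-range RGB-to-YCbCr conversion using the BT.601 luma weights (the variant used in JPEG/JFIF; broadcast video uses a scaled "studio range" version):

```python
def rgb_to_ycbcr(r: int, g: int, b: int) -> tuple[float, float, float]:
    """Full-range RGB -> YCbCr using the ITU-R BT.601 luma weights.

    Y carries brightness; Cb/Cr carry color differences, which can then be
    subsampled (e.g., 4:2:0) because the eye is less sensitive to chroma detail.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + 0.564 * (b - y)   # 0.564 = 0.5 / (1 - 0.114)
    cr = 128 + 0.713 * (r - y)   # 0.713 = 0.5 / (1 - 0.299)
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))  # pure red: modest luma, strong Cr
```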
H.264, also known as MPEG-4 Part 10 or AVC, is a video compression standard that provides significantly better compression than previous standards such as MPEG-2. It achieves this through spatial and temporal redundancy reduction techniques including intra-frame prediction, inter-frame prediction, and entropy coding. Motion estimation, which finds motion vectors between frames to enable inter-frame prediction, is the most computationally intensive part of H.264 encoding. Previous GPU implementations of H.264 motion estimation have sacrificed quality for parallelism or have not fully addressed dependencies between blocks. This document proposes a pyramid motion estimation approach on GPU that can better address dependencies while maintaining quality.
1. The document discusses video compression technology, including digital television formats, video compression standards like MPEG-2 and H.264, video quality metrics, and video coding concepts.
2. Key video coding concepts covered are temporal compression using motion estimation and compensation between frames, spatial compression within frames using DCT transform and quantization, and entropy coding of coefficients.
3. Video compression aims to reduce the data required for transmission by removing spatial and temporal redundancy in video sequences.
Compression: Video Compression (MPEG and others) (danishrafiq)
This document provides an overview of video compression techniques used in standards like MPEG and H.261. It discusses how uncompressed video data requires huge storage and bandwidth that compression aims to address. It explains that lossy compression methods are needed to achieve sufficient compression ratios. The key techniques discussed are intra-frame coding using DCT and quantization similar to JPEG, and inter-frame coding using motion estimation and compensation to remove temporal redundancy between frames. Motion vectors are found using techniques like block matching and sum of absolute differences. MPEG and other standards use a combination of these intra and inter-frame coding techniques to efficiently compress video for storage and transmission.
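A minimal sketch of the full-search block matching with SAD described above (function name and search window are illustrative; real encoders use much faster search patterns):

```python
import numpy as np

def best_motion_vector(ref: np.ndarray, cur: np.ndarray, bx: int, by: int,
                       block: int = 16, search: int = 8) -> tuple[int, int]:
    """Full-search block matching: find the (dy, dx) displacement in `ref`
    minimizing the sum of absolute differences (SAD) with the current block."""
    target = cur[by:by + block, bx:bx + block].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[y:y + block, x:x + block].astype(int) - target).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```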
This document proposes a method for preserving privacy in video surveillance by scrambling regions of interest (ROIs) in video sequences. It discusses scrambling quantized DCT or DWT coefficients in compressed video to conceal information in ROIs while maintaining understanding of the overall scene. The scrambling is flexible and reversible with a private key, has low computational complexity, and introduces minimal impact on video coding performance. Previous approaches are also summarized.
Introduction to Video Compression Techniques - Anurag Jain (Videoguy)
The document provides an overview of video compression techniques and standards. It discusses the motivation for video compression to reduce data sizes for storage and transmission. It then reviews several key video compression standards including H.261, H.263, MPEG-1, MPEG-2, MPEG-4, H.264 and others. For each standard, it summarizes the goals, features, applications and technical details like motion compensation methods, block sizes, and bitrate ranges.
The document discusses 3D graphics compression standards. It provides an overview of MPEG's work in developing standards for compressing 3D graphics content, similar to how other standards compress video and audio. This includes MPEG-4's initial work with surfaces like Indexed Face Sets as well as later efforts involving patches and subdivision surfaces to improve compression ratios and representation of curved surfaces. The goal is to standardize a format for compressed 3D graphics to enable widespread use in applications.
MPEG is a video compression standard developed in the late 1980s to enable full-motion video over networks and storage media. It was created by the Moving Picture Experts Group to address the need for high compression ratios given the bandwidth limitations of the time. MPEG uses spatial and temporal redundancy reduction techniques such as the discrete cosine transform, quantization, and entropy coding to compress video frames, exploiting similarities between neighboring pixels and between successive frames. It defines a group-of-pictures structure with I, P, and B frame types to enable features like random access while maintaining synchronization and error robustness. MPEG became widely adopted and evolved through standards like MPEG-1, MPEG-2, and MPEG-4.
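As an illustration of the group-of-pictures structure, a small sketch that prints the display-order frame types for the common N=12, M=3 configuration:

```python
def gop_pattern(n: int = 12, m: int = 3) -> str:
    """Display order of an MPEG GOP with length N and I/P spacing M.

    I-frames are coded independently, P-frames predict from the previous
    I/P frame, and B-frames predict bidirectionally from both neighbours.
    """
    frames = []
    for i in range(n):
        if i == 0:
            frames.append("I")
        elif i % m == 0:
            frames.append("P")
        else:
            frames.append("B")
    return "".join(frames)

print(gop_pattern())  # IBBPBBPBBPBB
```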
This document discusses digital video codecs and compression. It begins by defining pixel resolutions for standard definition, high definition, and digital cinema. It then covers CMOS image sensors used for HD, 2K and 4K capture and explains intra-frame and inter-frame compression. The document provides an example of the Apple ProRes 422 codec and analyzes its key attributes. It also discusses interlaced vs progressive scanning, picture impairments from compression, and digital cinema standards, and predicts that technological advances will gradually ease the demands placed on compression.
Video is recorded as a sequence of images called frames. A minimum of about 16 frames per second is needed for the appearance of smooth motion. Uncompressed digital video requires very large storage. Various techniques are used for compression, including filtering, downscaling resolution and frame rate, transforming frames to the frequency domain, quantizing coefficients, and inter-frame compression that stores only differences between frames. MPEG standards combine intra-frame and inter-frame compression with a system layer that multiplexes video and audio into a single stream for storage and transmission. MPEG-1 targets around 1.5 Mbps for Video CDs, while MPEG-2 is used for digital TV at higher bitrates. Streaming video adapts to varying network speeds by buffering and dropping frames.
AI research is enabling more efficient video and voice codecs through techniques like generative models and deep learning. Qualcomm's latest research includes a neural video codec that achieves state-of-the-art compression rates compared to other learned video compression solutions. Their work on B-frame coding also provides improved rate-distortion results by extending neural P-frame codecs to allow for B-frame coding and interpolation. Future research aims to develop more efficient on-device deployment methods and semantically aware compression focused on regions of interest.
This document provides an overview of various video compression techniques and standards. It discusses fundamentals of digital video including frame rate, color resolution, spatial resolution, and image quality. It describes different compression techniques like intraframe, interframe, and lossy vs lossless. Key video compression standards discussed include MPEG-1, MPEG-2, MPEG-4, H.261, H.263 and JPEG for still image compression. Factors that impact compression like compression ratio, bit rate control, and real-time vs non-real-time are also summarized.
The document discusses security vulnerabilities in digital video broadcast chipsets used for digital satellite TV. It begins with background on digital satellite TV security, including how premium content is encrypted and control words are used for decryption. It then describes challenges with analyzing the security of proprietary broadcast chipsets, which implement encryption pairing functions in hardware. The document outlines researchers' approach to gathering information through documentation and reverse engineering to analyze chipset security implementations and identify weaknesses.
The document provides an overview of MPEG-4, a standard for multimedia delivery. Key points include:
- MPEG-4 offers advanced audio and video codecs as well as tools to combine media like graphics, text, and interactivity.
- Its codecs like AVC and AAC provide high compression efficiency, enabling new applications from HDTV to mobile video.
- MPEG-4's development process considers technical merit through collaboration of hundreds of organizations.
- The standard specifies interoperable bitstreams while allowing competitive encoder implementations.
This document discusses image compression using the discrete cosine transform (DCT). It begins by explaining the 1D DCT and how it converts a signal into elementary frequency components. It then shows how to compute the 1D and 2D DCT using Mathematica functions. The 2D DCT is computed by applying the 1D DCT to rows and then columns of an image. Examples compress a test image and recover it to demonstrate the process works as expected.
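The same rows-then-columns construction is easy to verify numerically; here is a sketch in Python (using SciPy rather than the Mathematica functions the document itself uses):

```python
import numpy as np
from scipy.fft import dct, dctn

image = np.random.default_rng(0).random((8, 8))

# 2-D DCT computed separably: 1-D DCT along rows, then along columns...
rows_then_cols = dct(dct(image, axis=1, norm="ortho"), axis=0, norm="ortho")

# ...matches the direct 2-D transform.
assert np.allclose(rows_then_cols, dctn(image, norm="ortho"))
```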
DVB-S2 is the second-generation specification for satellite broadcasting developed by DVB in 2003. It uses more advanced channel coding (LDPC codes) and modulation formats (QPSK, 8PSK, 16APSK, 32APSK) for improved transmission performance, achieving up to a 30% increase in capacity over DVB-S. DVB-S2 allows for backwards compatibility with DVB-S receivers and uses adaptive coding and modulation to optimize transmission for different users and conditions. It provides high flexibility in modulation, coding rates and input stream formats to suit various broadcast and interactive applications.
The STi7167 is an integrated system-on-chip suitable for set-top boxes that features an advanced high-definition video decoder supporting formats like H.264, VC-1 and MPEG-2. It integrates either a DVB-C or DVB-T demodulator and includes a 450MHz ST40 applications processor. The chip provides extensive connectivity options and security features for compliant conditional access systems. It allows for low cost and small size set-top box designs for cable or terrestrial networks with advanced HD decoding and display capabilities.
1) The document describes a modification to the Huffman coding used in JPEG image compression. It proposes pairing each non-zero DCT coefficient with the run-length of subsequent (rather than preceding) zero coefficients.
2) This allows using separate optimized Huffman code tables for each DCT coefficient position, improving compression by 10-15% over the standard JPEG approach.
3) The decoding procedure is not changed and remains a single pass like JPEG, avoiding increased complexity.
This document provides implementation guidelines for the DVB Simulcrypt standard. It describes the architecture and protocols involved in simulcrypt systems, including the ECMG protocol between the security client system and conditional access modules, and the EMMG/PDG protocol between conditional access modules and multiplex equipment. The document outlines differences between version 1 and 2 of the standards, and provides recommendations for compliance. It also includes detailed state diagrams and descriptions of the protocols to clarify permissible messages in each state.
1) The document discusses quantization and pulse code modulation (PCM) in voice signal encoding. PCM assigns 256 possible values to digitally represent analog voice samples, divided into chords and steps on a linear scale.
2) A logarithmic quantization scale is better than a linear one for voice signals, as it allocates more quantization steps to lower amplitudes prevalent in speech. This compression and expansion technique is called companding.
3) Quantization error occurs when samples with different amplitudes are assigned the same digital value, distorting the reconstructed waveform. Companding helps maintain a higher signal-to-noise ratio especially for low-amplitude speech segments.
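A sketch of the mu-law compression curve described above, for a signal normalized to [-1, 1] with the standard mu = 255:

```python
import numpy as np

def mu_law_compress(x: np.ndarray, mu: float = 255.0) -> np.ndarray:
    """mu-law companding of a signal normalized to [-1, 1].

    Small amplitudes get proportionally more of the output range, so the
    quantizer spends more of its 256 levels where speech energy lives.
    """
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

x = np.array([0.01, 0.1, 1.0])
print(mu_law_compress(x))  # small inputs are boosted: ~[0.23, 0.59, 1.0]
```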
The document provides implementation guidelines for using the DVB Simulcrypt standard, including describing the architecture and protocols, clarifying differences between protocol versions, explaining state diagrams and error handling, and providing recommendations for redundancy management between system components. It aims to facilitate reliable implementation of the DVB Simulcrypt model and interoperable interfaces between conditional access systems.
Euler's theorem states that for any connected plane graph, the number of vertices (v) minus the number of edges (e) plus the number of faces (f) equals 2. The document proves this by considering a spanning tree (T) within the graph and the corresponding spanning tree (D) of its dual, showing that the edge counts of T and D sum to the total number of edges (e) of the original graph. Applications of the theorem include that every plane graph contains a vertex of degree at most 5, and that any finite set of points not all on a line determines a line containing exactly two of the points.
This document discusses quantization in analog-to-digital conversion. It begins by outlining the three processes of A/D conversion: sampling, quantization, and binary encoding. It then discusses uniform quantization and how to determine the sampling period and quantization interval. Next, it covers non-uniform quantization including μ-law quantization. It provides examples of quantizing audio signals and compares the performance of uniform and non-uniform quantization. Finally, it briefly discusses binary encoding and provides sample MATLAB code for quantization.
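As a minimal illustration of the uniform case, a mid-rise quantizer sketch (function name and parameters are illustrative):

```python
import numpy as np

def uniform_quantize(x: np.ndarray, n_bits: int, x_max: float) -> np.ndarray:
    """Mid-rise uniform quantizer over [-x_max, x_max] with 2**n_bits levels."""
    delta = 2 * x_max / 2**n_bits                  # quantization interval
    idx = np.clip(np.floor(x / delta), -2**(n_bits - 1), 2**(n_bits - 1) - 1)
    return (idx + 0.5) * delta                     # reconstruct at interval midpoints

t = np.linspace(0, 1, 100)
signal = np.sin(2 * np.pi * 5 * t)
print(np.unique(uniform_quantize(signal, n_bits=3, x_max=1.0)))  # up to 8 levels
```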
1) Reed-Solomon codes are a type of error-correcting code invented in 1960 that can detect and correct multiple symbol errors. They work by encoding data into redundant symbols that can be used to detect and locate errors.
2) Reed-Solomon codes are particularly good at correcting burst errors, where a run of symbols are corrupted together, because they can correct a set number of errors regardless of where in the codeword they occur.
3) The error correction capability increases with lower code rates (more redundant symbols) and longer block lengths, as this averages the noise over more symbols and makes it less likely for a noise burst to corrupt too many consecutive symbols.
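The correction capability is easy to state numerically: an RS(n, k) code corrects t = (n - k) / 2 symbol errors. A tiny sketch:

```python
def rs_correctable_errors(n: int, k: int) -> int:
    """An RS(n, k) code adds n - k parity symbols and corrects t = (n-k)//2
    symbol errors anywhere in the block, including a burst spanning t symbols."""
    return (n - k) // 2

# The classic RS(255, 223) code over GF(2^8): 32 parity bytes, corrects 16.
print(rs_correctable_errors(255, 223))  # 16
```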
This document describes the head-end architecture and synchronization for digital video broadcasting using SimulCrypt. It defines components such as the event information scheduler, SimulCrypt synchronizer, entitlement control message generator, and multiplexer. Interfaces between these components are also described, including processes for establishing and closing channels and streams, as well as bandwidth allocation and handling errors.
This document provides the European standard for the frame structure, channel coding and modulation for a second generation digital transmission system for cable systems (DVB-C2). It defines the system architecture and specifications for input processing, bit-interleaved coding and modulation, data slice packet generation, layer 1 part 2 signalling, frame building, and OFDM generation. The standard aims to provide improved performance for cable systems over the existing DVB-C standard.
This document discusses Euler's formula, which relates the number of vertices (V), edges (E), and faces (F) of a polyhedron. Through examples of attaching polygons and constructing polyhedra, it is shown that for any convex polyhedron, V - E + F = 2. Removing a face demonstrates that a similar formula holds for the resulting non-closed shapes, with V - E + F = 1. This insight shows that Euler's formula can distinguish polyhedra from other 3D shapes by calculating their Euler characteristic.
This document on the RSA cryptosystem discusses:
1) The RSA cryptosystem uses a public and private key to encrypt and decrypt messages based on large prime number factorization.
2) An example is provided where a message is encrypted with a public key and decrypted with a private key.
3) The security of RSA relies on the difficulty of factoring large numbers, as factorization algorithms take exponential time relative to the number of bits.
This document provides an introduction to Reed-Solomon codes, which are word-oriented, non-binary BCH codes that are simple, robust, and perform well for burst errors. Reed-Solomon codes use Galois field theory to encode data into blocks of length 2^m - 1 by adding 2t parity check words, allowing the correction of t errors. Encoding involves dividing the data by a generator polynomial to calculate parity bits, while decoding uses syndromes to find error locations and magnitudes to recover the original data. Key algorithms for decoding include Berlekamp-Massey for finding error locations, Chien search to factor roots, and Forney's algorithm to find error values.
This document discusses video compression algorithms used in standards like MPEG, explaining how compression works through motion estimation, the discrete cosine transform, quantization, and entropy coding to reduce spatial and temporal redundancy in video streams. It analyzes the tradeoff between compression ratio and quality, and provides an overview of common video compression standards and their applications.
This document provides an overview of the Linux operating system and fundamentals for learning Linux, including:
- Details on Linux distributions like Debian, Red Hat, and SUSE and their licensing models.
- A brief history of open source software development and benefits of the open source model.
- Essentials of the Linux operating system like filesystem structure, shell commands, file permissions and redirection.
- Information on Linux certification programs.
- Setup instructions for a Linux emulator for the fundamentals course.
- Appendices on Linux Professional Institute certification levels and the Linux kernel.
This document discusses three methods for reducing the bit-rate of transmitted video streams: 1) Time-shifting of MPEG-2 packets, which smooths out variable bit-rates without changing individual encoding rates; 2) Open loop transrating, which uses encoding tools to recompress streams at lower rates in a non-reversible way; 3) Closed loop transrating, which iteratively adjusts rates using feedback to maintain quality. These techniques help network operators optimize bandwidth usage and revenues by controlling streaming rates to match infrastructure limits and service pricing models.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from downtime in 1 minute = $5-$10 thousand dollars. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Biomedical Knowledge Graphs for Data Scientists and Bioinformaticians
video_compression_2004
1. Video Compression
MIT 6.344, Spring 2004
John G. Apostolopoulos
Streaming Media Systems Group
Hewlett-Packard Laboratories
japos@hpl.hp.com
2. Overview of Next Three Lectures
• Video Compression (today: Thurs, 4/22)
  – Principles and practice of video coding
  – Basics behind MPEG compression algorithms
  – Current image & video compression standards
• Video Communication & Video Streaming I (Tues, 4/27)
  – Video application contexts & examples: DVD and Digital TV
  – Challenges in video streaming over the Internet
  – Techniques for overcoming these challenges
• Video Communication & Video Streaming II (Thurs, 4/29)
  – Video over lossy packet networks and wireless links → Error-resilient video communications
3. Outline of Today's Lecture
• Motivation for compression
• Brief review of generic compression system (from prior lecture)
• Brief review of image compression (from last lecture)
• Video compression
– Exploit temporal dimension of video signal
– Motion-compensated prediction
– Generic (MPEG-type) video coder architecture
– Scalable video coding
• Overview of current video compression standards
– What do the standards specify?
– Frame-based video coding: MPEG-1/2/4, H.261/3/4
– Object-based video coding: MPEG-4
4. Motivation for Compression: Example of HDTV Video Signal
• Problem:
  – Raw video contains an immense amount of data
  – Communication and storage capabilities are limited and expensive
• Example HDTV video signal:
  – 720x1280 pixels/frame, progressive scanning at 60 frames/s:
    (720 × 1280 pixels/frame) × (60 frames/s) × (3 colors/pixel) × (8 bits/color) ≈ 1.3 Gb/s
  – 20 Mb/s HDTV channel bandwidth
  → Requires compression by a factor of ~70 (equivalent to 0.35 bits/pixel; checked in the snippet below)
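A quick way to sanity-check these numbers is to compute them directly. The short Python snippet below (a minimal sketch, using the HDTV parameters from this slide) reproduces the raw bit rate and the required compression factor:

```python
# Raw bit rate of the example HDTV signal and the compression factor
# needed to fit a 20 Mb/s channel (parameters from the slide above).
width, height = 1280, 720          # pixels per frame
frame_rate = 60                    # frames per second (progressive)
colors, bits_per_color = 3, 8      # RGB, 8 bits per component

raw_bps = width * height * frame_rate * colors * bits_per_color
channel_bps = 20e6                 # 20 Mb/s HDTV channel

print(f"Raw rate: {raw_bps / 1e9:.2f} Gb/s")                # ~1.33 Gb/s
print(f"Compression factor: {raw_bps / channel_bps:.0f}x")  # ~66x
print(f"Bits/pixel after compression: "
      f"{channel_bps / (width * height * frame_rate):.2f}") # ~0.36
```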
5. Achieving Compression
• Reduce redundancy and irrelevancy
• Sources of redundancy
– Temporal: Adjacent frames highly correlated
  – Spatial: Nearby pixels are often correlated with each other
  – Color space: RGB components are correlated among themselves
→ Relatively straightforward to exploit
• Irrelevancy
– Perceptually unimportant information
→ Difficult to model and exploit
6. Spatial and Temporal Redundancy
• Why can video be compressed?
– Video contains much spatial and temporal redundancy.
• Spatial redundancy: Neighboring pixels are similar
• Temporal redundancy: Adjacent frames are similar
Compression is achieved by exploiting the spatial and temporal redundancy inherent to video.
7. Outline of Today's Lecture
• Motivation for compression
• Brief review of generic compression system (from prior lecture)
• Brief review of image compression (from last lecture)
• Video compression
– Exploit temporal dimension of video signal
– Motion-compensated prediction
– Generic (MPEG-type) video coder architecture
– Scalable video coding
• Overview of current video compression standards
– What do the standards specify?
– Frame-based video coding: MPEG-1/2/4, H.261/3/4
– Object-based video coding: MPEG-4
8. Generic Compression System
Original Signal → Representation (Analysis) → Quantization → Binary Encoding → Compressed Bitstream
A compression system is composed of three key building blocks:
• Representation
– Concentrates important information into a few parameters
• Quantization
– Discretizes parameters
• Binary encoding
– Exploits non-uniform statistics of quantized parameters
– Creates bitstream for transmission
9. Generic Compression System (cont.)
Original Signal → Representation (Analysis) → Quantization → Binary Encoding → Compressed Bitstream
                  (generally lossless)        (lossy)         (lossless)
• Generally, the only operation that is lossy is the quantization stage (sketched below)
• The fact that all the loss (distortion) is localized to a single operation greatly simplifies system design
• Can design loss to exploit human visual system (HVS) properties
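To see why localizing the loss to quantization simplifies things, here is a minimal sketch of a uniform scalar quantizer: the rounding is the only place information is discarded, and the reconstruction error is bounded by half the step size.

```python
import numpy as np

# Minimal sketch: uniform scalar quantization is the single lossy step.
# Everything else in the chain (transform, entropy coding) is invertible.
def quantize(x, step):
    """Map values to integer indices (lossy: rounding discards detail)."""
    return np.round(x / step).astype(int)

def dequantize(q, step):
    """Reconstruct approximate values from the indices."""
    return q * step

x = np.array([0.7, -3.2, 5.05, 0.1])
q = quantize(x, step=0.5)
x_hat = dequantize(q, step=0.5)
print(q)      # [ 1 -6 10  0]
print(x_hat)  # [ 0.5 -3.   5.   0. ]  -- error bounded by step/2
```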
10. Generic Compression System (cont.)
Source Encoder: Original Signal → Representation (Analysis) → Quantization → Binary Encoding → Compressed Bitstream → Channel
Source Decoder: Channel → Binary Decoding → Inverse Quantization → Representation (Synthesis) → Reconstructed Signal
• Source decoder performs the inverse of each of the three operations
11. Review of Image Compression
Original Image → RGB to YUV → Block DCT → Quantization → Runlength & Huffman Coding → Compressed Bitstream
• Coding an image (single frame):
– RGB to YUV color-space conversion
– Partition image into 8x8-pixel blocks
– 2-D DCT of each block
– Quantize each DCT coefficient
  – Runlength and Huffman code the nonzero quantized DCT coefficients
→ Basis for the JPEG Image Compression Standard
→ JPEG-2000 uses wavelet transform and arithmetic coding
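A minimal sketch of the transform-plus-quantization core of this pipeline (color conversion, zigzag scanning and runlength/Huffman coding are omitted; the single quantization step Q is illustrative, whereas JPEG uses an 8x8 quantization table):

```python
import numpy as np
from scipy.fft import dctn, idctn

# JPEG-like sketch on one 8x8 grayscale block.
Q = 16  # single quantization step for simplicity

def code_block(block):
    """2-D DCT of an 8x8 block followed by scalar quantization."""
    coeffs = dctn(block, norm='ortho')       # concentrate energy
    return np.round(coeffs / Q).astype(int)  # the lossy step

def decode_block(qcoeffs):
    """Dequantize and inverse 2-D DCT."""
    return idctn(qcoeffs * Q, norm='ortho')

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
q = code_block(block)
rec = decode_block(q)
print("nonzero coefficients:", np.count_nonzero(q), "of 64")
print("max reconstruction error:", np.abs(block - rec).max())
```

On typical image blocks most quantized coefficients are zero, which is exactly what the runlength and Huffman stages exploit.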
12. Outline of Today's Lecture
• Motivation for compression
• Brief review of generic compression system (from prior lecture)
• Brief review of image compression (from last lecture)
• Video compression
– Exploit temporal dimension of video signal
– Motion-compensated prediction
– Generic (MPEG-type) video coder architecture
– Scalable video coding
• Overview of current video compression standards
– What do the standards specify?
– Frame-based video coding: MPEG-1/2/4, H.261/3/4
– Object-based video coding: MPEG-4
13. Video Compression
• Video: Sequence of frames (images) that are related
• Related along the temporal dimension
– Therefore temporal redundancy exists
• Main addition over image compression
– Temporal redundancy
→ Video coder must exploit the temporal redundancy
14. Temporal Processing
• Usually high frame rate: Significant temporal redundancy
• Possible representations along temporal dimension:
  – Transform/subband methods
    – Good for textbook case of constant-velocity uniform global motion
    – Inefficient for nonuniform motion, i.e. real-world motion
    – Requires large number of frame stores
    – Leads to delay (memory cost may also be an issue)
  – Predictive methods
    – Good performance using only 2 frame stores
    – However, simple frame differencing is not enough…
15. Video Compression
• Goal: Exploit the temporal redundancy
• Predict current frame based on previously coded frames
• Three types of coded frames:
  – I-frame: Intra-coded frame, coded independently of all other frames
  – P-frame: Predictively coded frame, coded based on previously coded frame
  – B-frame: Bi-directionally predicted frame, coded based on both previous and future coded frames
[Figure: example I-frame, P-frame and B-frame with their prediction dependencies]
16. Temporal Processing: Motion-Compensated Prediction
• Simple frame differencing fails when there is motion
• Must account for motion
→ Motion-compensated (MC) prediction
• MC-prediction generally provides significant improvements
• Questions:
– How can we estimate motion?
– How can we form MC-prediction?
17. Temporal Processing: Motion Estimation
• Ideal situation:
– Partition video into moving objects
– Describe object motion
→ Generally very difficult
• Practical approach: Block-Matching Motion Estimation
– Partition each frame into blocks, e.g. 16x16 pixels
– Describe motion of each block
→ No object identification required
→ Good, robust performance
18. Block-Matching Motion Estimation
[Figure: reference frame and current frame, each partitioned into 16 numbered blocks; a motion vector (mv1, mv2) relates each block in the current frame to its best-matching block in the reference frame]
• Assumptions:
  – Translational motion within block: f(n1, n2, k_cur) = f(n1 − mv1, n2 − mv2, k_ref)
  – All pixels within each block have the same motion
• ME Algorithm:
  1) Divide current frame into non-overlapping N1xN2 blocks
  2) For each block, find the best matching block in reference frame
• MC-Prediction Algorithm:
  – Use best matching blocks of reference frame as prediction of blocks in current frame
19. Block Matching: Determining the Best Matching Block
• For each block in the current frame, search for the best matching block in the reference frame
  – Metrics for determining "best match":
    MSE = Σ_{(n1,n2)∈Block} [f(n1, n2, k_cur) − f(n1 − mv1, n2 − mv2, k_ref)]²
    MAE = Σ_{(n1,n2)∈Block} |f(n1, n2, k_cur) − f(n1 − mv1, n2 − mv2, k_ref)|
  – Candidate blocks: All blocks in, e.g., a (±32, ±32)-pixel search area
  – Strategies for searching candidate blocks for best match
    – Full search: Examine all candidate blocks (a sketch follows below)
    – Partial (fast) search: Examine a carefully selected subset
• Estimate of motion for best matching block: "motion vector"
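To make the full search concrete, here is a minimal NumPy sketch of block matching with the MAE/MSE metrics above. The function name and the small ±8 default search range are illustrative (the slide's example uses a ±32 area); the sign convention matches the translational-motion model on the previous slide.

```python
import numpy as np

def full_search(cur_block, ref, top, left, search=8, metric="mae"):
    """Full-search block matching: return the motion vector (mv1, mv2)
    minimizing the chosen metric over a (±search, ±search) area.

    cur_block: NxN block from the current frame
    ref:       reference frame (2-D array)
    top, left: position of the block in the current frame
    """
    n = cur_block.shape[0]
    best_mv, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top - dy, left - dx   # candidate block in reference frame
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue                 # candidate falls outside the frame
            diff = cur_block - ref[y:y + n, x:x + n]
            cost = np.abs(diff).sum() if metric == "mae" else (diff ** 2).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

With 16x16 blocks and a ±32 range this examines 65 × 65 = 4,225 candidates per block, which is what motivates the fast searches on the following slides.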
20. Motion Vectors and Motion Vector Field
• Motion vector
  – Expresses the relative horizontal and vertical offsets (mv1, mv2), or motion, of a given block from one frame to another
  – Each block has its own motion vector
• Motion vector field
  – Collection of motion vectors for all the blocks in a frame
21. Example of Fast Motion Estimation Search: 3-Step (Log) Search
• Goal: Reduce number of search points
• Example: (±7, ±7) search area
• Dots represent search points
• Search performed in 3 steps (coarse-to-fine; sketched below):
  Step 1: ±4 pixels
  Step 2: ±2 pixels
  Step 3: ±1 pixel
• Best match is found at each step
• Next step: Search is centered around the best match of prior step
• Speedup increases for larger search areas
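A sketch of the 3-step search under the same MAE cost as the full search above; function names are illustrative:

```python
import numpy as np

def block_cost(cur_block, ref, y, x):
    """MAE between the current block and the reference block at (y, x);
    returns infinity if the candidate falls outside the frame."""
    n = cur_block.shape[0]
    if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
        return np.inf
    return np.abs(cur_block - ref[y:y + n, x:x + n]).sum()

def three_step_search(cur_block, ref, top, left):
    """3-step (log) search over a (±7, ±7) area: step sizes 4, 2, 1,
    each step centered on the best match of the previous step."""
    mv = (0, 0)
    for step in (4, 2, 1):
        candidates = [(mv[0] + dy, mv[1] + dx)
                      for dy in (-step, 0, step) for dx in (-step, 0, step)]
        mv = min(candidates,
                 key=lambda m: block_cost(cur_block, ref, top - m[0], left - m[1]))
    return mv
```

This checks at most 25 search points instead of the 225 a full (±7, ±7) search would require.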
22. Motion Vector Precision?
• Motivation:
  – Motion is not limited to integer-pixel offsets
  – However, video is only known at discrete pixel locations
  – To estimate sub-pixel motion, frames must be spatially interpolated
• Fractional MVs are used to represent the sub-pixel motion
• Improved performance (extra complexity is worthwhile)
• Half-pixel ME used in most standards: MPEG-1/2/4
• Why are half-pixel motion vectors better?
  – Can capture half-pixel motion
  – Averaging effect (from spatial interpolation) reduces prediction error → Improved prediction
  – For noisy sequences, averaging effect reduces noise → Improved compression
23. Practical Half-Pixel Motion Estimation Algorithm
• Half-pixel ME (coarse-fine) algorithm:
  1) Coarse step: Perform integer motion estimation on blocks; find best integer-pixel MV
  2) Fine step: Refine estimate to find best half-pixel MV
    a) Spatially interpolate the selected region in reference frame
    b) Compare current block to interpolated reference frame block
    c) Choose the integer or half-pixel offset that provides best match
• Typically, bilinear interpolation is used for spatial interpolation (sketched below)
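One way to realize this coarse-fine algorithm is to interpolate the reference onto a 2x grid and test the eight half-pel neighbors of the best integer MV. The sketch below makes simplifying assumptions (whole-frame interpolation, MAE cost) rather than reproducing any particular standard's interpolation filter:

```python
import numpy as np

def bilinear_upsample2x(frame):
    """Upsample a frame by 2 in each dimension with bilinear interpolation,
    so half-pixel positions become integer positions in the result."""
    h, w = frame.shape
    up = np.zeros((2 * h - 1, 2 * w - 1))
    up[::2, ::2] = frame                                   # integer pixels
    up[1::2, ::2] = (frame[:-1, :] + frame[1:, :]) / 2     # vertical half-pels
    up[::2, 1::2] = (frame[:, :-1] + frame[:, 1:]) / 2     # horizontal half-pels
    up[1::2, 1::2] = (frame[:-1, :-1] + frame[1:, :-1] +
                      frame[:-1, 1:] + frame[1:, 1:]) / 4  # diagonal half-pels
    return up

def refine_half_pel(cur_block, ref_up, top, left, int_mv):
    """Given the best integer MV, test the surrounding half-pel offsets
    on the 2x-interpolated reference and keep the best match (MAE)."""
    n = cur_block.shape[0]
    best, best_cost = int_mv, np.inf
    for d1 in (-0.5, 0.0, 0.5):
        for d2 in (-0.5, 0.0, 0.5):
            mv = (int_mv[0] + d1, int_mv[1] + d2)
            y = int(round(2 * (top - mv[0])))   # position on the 2x grid
            x = int(round(2 * (left - mv[1])))
            if y < 0 or x < 0 or y + 2 * n > ref_up.shape[0] or x + 2 * n > ref_up.shape[1]:
                continue
            cand = ref_up[y:y + 2 * n:2, x:x + 2 * n:2]  # every other sample
            cost = np.abs(cur_block - cand).sum()
            if cost < best_cost:
                best_cost, best = cost, mv
    return best
```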
24. Example: MC-Prediction for Two Consecutive Frames
[Figure: previous frame (reference frame) and current frame (to be predicted), each partitioned into 16 numbered blocks; the predicted frame is assembled from the best-matching reference blocks]
25. Example: MC-Prediction for Two Consecutive Frames (cont.)
[Figure: prediction of current frame, and the prediction error (residual) that remains after MC-prediction]
26. Block Matching Algorithm: Summary
• Issues:
  – Block size?
  – Search range?
  – Motion vector accuracy?
• Motion typically estimated only from luminance
• Advantages:
  – Good, robust performance for compression
  – Resulting motion vector field is easy to represent (one MV per block) and useful for compression
  – Simple, periodic structure, easy VLSI implementations
• Disadvantages:
  – Assumes translational motion model → Breaks down for more complex motion
  – Often produces blocking artifacts (OK for coding with Block DCT)
27. Bi-Directional MC-Prediction
[Figure: previous frame, current frame and future frame, each partitioned into 16 numbered blocks; blocks in the current frame are predicted from the previous frame, the future frame, or both]
• Bi-Directional MC-Prediction is used to estimate a block in the current frame from a block in:
  1) Previous frame
  2) Future frame
  3) Average of a block from the previous frame and a block from the future frame
  4) Neither, i.e. code current block without prediction
→ The encoder picks whichever of these four modes predicts best (see the sketch below)
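A minimal sketch of this mode selection, assuming block matching has already produced the best forward and backward candidate blocks; the function name is illustrative:

```python
import numpy as np

def predict_b_block(cur, prev_cand, fut_cand):
    """Choose among the four B-frame prediction modes for one block:
    forward (previous frame), backward (future frame), their average,
    or intra (no prediction). Returns the mode and the residual.

    cur, prev_cand, fut_cand: the current block and its best-matching
    blocks from the previous and future frames (found by separate ME).
    """
    candidates = {
        "forward":  prev_cand,
        "backward": fut_cand,
        "average":  (prev_cand + fut_cand) / 2,
        "intra":    np.zeros_like(cur),   # code the block itself
    }
    mode = min(candidates, key=lambda m: np.abs(cur - candidates[m]).sum())
    return mode, cur - candidates[mode]
```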
28. MC-Prediction and Bi-Directional MC-Prediction (P- and B-frames)
• Motion compensated prediction: Predict the current frame based on reference frame(s) while compensating for the motion
• Examples of block-based motion-compensated prediction (P-frame) and bi-directional prediction (B-frame):
[Figure: a P-frame predicted from the previous frame, and a B-frame predicted from both the previous and future frames]
29. Video Compression
• Main addition over image compression:
  – Exploit the temporal redundancy
• Predict current frame based on previously coded frames
• Three types of coded frames:
  – I-frame: Intra-coded frame, coded independently of all other frames
  – P-frame: Predictively coded frame, coded based on previously coded frame
  – B-frame: Bi-directionally predicted frame, coded based on both previous and future coded frames
[Figure: example I-frame, P-frame and B-frame]
30. Example Use of I-, P-, B-frames: MPEG Group of Pictures (GOP)
• Arrows show prediction dependencies between frames
I0 B1 B2 P3 B4 B5 P6 B7 B8 I9
MPEG GOP
31. Summary of Temporal Processing
• Use MC-prediction (P and B frames) to reduce temporal redundancy
• MC-prediction usually performs well; in compression we have a second chance to recover when it performs badly
• MC-prediction yields:
  – Motion vectors
  – MC-prediction error or residual → Code error with conventional image coder
• Sometimes MC-prediction may perform badly
  – Examples: Complex motion, new imagery (occlusions)
  – Approach:
    1. Identify frame or individual blocks where prediction fails
    2. Code without prediction
32. Basic Video Compression Architecture
• Exploiting the redundancies:
– Temporal: MC-prediction (P and B frames)
– Spatial: Block DCT
– Color: Color space conversion
• Scalar quantization of DCT coefficients
• Zigzag scanning, runlength and Huffman coding of the nonzero quantized DCT coefficients
33. Example Video Encoder
[Block diagram: input video → RGB to YUV → (subtract MC-prediction to form residual) → DCT → Quantize → Huffman Coding → Buffer → output bitstream; buffer fullness feeds back to control the quantizer. A decoding loop (Inverse Quantize → Inverse DCT → add MC-prediction) reconstructs each frame into a Frame Store; Motion Estimation compares the input against the previous reconstructed frame to produce MV data, and Motion Compensation forms the MC-prediction. MV data is also placed in the output bitstream.]
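The block diagram can be summarized in code. Below is a skeletal sketch of the P-frame encoding loop, reusing the full_search block matcher sketched earlier; Huffman coding, rate control (the buffer-fullness feedback), color conversion and B-frames are omitted, and frame dimensions are assumed to be multiples of the block size:

```python
import numpy as np
from scipy.fft import dctn, idctn

BLOCK, Q = 16, 8  # illustrative block size and quantization step

def encode_p_frame(cur, recon_prev):
    """Encode one P-frame against the previously *reconstructed* frame.
    Returns (motion vectors, quantized residual blocks) and the new
    reconstructed frame kept in the encoder's frame store."""
    h, w = cur.shape
    recon = np.empty_like(cur)
    mvs, residuals = {}, {}
    for top in range(0, h, BLOCK):
        for left in range(0, w, BLOCK):
            blk = cur[top:top + BLOCK, left:left + BLOCK]
            (dy, dx), _ = full_search(blk, recon_prev, top, left)
            pred = recon_prev[top - dy:top - dy + BLOCK,
                              left - dx:left - dx + BLOCK]
            q = np.round(dctn(blk - pred, norm='ortho') / Q).astype(int)
            mvs[(top, left)] = (dy, dx)
            residuals[(top, left)] = q
            # Decoding loop: the encoder tracks exactly what the decoder
            # will reconstruct, so predictions never drift apart.
            recon[top:top + BLOCK, left:left + BLOCK] = \
                pred + idctn(q * Q, norm='ortho')
    return mvs, residuals, recon
```

Note that prediction is formed from the reconstructed (not original) previous frame, mirroring the inverse quantize / inverse DCT loop in the diagram.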
34. Example Video Decoder
[Block diagram: input bitstream → Buffer → Huffman Decoder → Inverse Quantize → Inverse DCT → (add MC-prediction to the residual) → reconstructed frame → YUV to RGB → output video. Motion Compensation uses the decoded MV data and the previous reconstructed frame from the Frame Store to form the MC-prediction.]
35. Outline of Today's Lecture
• Motivation for compression
• Brief review of generic compression system (from prior lecture)
• Brief review of image compression (from last lecture)
• Video compression
– Exploit temporal dimension of video signal
– Motion-compensated prediction
– Generic (MPEG-type) video coder architecture
– Scalable video coding
• Overview of current video compression standards
– What do the standards specify?
– Frame-based video coding: MPEG-1/2/4, H.261/3/4
– Object-based video coding: MPEG-4
36. Motivation for Scalable Coding
Basic situation:
1. Diverse receivers may request the same video
  – Different bandwidths, spatial resolutions, frame rates, computational capabilities
2. Heterogeneous networks and a priori unknown network conditions
  – Wired and wireless links, time-varying bandwidths
→ When you originally code the video you don't know which client or network situation will exist in the future
→ Probably have multiple different situations, each requiring a different compressed bitstream
→ Need a different compressed video matched to each situation
• Possible solutions:
  1. Compress & store MANY different versions of the same video
  2. Real-time transcoding (e.g. decode/re-encode)
  3. Scalable coding
37. Scalable Video Coding
• Scalable coding:
  – Decompose video into multiple layers of prioritized importance
  – Code layers into base and enhancement bitstreams
  – Progressively combine one or more bitstreams to produce different levels of video quality
• Example of scalable coding with base and two enhancement layers: Can produce three different qualities, in increasing order:
  1. Base layer
  2. Base + Enh1 layers
  3. Base + Enh1 + Enh2 layers
• Scalability with respect to: Spatial or temporal resolution, bit rate, computation, memory
38. Example of Scalable Coding
• Encode image/video into three layers: Base, Enh1, Enh2
• Low-bandwidth receiver: Send only the Base layer → decoder produces low-res video
• Medium-bandwidth receiver: Send Base & Enh1 layers → decoder produces med-res video
• High-bandwidth receiver: Send all three layers → decoder produces high-res video
• Can adapt to different clients and network situations
39. Scalable Video Coding (cont.)
• Three basic types of scalability (refine video quality along three different dimensions):
  – Temporal scalability → Temporal resolution
  – Spatial scalability → Spatial resolution
  – SNR (quality) scalability → Amplitude resolution
• Each type of scalable coding provides scalability of one dimension of the video signal
  – Can combine multiple types of scalability to provide scalability along multiple dimensions
40. Scalable Coding: Temporal Scalability
• Temporal scalability: Based on the use of B-frames to refine the temporal resolution
  – B-frames are dependent on other frames
  – However, no other frame depends on a B-frame
  – Each B-frame may be discarded without affecting other frames (see the sketch below)
I0 B1 B2 P3 B4 B5 P6 B7 B8 I9
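Because nothing references a B-frame, a sender can realize temporal scalability simply by filtering B-frames out. A minimal sketch using the GOP shown above (frame labels are illustrative):

```python
# Temporal scalability by dropping B-frames from a GOP.
gop = ["I0", "B1", "B2", "P3", "B4", "B5", "P6", "B7", "B8", "I9"]

# Base temporal layer: I- and P-frames only; still decodable on its own
# because no I- or P-frame references a B-frame.
base_layer = [f for f in gop if not f.startswith("B")]
print(base_layer)   # ['I0', 'P3', 'P6', 'I9'] -- reduced frame rate
```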
41. Scalable Coding: Spatial Scalability
• Spatial scalability: Based on refining the spatial resolution
  – Base layer is a low-resolution version of the video
  – Enh1 contains the coded difference between the upsampled base layer and the original video (sketched below)
  – Also called: Pyramid coding
[Block diagram: original video → downsample (↓2) → base-layer encoder/decoder → low-res video; the decoded base layer is upsampled (↑2) and subtracted from the original, the difference is coded as the enhancement layer, and the decoder adds it back to the upsampled base layer to obtain high-res video]
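A minimal pyramid-coding sketch; simple quantization stands in for the full encoders in the diagram, and the resampling filters are deliberately crude:

```python
import numpy as np

def down2(x):
    """Simple 2x downsample by averaging 2x2 blocks."""
    return (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2]) / 4

def up2(x):
    """Simple 2x upsample by pixel replication."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def q(x, step=4):
    """Stand-in lossy coder: quantize then dequantize."""
    return np.round(x / step) * step

video = np.random.default_rng(1).uniform(0, 255, (64, 64))

base = q(down2(video))        # base layer: low-res version
enh1 = q(video - up2(base))   # enhancement: residual vs upsampled base

low_res = base                # base-only receiver
high_res = up2(base) + enh1   # base + Enh1 receiver
print(np.abs(video - high_res).max())  # small: bounded by Enh1's quantizer
```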
42. Scalable Coding: SNR (Quality) Scalability
• SNR (Quality) Scalability: Based on refining the amplitude resolution
  – Base layer uses a coarse quantizer
  – Enh1 applies a finer quantizer to the difference between the original DCT coefficients and the coarsely quantized base layer coefficients (sketched below)
[Figure: base-layer I- and P-frames with enhancement-layer EI- and EP-frames; note that base and enhancement layers are at the same spatial resolution]
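A minimal SNR-scalability sketch on a block of DCT coefficients, with illustrative step sizes: a coarse quantizer for the base layer and a finer quantizer for the residual.

```python
import numpy as np

rng = np.random.default_rng(2)
coeffs = rng.normal(0, 50, (8, 8))   # stand-in DCT coefficients

Q_BASE, Q_ENH = 32, 8                # coarse and fine step sizes

base_q = np.round(coeffs / Q_BASE)   # base-layer indices
base_rec = base_q * Q_BASE           # coarsely reconstructed coefficients

enh_q = np.round((coeffs - base_rec) / Q_ENH)  # refine the difference
full_rec = base_rec + enh_q * Q_ENH            # base + Enh1 reconstruction

print(np.abs(coeffs - base_rec).max())  # error bounded by Q_BASE/2 = 16
print(np.abs(coeffs - full_rec).max())  # error bounded by Q_ENH/2  = 4
```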
43. Summary of Scalable Video Coding
• Three basic types of scalable video coding:
  – Temporal scalability
  – Spatial scalability
  – SNR (quality) scalability
• Scalable coding produces different layers with prioritized importance
• Prioritized importance is key for a variety of applications:
  – Adapting to different bandwidths, or client resources such as spatial or temporal resolution or computational power
  – Facilitates error-resilience by explicitly identifying most important and less important bits
44. Outline of Today's Lecture
• Motivation for compression
• Brief review of generic compression system (from prior lecture)
• Brief review of image compression (from last lecture)
• Video compression
– Exploit temporal dimension of video signal
– Motion-compensated prediction
– Generic (MPEG-type) video coder architecture
– Scalable video coding
• Overview of current video compression standards
– What do the standards specify?
– Frame-based video coding: MPEG-1/2/4, H.261/3/4
– Object-based video coding: MPEG-4
45. Motivation for Standards
• Goal of standards:
  – Ensuring interoperability: Enabling communication between devices made by different manufacturers
  – Promoting a technology or industry
  – Reducing costs
46. What do the Standards Specify?
Encoder → Bitstream → Decoder
47. What do the Standards Specify?
Encoder → Bitstream → Decoder (Decoding Process)
[Scope of standardization: the bitstream and the decoding process]
• Not the encoder
• Not the decoder
• Just the bitstream syntax and the decoding process (e.g. use IDCT, but not how to implement the IDCT)
→ Enables improved encoding & decoding strategies to be employed in a standard-compatible manner
48. Current Image and Video Compression Standards
Standard           Application                                             Bit Rate
JPEG               Continuous-tone still-image compression                 Variable
H.261              Video telephony and teleconferencing over ISDN          p x 64 kb/s
MPEG-1             Video on digital storage media (CD-ROM)                 1.5 Mb/s
MPEG-2             Digital Television                                      2-20 Mb/s
H.263              Video telephony over PSTN                               33.6-? kb/s
MPEG-4             Object-based coding, synthetic content, interactivity   Variable
JPEG-2000          Improved still image compression                        Variable
H.264 / MPEG-4 AVC Improved video compression                              10’s to 100’s kb/s
49. Comparing Current Video Compression Standards
• Based on the same fundamental building blocks
– Motion-compensated prediction (I, P, and B frames)
– 2-D Discrete Cosine Transform (DCT)
– Color space conversion
– Scalar quantization, runlengths, Huffman coding
• Additional tools added for different applications:
– Progressive or interlaced video
– Improved compression, error resilience, scalability, etc.
• MPEG-1/2/4, H.261/3/4: Frame-based coding
• MPEG-4: Object-based coding and Synthetic video
50. MPEG Group of Pictures (GOP) Structure
• Composed of I, P, and B frames
• Arrows show prediction dependencies
• Periodic I-frames enable random access into the coded bitstream
• Parameters: (1) Spacing between I frames, (2) number of B frames between I and P frames (a generator for this pattern is sketched below)
I0 B1 B2 P3 B4 B5 P6 B7 B8 I9  (MPEG GOP)
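The two GOP parameters fully determine the frame-type pattern. A small sketch (with illustrative parameter names, not standard terminology) that reproduces the GOP shown above:

```python
def gop_pattern(i_spacing=9, n_b=2):
    """Frame types for one GOP: an I-frame every `i_spacing` frames,
    with `n_b` B-frames between consecutive I/P anchor frames."""
    types = []
    for k in range(i_spacing + 1):
        if k % i_spacing == 0:
            types.append(f"I{k}")
        elif k % (n_b + 1) == 0:
            types.append(f"P{k}")
        else:
            types.append(f"B{k}")
    return types

print(gop_pattern())  # ['I0','B1','B2','P3','B4','B5','P6','B7','B8','I9']
```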
51. MPEG Structure
• MPEG codes video in a hierarchy of layers (the sequence layer is not shown):
[Figure: GOP Layer (I, P, and B pictures) → Picture Layer → Slice Layer → Macroblock Layer (1 MV, 4 8x8 blocks) → Block Layer (8x8 DCT)]
52. MPEG-2 Profiles and Levels
• Goal: To enable more efficient implementations for different applications (interoperability points)
  – Profile: Subset of the tools applicable for a family of applications
  – Level: Bounds on the complexity for any profile
[Figure: grid of Profile (Simple, Main, High) vs Level (Low, Main, High); HDTV uses Main Profile at High Level (MP@HL), DVD & SD Digital TV use Main Profile at Main Level (MP@ML)]
53. MPEG-4 Natural Video Coding
• Extension of MPEG-1/2-type algorithms to code arbitrarily shaped objects
[Figure: frame-based coding vs object-based coding; from the MPEG Committee]
• Basic idea: Extend Block-DCT and Block-ME/MC-prediction to code arbitrarily shaped objects
54. Example of MPEG-4 Scene (Object-based Coding)
[Figure: example MPEG-4 scene composed of separately coded audio-visual objects; from the MPEG Committee]
55. Example MPEG-4 Object Decoding Process
[Figure: MPEG-4 object decoding process; from the MPEG Committee]
56. Sprite Coding (Background Prediction)
• Sprite: Large background image
  – Hypothesis: Same background exists for many frames, with changes resulting from camera motion and occlusions
• One possible coding strategy:
  1. Code & transmit entire sprite once
  2. Only transmit camera motion parameters for each subsequent frame (see the sketch below)
• Significant coding gain for some scenes
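A minimal sketch of this strategy, assuming pure translational camera motion for illustration (real sprite coding also handles rotation, zoom and perspective warps); all names and numbers here are hypothetical:

```python
import numpy as np

# Transmit the large background sprite once, then reconstruct each frame's
# background by cropping the sprite at the transmitted camera offset.
sprite = np.random.default_rng(3).uniform(0, 255, (480, 1280))  # wide panorama
FRAME_H, FRAME_W = 240, 320

def background_for_frame(cam_y, cam_x):
    """Reconstruct a frame's background from the sprite and camera offset."""
    return sprite[cam_y:cam_y + FRAME_H, cam_x:cam_x + FRAME_W]

# Per-frame data is just two numbers instead of a whole background image:
camera_track = [(100, 0), (100, 8), (101, 16)]   # hypothetical pan
frames = [background_for_frame(y, x) for (y, x) in camera_track]
print(frames[0].shape)   # (240, 320)
```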
57. Sprite Coding Example
[Figure: sprite (background), foreground object, and the reconstructed frame; from the MPEG Committee]
58. Review of Today's Lecture
• Motivation for compression
• Brief review of generic compression system (from prior lecture)
• Brief review of image compression (from last lecture)
• Video compression
– Exploit temporal dimension of video signal
– Motion-compensated prediction
– Generic (MPEG-type) video coder architecture
– Scalable video coding
• Overview of current video compression standards
– What do the standards specify?
– Frame-based video coding: MPEG-1/2/4, H.261/3/4
– Object-based video coding: MPEG-4
59. References and Further Reading
General Video Compression References:
• J.G. Apostolopoulos and S.J. Wee, "Video Compression Standards", Wiley Encyclopedia of Electrical and Electronics Engineering, John Wiley & Sons, Inc., New York, 1999.
• V. Bhaskaran and K. Konstantinides, Image and Video Compression Standards: Algorithms and Architectures, Boston, Massachusetts: Kluwer Academic Publishers, 1997.
• J.L. Mitchell, W.B. Pennebaker, C.E. Fogg, and D.J. LeGall, MPEG Video Compression Standard, New York: Chapman & Hall, 1997.
• B.G. Haskell, A. Puri, and A.N. Netravali, Digital Video: An Introduction to MPEG-2, Boston: Kluwer Academic Publishers, 1997.
MPEG web site:
http://drogo.cselt.stet.it/mpeg