This document provides an overview of video compression fundamentals and standards. It discusses JPEG compression for still images and video conferencing specifications involving intra-frame and inter-frame coding. Several video compression standards are described, including H.261 for ISDN video phones using QCIF resolution, H.263 for low bit-rate video using resolutions up to 16CIF, and MPEG formats including MPEG-1, MPEG-2 for digital TV, and MPEG-4 for internet applications. Benchmark metrics for evaluating compressed video quality are also covered.
This document describes a project to design an H.264 video decoder using Verilog. It implements the key decoding blocks like Context-Based Adaptive Binary Arithmetic Coding (CABAC), inverse quantization, and inverse discrete cosine transform. CABAC is the entropy decoding method used in H.264 that is computationally intensive. The project develops hardware modules for these blocks to accelerate decoding and enable real-time performance. It presents the designs of the individual modules and simulation results showing their functionality. The goal is to improve on software implementations by using dedicated hardware for the critical decoding stages.
H.261 is a video coding standard published in 1990 by ITU-T for videoconferencing over ISDN networks. It uses techniques like DCT, motion compensation, and entropy coding to achieve compression ratios over 100:1 for video calling. H.261 remains widely used in applications like Windows NetMeeting and video conferencing standards H.320, H.323, and H.324.
Surveillance systems are expected to record video 24/7, which obviously requires huge storage space. Even though hard disks are cheaper today, the number of CCTV cameras is also increasing rapidly in order to boost security. Video compression is the only practical way to reduce the required storage space; however, existing video compression techniques are not adequate for modern digital surveillance monitoring because they still produce huge video streams. In this paper, a novel video compression technique is presented along with a critical analysis of the experimental results.
H.120 was the first digital video coding standard developed in 1984. H.261 in the late 1980s was the first widespread success and established the modern structure for video compression that is still used today. MPEG-1 and MPEG-2/H.262 built upon H.261 with improvements like bidirectional prediction and half-pixel motion compensation. H.263 further enhanced compression performance and is now dominant for videoconferencing, adding features such as overlapped block motion compensation.
The document discusses video compression standards for conferencing and internet video. It describes the components and evolution of standards including H.261, H.263, H.263+, MPEG-1, MPEG-2, and MPEG-4. It focuses on the basics of H.263 including its frame formats, picture and macroblock types, and motion vectors. It also explains the improvements of H.263+ over H.263 such as additional negotiable options.
A Hybrid DWT-SVD Method for Digital Video Watermarking Using Random Frame Sel...
This document presents a hybrid DWT-SVD method for digital video watermarking using random frame selection. The proposed method embeds a watermark into randomly selected video frames by applying discrete wavelet transform and singular value decomposition. The blue channel of selected frames is used for watermark embedding in the mid-frequency DWT coefficients. Experimental results show the method provides good imperceptibility and robustness against various attacks like compression, cropping, noise addition, contrast changes and tampering. The normalization coefficient between original and extracted watermarks is used to evaluate the performance under different attacks.
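The embedding rule described above — transform a frame, take the SVD of a subband, and perturb the singular values — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's algorithm: it uses a hand-rolled one-level Haar transform in place of a general DWT, a random matrix as a stand-in for a frame's blue channel, and assumes the extractor has access to the original singular values.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

rng = np.random.default_rng(0)
blue = rng.uniform(0, 255, size=(16, 16))   # stand-in for a frame's blue channel
_, _, hl, _ = haar2d(blue)                  # a mid-frequency subband
u, s, vt = np.linalg.svd(hl)

alpha = 0.1                                 # embedding strength (illustrative)
watermark = rng.uniform(0, 1, size=s.shape)
s_marked = s + alpha * watermark            # embed into the singular values
extracted = (s_marked - s) / alpha          # extraction, original values known
print(np.allclose(extracted, watermark))    # True
```

In a full scheme the marked singular values would be recombined (`u @ np.diag(s_marked) @ vt`) and the inverse transforms applied before the robustness tests the abstract describes.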
Spatial Scalable Video Compression Using H.264
H.264 is a video compression standard that provides improved compression performance over prior standards such as H.261 and H.263. It achieves spatial scalability by encoding the video at reduced spatial resolutions, lowering the per-frame data and overall file size. The paper simulates H.264 encoding and decoding of a QCIF video using the JM reference software. It compares parameters such as PSNR, CSNR, and MSE between the encoded and decoded video. H.264 provides 31-35% greater efficiency and lower bit rates compared to prior standards.
IOSR Journal of VLSI and Signal Processing (IOSRJVSP) is an open-access journal that publishes articles contributing new results in all areas of VLSI design and signal processing. The goal of the journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI design and signal processing concepts and to establish new collaborations in these areas.
The document summarizes a master's thesis presentation on real-time image processing using an Altera FPGA. It discusses using the FPGA to process high-resolution microscope images in real-time for feedback control. It presents the problem statement, theoretical background on FPGAs and image processing, and design and implementation of a system using the Altera Cyclone III FPGA board. The design implements a Nios II soft processor, video processing IP cores, and interfaces to DDR memory and DVI input/output. Future work focuses on improving system stability and migrating to the Zynq platform.
Video coding standards define bitstream structures and decoding methods for video compression. Popular standards include MPEG-1/2/4 and H.264/HEVC developed by ISO/IEC and ITU-T. Standards are developed through identification of requirements, algorithm development, selection of core techniques, validation testing, and publication. They enable interoperability and future decoding of emerging standards.
This document discusses a project that aims to capture real-time video frames using a webcam, compress the frames using the H.263 codec, transmit the encoded stream over Ethernet, decode it at the receiving end for display. It describes the tools, video compression and encoding process using H.263, packetization for transmission, decoding, and analysis of compression ratio and quality using PSNR.
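PSNR, the quality metric used in the analysis above, compares the decoded frame against the original via mean squared error. A minimal sketch, assuming 8-bit frames (the function name is ours):

```python
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a frame whose every pixel is off by 16 gives MSE = 256
frame = np.full((144, 176), 128, dtype=np.uint8)   # QCIF-sized luma plane
noisy = frame + 16
print(round(psnr(frame, noisy), 2))                # 24.05
```

Values above roughly 35-40 dB are usually taken to indicate visually good reconstruction.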
This document summarizes the development of a mobile locationing system using video, an inertial measurement unit (IMU), and a wireless network. The system was developed using a BeagleBoard-XM embedded platform connected to a 3G network. Video from a camera was compressed on the DSP of the BeagleBoard and transmitted over the wireless network. An IMU provided positioning data that was sent over the network. Testing showed the system could estimate positions and stream video while drawing between 698 mA and 1.16 A of current, depending on whether video compression ran on the ARM or the DSP processor. Future work includes displaying trajectories, power optimization, using MQTT protocols, and casing design.
This document discusses post-processing and rate distortion algorithms for the VP8 video codec. It first provides background on the need for post-processing algorithms to reduce blocking artifacts in compressed video, and for rate control algorithms to regulate bitrates and achieve high video quality within bandwidth constraints. It then summarizes existing in-loop deblocking filters and post-processing algorithms. A novel optimal post-processing/in-loop filtering algorithm is described that can achieve better performance than H.264/AVC or VP8 by computing optimal filter coefficients. Finally, a proposed rate distortion optimization algorithm for VP8 is discussed to improve its rate control and coding efficiency.
The document discusses the H.264 video compression standard and its applications in video surveillance. H.264 provides much more efficient video compression than previous standards like MPEG-4 and Motion JPEG, reducing file sizes by over 80% without compromising quality. This allows for higher resolution, frame rate, and quality video streams using the same or lower bandwidth and storage compared to earlier standards. H.264 compression will enable uses like high frame rate surveillance at airports and casinos where bandwidth savings are most significant.
Requiring only half the bitrate of its predecessor, the new standard – HEVC or H.265 – will significantly reduce the need for bandwidth and expensive, limited spectrum. HEVC (H.265) will enable the launch of new video services and in particular ultra HD television (UHDTV).
State-of-the-art video compression techniques – HEVC/H.265 – can reduce the size of raw video by a factor of about 100 without any noticeable reduction in visual quality. With estimates indicating that compressed real-time video accounts for more than 50 percent of current network traffic, and this figure is set to rise to 90 percent within a few years, HEVC/H.265 will be a welcome relief for network operators.
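A back-of-envelope calculation shows why a roughly 100:1 reduction matters. Assuming 1080p at 30 frames per second with 8-bit 4:2:0 sampling (an average of 12 bits per pixel — these parameters are illustrative, not taken from the article):

```python
# Raw bitrate of 1080p30 video with 8-bit 4:2:0 sampling
width, height, fps = 1920, 1080, 30
bits_per_pixel = 12   # 4:2:0: 8 bits luma + 4 bits chroma per pixel on average
raw_bps = width * height * bits_per_pixel * fps
print(f"raw: {raw_bps / 1e6:.0f} Mbit/s")          # raw: 746 Mbit/s
print(f"at 100:1: {raw_bps / 100 / 1e6:.1f} Mbit/s")  # at 100:1: 7.5 Mbit/s
```

Uncompressed HD video is simply not deliverable over consumer links; after compression it fits comfortably in a typical broadband connection.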
New services, devices and changing viewing patterns are among the factors contributing to the growth in video traffic as people watch more and more traditional TV and video-streaming services on their mobile devices.
Ericsson has been heavily involved in the standardization of HEVC since it began in 2010, and this Ericsson Review article highlights some of the contributions that have led to the compression efficiency offered by HEVC.
Overview of the H.264/AVC video coding standard - Circuits ...
The document provides an overview of the H.264/AVC video coding standard. Some key points:
- H.264/AVC aims to double the coding efficiency of prior standards like MPEG-2 and H.263 to allow higher quality video at lower bit rates.
- It achieves this through new coding tools like fractional pixel motion compensation, variable block-size motion compensation, intra prediction, and entropy coding.
- The standard defines the decoding process but provides flexibility in encoding implementations. It is intended for both conversational and non-conversational applications like video telephony, streaming, and storage.
1. The document discusses video compression technology, including digital television formats, video compression standards like MPEG-2 and H.264, video quality metrics, and video coding concepts.
2. Key video coding concepts covered are temporal compression using motion estimation and compensation between frames, spatial compression within frames using DCT transform and quantization, and entropy coding of coefficients.
3. Video compression aims to reduce the data required for transmission by removing spatial and temporal redundancy in video sequences.
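The spatial-compression step described above — a block DCT followed by quantization — can be illustrated on a single 8x8 block. This sketch builds an orthonormal DCT-II matrix directly rather than calling a codec library; a flat block compacts all of its energy into the single DC coefficient, which is what makes the subsequent entropy coding effective:

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix, as used for 8x8 blocks in JPEG/MPEG-2."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)   # DC row uses the smaller normalization
    return c

C = dct_matrix()
block = np.full((8, 8), 100.0)        # a flat 8x8 luma block
coeffs = C @ block @ C.T              # forward 2-D DCT
q = 16.0                              # a uniform quantization step (illustrative)
quantized = np.round(coeffs / q)
# A flat block carries all of its energy in the DC coefficient:
print(quantized[0, 0])                # 50.0  (8 * 100 / 16)
print(np.count_nonzero(quantized))    # 1
```

Real blocks are not flat, but natural imagery still concentrates most energy in a few low-frequency coefficients, and quantization zeroes out the rest.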
This document summarizes a test report of the Technotrend Premium S2300 (Rev 2.3) modified DVB-S PCI satellite card distributed by DVB-Shop. The card was modified from the Hauppauge Nexus-S card to include a Crystal Audio DAC and RGB/S-Video outputs via Scart connector. Installation of the card and drivers was easy. Picture quality on a 42" plasma TV was improved over the composite signal of the Hauppauge Nexus-S card. The card supports DiSEqC 1.0 for satellite selection and ProgDVB software allows DiSEqC 1.2 support.
Video coding is an essential component of video streaming, digital TV, video chat and many other technologies. This presentation, an invited lecture to the US Patent and Trademark Office, describes some of the key developments in the history of video coding.
Many of the components of present-day video codecs were originally developed before 1990. From 1990 onwards, developments in video coding were closely associated with industry standards such as MPEG-2, H.264 and H.265/HEVC.
The presentation covers:
- Basic concepts of video coding
- Fundamental inventions prior to 1990
- Industry standards from 1990 to 2014
- Video coding patents and patent pools.
The document provides an overview of the High Efficiency Video Coding (HEVC) standard. It was developed jointly by ISO/IEC and ITU-T to provide roughly half the bit-rate of H.264/AVC for the same subjective quality. Key aspects of HEVC include use of larger block sizes, intra-picture prediction with 33 directional modes, motion vectors with quarter-sample precision, transform sizes from 4x4 to 32x32, adaptive coefficient scanning, in-loop filtering including deblocking and sample adaptive offset, and support for lossless and transform skipping modes. Many companies are starting to support HEVC in their video products and services.
This document provides an overview of HEVC (High Efficiency Video Coding) including:
- HEVC aims to provide roughly half the bitrate of H.264/AVC at the same quality.
- It uses block-based hybrid video coding with improved intra-prediction, transform, quantization and entropy coding techniques.
- HEVC supports a wide range of resolutions, color spaces and bit depths for 4K and beyond.
Pactron is an electronics manufacturing services provider that offers end-to-end design and manufacturing services. They have expertise in board-level design, engineering, and manufacturing. Pactron can compress time to market for customers through their integrated approach from design to manufacturing. They have experience serving industries such as semiconductor, medical devices, telecom, aerospace, and defense.
This document provides an overview and comparison of the H.265/HEVC and H.264/AVC video coding standards. It summarizes the key features and techniques of each, such as HEVC achieving around 40% higher data compression compared to H.264/AVC through improvements to prediction, transform coding, and entropy encoding. Experimental results testing various video sequences show HEVC provides significantly better compression efficiency. The document also reviews the technical details and implementations of both standards.
The document provides an overview of the High Efficiency Video Coding (HEVC) standard. Some key points:
- HEVC was created as a new video compression standard to address the growing needs of higher resolution video content and more efficient compression compared to prior standards like H.264.
- It achieves 50% bitrate reduction over H.264 for the same visual quality or improved quality at the same bitrate.
- The standard uses a block-based coding structure with coding tree units and supports intra-frame and inter-frame coding with motion estimation/compensation.
- It introduces more intra-prediction modes and block sizes along with improved transforms, quantization, and entropy coding.
The document discusses the H.264 video compression standard. It provides an overview of the standard, including its objectives to improve compression performance over previous standards. Key features that allow for superior compression compared to other standards are described, such as enhanced motion estimation and an improved deblocking filter. Performance comparisons show H.264 can provide bit rate savings of up to 50% compared to other standards like MPEG-2 and H.263.
The PMW-350K is a shoulder-mounted camcorder that records HD and SD video to SxS PRO solid state memory cards. It has three 2/3-inch CMOS sensors, a 16x HD zoom lens, and supports 1080p, 1080i, and 720p recording at various frame rates. It can record for up to 280 minutes to dual memory cards. Key features include four-channel audio, variable bitrates, and multi-format recording and playback capabilities.
The document discusses post-processing deblocking filters used in video coding standards like H.264 and MPEG-2. It describes how blocking artifacts can occur during video compression due to quantization and motion compensation. It then explains that deblocking filters help reduce blocking artifacts by applying filtering to block boundaries in the decoded video. Specifically, it discusses the differences between post-processing and in-loop deblocking filters, and provides details on how deblocking is implemented in standards like H.263+, H.264, MPEG-2, and JPEG.
This document proposes power efficient sum of absolute difference (SAD) algorithms for video compression. It describes:
1. Developing low power 1-bit full adder architectures including a proposed design using NAND, AND, and OR gates that improves power over existing designs.
2. Implementing 4x4 and 8x8 SAD architectures using the proposed low power full adder, ripple carry adders, and carry save adders.
3. Synthesizing the SAD designs in a 180nm technology and finding the proposed 4x4 SAD improves total power by 61% compared to an existing design.
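SAD is the matching cost at the heart of motion estimation: for each candidate position in the reference frame, sum the absolute pixel differences against the current block and keep the minimum. A software sketch of the metric and an exhaustive search follows (the hardware architectures above compute the same quantity with adder trees; the function names here are ours):

```python
import numpy as np

def sad(block: np.ndarray, candidate: np.ndarray) -> int:
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.sum(np.abs(block.astype(np.int32) - candidate.astype(np.int32))))

def best_match(block, frame, top, left, radius=2):
    """Exhaustive search for the minimum-SAD candidate within +/- radius."""
    h, w = block.shape
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= frame.shape[0] - h and 0 <= x <= frame.shape[1] - w:
                cost = sad(block, frame[y:y + h, x:x + w])
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best

ref = np.zeros((16, 16), dtype=np.uint8)
ref[5:9, 6:10] = 200                        # a bright 4x4 patch in the reference
cur = np.zeros((16, 16), dtype=np.uint8)
cur[4:8, 4:8] = 200                         # the same patch, shifted
print(best_match(cur[4:8, 4:8], ref, 4, 4)) # (0, 1, 2): zero cost at (dy=1, dx=2)
```

The inner sum is why low-power adders matter: a 4x4 SAD already needs 16 subtract-absolute operations per candidate, repeated over every search position of every macroblock.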
This document summarizes spatial scalable video compression using H.264. It discusses previous video compression standards like H.261 and H.263. It then describes the key components of the H.264 encoder and decoder, including prediction models, spatial models and entropy encoding. Simulation results comparing parameters like PSNR, CSNR and MSE between encoded and decoded video using H.264 are presented. The paper concludes that H.264 provides 31-35% improved efficiency and bit rate reduction over previous standards.
IRJET - A Hybrid Image and Video Compression of DCT and DWT Techniques for H.2...
This document discusses a hybrid image and video compression technique using both discrete cosine transform (DCT) and discrete wavelet transform (DWT) for H.265/HEVC video compression. The proposed hybrid DWT-DCT method exploits the advantages of both techniques for improved compression performance compared to using them individually. It involves applying DWT-DCT transformations to video frames, entropy coding the compressed frames with Huffman coding, and transmitting the bitstreams to the decoder. The technique is evaluated based on compression ratio, peak signal-to-noise ratio, and mean square error.
IRJET- A Hybrid Image and Video Compression of DCT and DWT Techniques for H.2...IRJET Journal
This document discusses a hybrid image and video compression technique using both discrete cosine transform (DCT) and discrete wavelet transform (DWT) for H.265/HEVC video compression. The proposed hybrid DWT-DCT method exploits the advantages of both techniques for improved compression performance compared to using them individually. It involves applying DWT-DCT transformations to video frames, entropy coding the compressed frames with Huffman coding, and transmitting the bitstreams to the decoder. The technique is evaluated based on compression ratio, peak signal-to-noise ratio, and mean square error.
IBM VideoCharger and Digital Library MediaBase.docVideoguy
This document provides an overview of video streaming over the internet. It discusses video compression standards like H.261, H.263, MJPEG, MPEG1, MPEG2 and MPEG4. It also covers internet transport protocols like TCP and UDP, and challenges like firewall penetration. Both commercial streaming products and research projects aiming to improve streaming are reviewed, with limitations of current approaches outlined. The SuperNOVA research project is evaluated against other work seeking to make high quality video streaming over the internet practical.
The document provides information about MPEG compression standards. It discusses the history of MPEG and how it was established in 1988 as a joint effort between ISO and IEC to set standards for audio and video compression. It describes several MPEG standards including MPEG-1, MPEG-2, MPEG-4, MPEG-7, and MPEG-21. MPEG-4 is discussed in more detail, explaining that it offers greater efficiency than MPEG-2, allows encoding of mixed data types, and enables interaction of audio-visual scenes at the receiver end. The document contains diagrams and tables to illustrate key points about the different MPEG standards and compression techniques.
A REAL-TIME H.264/AVC ENCODER&DECODER WITH VERTICAL MODE FOR INTRA FRAME AND ...csandit
The video coding standards are being developed to satisfy the requirements of applications for
various purposes, better picture quality, higher coding efficiency, and more error robustness.
The new international video coding standard H.264 /AVC aims at having significant
improvements in coding efficiency, and error robustness in comparison with the previous
standards such as MPEG-2, H261, H263,and H264. Video stream needs to be processed from
several steps in order to encode and decode the video such that it is compressed efficiently with
available limited resources of hardware and software. All advantages and disadvantages of
available algorithms should be known to implement a codec to accomplish final requirement.
The purpose of this project is to implement all basic building blocks of H.264 video encoder and
decoder. The significance of the project is the inclusion of all components required to encode
and decode a video in MatLab .
The document discusses video compression basics and MPEG-2 video compression. It explains that video frames contain redundant spatial and temporal data that can be compressed. MPEG-2 uses three frame types (I, P, B frames) and compresses frames using intra-frame and inter-frame encoding techniques like DCT, quantization, and entropy encoding to remove redundancy. The encoding process transforms raw video frames to compressed bitstreams for efficient storage and transmission.
Performance and Analysis of Video Compression Using Block Based Singular Valu...IJMER
This document presents an analysis of low-complexity video compression using block-based singular value decomposition (SVD) algorithms. It begins with an introduction to video compression and its importance for reducing storage and transmission costs. Current video compression standards like MPEG and H.26x are computationally expensive, making them unsuitable for real-time applications. The document then discusses block SVD algorithms as an alternative that can provide higher quality compression at lower computational complexity. It analyzes reducing the time complexity of video compression using block SVD and compares it to other compression methods. The document outlines the SVD decomposition process and how a 2D version can be applied to groups of image blocks for more efficient compression than 1D SVD.
An overview Survey on Various Video compressions and its importanceINFOGAIN PUBLICATION
With the rise of digital computing and visual data processing, the need for storage and transmission of video data became prevalent. Storage and transmission of uncompressed raw visual data is not a good practice, because it requires a large storage space and great bandwidth. Video compression algorithms can compress this raw visual data or video into smaller files with a little sacrifice on the quality. This paper an overview and comparison of standard efforts on video compression algorithm of: MPEG-1, MPEG-2, MPEG-4, MPEG-7
This document discusses techniques for effective compression of digital video. It introduces several key algorithms used in video compression, including discrete cosine transform (DCT) for spatial redundancy reduction, motion estimation (ME) for temporal redundancy reduction, and embedded zerotree wavelet (EZW) transforms. DCT is used to compress individual video frames by removing spatial correlations within frames. Motion estimation compares blocks of pixels between frames to find and encode motion vectors rather than full pixel values, reducing file size. Combined, these techniques can achieve high compression ratios while maintaining high video quality for storage and transmission.
Video compression techniques & standards lama mahmoud_report#1engLamaMahmoud
This document provides a mid-semester report on video signal coding, compression, and transmission over computer networks. It discusses the history and theoretical background of key standards like JPEG and MPEG. It then describes different video compression formats including JPEG, MPEG-1, MPEG-2, MPEG-4, and H.264. The basics of compression techniques like lossless vs lossy and latency are also covered. Commercial applications of these standards and their differences are analyzed.
This document discusses video quality analysis for H.264 based on the human visual system. It proposes an improved video quality assessment method that adds color comparison to structural similarity measurement. The method separates similarity measurement into four comparisons: luminance, contrast, structure, and color. Experimental results on video sets with two distortion types show the proposed method's quality scores are more consistent with visual quality than classical methods. It also discusses the H.264 video coding standard and provides examples of encoding and decoding experimental results.
Video compression reduces the quantity of data used to represent digital video images through spatial image compression and temporal motion compensation. It is an example of source coding in information theory. Standard video compression techniques include H.120, H.261, MPEG-1, H.262/MPEG-2, H.263, MPEG-4, and H.264/AVC, which are used for applications like video conferencing, DVDs, broadcasting, and online video. H.264/AVC implements motion estimation and compensation algorithms through block-based motion prediction and a two-step search algorithm to further reduce file sizes, but these can result in block artifacts. The author aims to improve throughput and PSNR by implementing
A REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSIONNancy Ideker
This document reviews various techniques for image compression. It begins by discussing the need for image compression in applications like remote sensing, broadcasting, and long-distance communication. It then categorizes compression techniques as either lossless or lossy. Popular lossless techniques discussed include run length encoding, LZW coding, and Huffman coding. Lossy techniques reviewed are transform coding, block truncation coding, vector quantization, and subband coding. The document evaluates these techniques and compares their advantages and disadvantages. It also discusses performance metrics for image compression like PSNR, compression ratio, and mean square error. Finally, it reviews several research papers on topics like vector quantization-based compression and compression using wavelets and Huffman encoding.
Perceptually Lossless Compression with Error Concealment for Periscope and So...sipij
We present a video compression framework that has two key features. First, we aim at achieving perceptually lossless compression for low frame rate videos (6 fps). Four well-known video codecs in the literature have been evaluated and the performance was assessed using four well-known performance metrics. Second, we investigated the impact of error concealment algorithms for handling corrupted pixels
due to transmission errors in communication channels. Extensive experiments using actual videos have been performed to demonstrate the proposed framework.
PERFORMANCE EVALUATION OF H.265/MPEG-HEVC, VP9 AND H.264/MPEGAVC VIDEO CODINGijma
This study evaluates the performance of the three latest video codecs H.265/MPEG-HEVC, H.264/MPEGAVC
and VP9. The evaluation is based on both subjective and objective quality metrics. The assessment
metric Double Stimulus Impairment Scale (DSIS) is used to evaluate the subjective quality of the
compressed video sequences. The Peak Signal-to-Noise Ratio (PSNR) metricis used for the objective
evaluation. Moreover, this work studies the effect of frame rate and resolution on the encoders’
performance. The extensive number of experiments are conducted with similar encoding configurations for
the three studied encoders. The evaluation results show that H.265/MPEG-HEVC provides superior bitrate
saving capabilities compared to H.264 and VP9. However, VP9 shows lower encoding time than
H.265/MPEG-HEVC but higher encoding time compared to H.264.
Low complexity video coding for sensor networkeSAT Journals
Abstract Modern video codecs such as H.264/AVC give state-of-the-art compression performance. However, extensive use of optimization tools makes them highly complex and hence not suitable for wireless video sensor network. In this paper an efficient video codec with substantially reduced complexity is proposed. Simulation result shows that the proposed video codec gives comparable compression performance compared to H.264/AVC but at substantially reduced computational complexity. Keywords—Low complexity coding, Sensor network, Video coding, Wavelet transform.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Axis offers a broad portfolio of network cameras and video encoders based on its ARTPEC-3 chip. The
performance of Axis products, in terms of streams and frame rate, is important, and we will focus on the
performance of Axis network products based on ARTPEC-3 in this paper.
The intended audience of this document is technical personnel and system integrators.
this is based on JNVU jodhpur for BCA student
prepared by :
Assistant Professor
Gajendra Jinagr
for more update connected with me 9166304153(whatsapp+)
Page 1 of 13
VIDEO COMPRESSION FUNDAMENTALS AND
STANDARDS
LAMA MAHMOUD
Khalifa University of Science, Technology and Research
Electronics and Computer Engineering Department (ECE)
ELCE 491: Independent Study
Fall Semester / 2013-2014
A report submitted to Dr. Andrzej Sluzek as a one of the independent study course
reports
CONTENTS
ABSTRACT
CHAPTER [1]: INTRODUCTION
1.1 An overview of JPEG format for still images
1.2 Theoretical background about video conferencing specifications
CHAPTER [2]: VIDEO COMPRESSION STANDARDS
1. H.261 Standard
2. H.263 Standard
3. MPEG-1 Format
4. MPEG-2 Format
5. MPEG-4 Format
CHAPTER [3]: BENCHMARK ASSESSMENTS
1. Average Absolute Difference
2. Mean Square Error (MSE)
3. Signal to Noise Ratio (SNR)
4. Peak Signal to Noise Ratio (PSNR)
CHAPTER [4]: CONCLUSION
REFERENCES
LIST OF FIGURES AND TABLES
Figure 1: A block diagram for the JPEG encoder
Figure 2: A block diagram for the JPEG decoder
Figure 3: An example of the intra-frame and predictive frame coding
Figure 4: Subjective and objective benchmarks for comparisons
Table 1: Different applications with the most suitable video compression format for each
ABSTRACT
A video can be categorized into one of two types: a video-conference (slow-motion) video or a high-motion video. The former contains little or no motion; examples include the webcam feed of a Skype video call or the footage recorded by security cameras. The latter, such as a sports broadcast, contains intensive motion. Technically, video-conferencing content consists of successive frames in which only a few pixels change from one frame to the next, whereas high-motion video has a large number of pixels changing between frames.
Several video compression formats have emerged over the past few decades. As technology improved, especially with newly deployed hardware such as HD TV and Internet TV, higher compression ratios were demanded to compensate for the limited available bandwidth. Alongside the newly designed compression formats, benchmarks were defined to evaluate the quality of the compressed video.
This report describes the resolutions, advantages and disadvantages of the most commonly used video compression formats: H.261, H.263, MPEG-1, MPEG-2 and MPEG-4. Furthermore, benchmark assessments are discussed in order to determine the quality of the resulting video. Finally, a list of the most commonly used video applications is matched with the most suitable compression format for each.
CHAPTER [1]: INTRODUCTION
1.1 An overview of JPEG format for still images
Basically, the JPEG format is one of the ways in which a still image can be compressed. The image is divided into small 8x8 blocks. The discrete cosine transform (DCT) is then computed for each block, and the DCT coefficients are quantized according to a quantization table. Next, each 8x8 block is converted into 64 values (1 DC + 63 AC) using the zigzag scan, and an entropy encoder is typically used to code these values. Figures [1] & [2] below present the schematic diagrams of the JPEG encoder and decoder respectively.
Figure 1: A block diagram for the JPEG encoder
Figure 2: A block diagram for the JPEG Decoder
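The zigzag reordering step described above can be sketched in code. The helper names below are illustrative (not from the report), but the traversal is the standard JPEG zigzag over the anti-diagonals of the block:

```python
def zigzag_indices(n=8):
    """Return (row, col) pairs in zigzag order for an n x n block."""
    # Group cells by anti-diagonal; odd diagonals run top-right to
    # bottom-left, even ones are reversed to alternate direction.
    order = []
    for d in range(2 * n - 1):
        cells = [(r, d - r) for r in range(n) if 0 <= d - r < n]
        order.extend(cells if d % 2 else list(reversed(cells)))
    return order

def zigzag(block):
    """Flatten an 8x8 block into 64 values (1 DC + 63 AC)."""
    return [block[r][c] for r, c in zigzag_indices(len(block))]

# Toy block whose value encodes its position, to make the order visible:
block = [[r * 8 + c for c in range(8)] for r in range(8)]
coeffs = zigzag(block)
# coeffs[0] is the DC term (top-left cell); coeffs[1:] are the 63 AC terms
```

In a real JPEG encoder this reordering groups the low-frequency coefficients first, so the long tail of near-zero high-frequency values compresses well under run-length and entropy coding.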
Furthermore, in image compression, any image consists of three primary colors: red, green and blue (RGB). For further compression, and to gain more control over the color ratios, a still image is first separated into luminance (i.e. the brightness, or grey-level, information) and chrominance (i.e. the color information). The luminance (Y) is given by the following formula:
Y = 0.3R + 0.59G + 0.11B
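A minimal sketch of the luminance formula above, applied per pixel (the function name is illustrative):

```python
def luminance(r, g, b):
    """Y = 0.3R + 0.59G + 0.11B, per the formula above."""
    return 0.3 * r + 0.59 * g + 0.11 * b

# Full-intensity white maps to full luminance; green contributes the most:
y_white = luminance(255, 255, 255)   # ~255.0
y_green = luminance(0, 255, 0)       # ~150.45
y_blue = luminance(0, 0, 255)        # ~28.05
```

The weights reflect the eye's sensitivity: green dominates the perceived brightness, which is also why the chrominance planes can be subsampled more aggressively than luminance.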
1.2 Theoretical background about video conferencing specifications
Mainly, video-conferencing techniques target applications that are not motion-intensive and therefore require only limited motion search and estimation (Skype video conferencing is one example). They are optimized to achieve very high compression ratios for full-color, real-time video transmission. They combine intra-frame (DCT) coding and inter-frame coding to provide good compression and decompression ratios, as follows:
1. Intra-Frame: JPEG compression is used, which is essentially a discrete cosine transform (DCT) according to the following formula:
C(k1, k2) = Σ(n=0 to N-1) Σ(m=0 to M-1) 4·x(n,m)·cos[πk1(2n+1)/2N]·cos[πk2(2m+1)/2M]
2. Inter-Frame: A video runs at roughly 20-30 frames/second, and in video conferencing we assume that motion is limited. Therefore, comparing one frame with the previous one shows that only a few pixels have changed. In inter-frame coding, only the pixels that changed from one frame to the next are transmitted. This is called "predictive inter-frame coding".
Figure [3] below shows an example of the Intra-frame and the predictive-frame coding.
Figure 3: An example of the intra-frame and predictive frame coding
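As an illustration (not production code), the 2-D DCT formula above can be transcribed directly. Real codecs use fast fixed-point 8x8 implementations, but the direct sum makes the definition concrete:

```python
import math

def dct2(x):
    """Direct 2-D DCT of an N x M block, following the formula above:
    C(k1,k2) = sum_n sum_m 4*x(n,m)*cos(pi*k1*(2n+1)/2N)*cos(pi*k2*(2m+1)/2M)
    O(N^2 * M^2) per block; for illustration only.
    """
    N, M = len(x), len(x[0])
    C = [[0.0] * M for _ in range(N)]
    for k1 in range(N):
        for k2 in range(M):
            s = 0.0
            for n in range(N):
                for m in range(M):
                    s += (4 * x[n][m]
                          * math.cos(math.pi * k1 * (2 * n + 1) / (2 * N))
                          * math.cos(math.pi * k2 * (2 * m + 1) / (2 * M)))
            C[k1][k2] = s
    return C

# A constant (flat) 8x8 block has all its energy in the DC coefficient
# C[0][0]; every AC coefficient comes out (numerically) zero.
flat = [[1.0] * 8 for _ in range(8)]
C = dct2(flat)
```

The flat-block behaviour is exactly why the DCT compresses well: smooth image regions concentrate their energy in a few low-frequency coefficients, and the rest quantize to zero.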
CHAPTER [2]: VIDEO COMPRESSION STANDARDS
Several video compression techniques have been developed over the past few decades; however, only some of them made it into real-life applications. In fact, for a video compression technique to be adopted by the International Telecommunication Union (ITU), several criteria should be met:
Interoperability: encoders and decoders from different manufacturers should work together seamlessly.
Innovation: the technique should perform significantly better than the previous standard.
Competition: the standard should be flexible enough to allow manufacturers to compete on technical merit; only the bit-stream syntax and a reference decoder are standardized.
Independence from transmission and storage media: the standard should be flexible enough to be used across a range of applications.
Backward compatibility: the new standard should decode bit-streams from the prior standard.
Forward compatibility: prior-generation decoders should be able to partially decode new bit-streams.
In this section, selected video compression standards will be presented along with their
specifications, advantages, disadvantages and most suitable applications.
1. H.261 STANDARD
The H.261 international standard was designed mainly for ISDN picture phones and video-conferencing systems, and was published in 1990. Its main characteristics are as follows:
1. Quarter Common Intermediate Format (QCIF):
Luminance: 144x176, Chrominance: 72x88
2. Common Intermediate Format (CIF):
Luminance: 288x352, Chrominance: 144x176
3. The chrominance components are subsampled by two in both the vertical and horizontal directions
4. 8x8 DCT
5. Macro-block structure: 4Y + U + V = 6 blocks, i.e. more luminance than color information
6. Group of Blocks (GOB) = 11x3 macro-blocks
7. Inter-coded blocks use 16x16 motion compensation
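The 2x chrominance subsampling in point 3 can be sketched as simple 2x2 averaging. The averaging filter here is an illustrative choice, not the one mandated by the standard:

```python
def subsample_420(plane):
    """Subsample a chroma plane by 2 horizontally and vertically,
    averaging each 2x2 neighbourhood (assumes even dimensions)."""
    h, w = len(plane), len(plane[0])
    return [[(plane[r][c] + plane[r][c + 1] +
              plane[r + 1][c] + plane[r + 1][c + 1]) / 4
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

# A full-resolution 288x352 chroma plane subsamples to the CIF
# chrominance size of 144x176:
full = [[0] * 352 for _ in range(288)]
small = subsample_420(full)
```

Because the eye is far less sensitive to color detail than to brightness, halving the chroma resolution in both directions (a 4x reduction per chroma plane) costs little visible quality.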
Although the H.261 standard requires comparatively low bandwidth, it has some disadvantages:
1. In the early days, operators such as Etisalat in the UAE sold H.261 video telephony as hardware and software bundled in a single machine; without the dedicated hardware the technology could not be used, which limited its success.
2. Limited resolution: at most CIF, with 288x352 luminance and 144x176 chrominance.
2. H.263 STANDARD
H.263 is an improved standard for low bit-rate video. Like H.261, it uses transform coding for intra-frames and predictive coding for inter-frames. Furthermore, H.263 supports the following resolutions:
1. Sub-QCIF:
Luminance: 96x128, Chrominance: 48x64
2. QCIF:
Luminance: 144x176, Chrominance: 72x88
3. CIF:
Luminance: 288x352, Chrominance: 144x176
4. 4CIF:
Luminance: 576x704, Chrominance: 288x352 (comparable to a standard TV picture)
5. 16CIF:
Luminance: 1152x1408, Chrominance: 576x704
6. 8x8 DCT, the same transform JPEG uses
7. Macro-block: 4Y + U + V
8. Motion estimation with 16x16 and 8x8 blocks; the block size varies according to the residual error to achieve better performance
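The motion estimation in point 8 relies on comparing candidate blocks with a distortion measure such as the sum of absolute differences (SAD). The sketch below is an illustrative exhaustive search over a small window, not H.263's actual search strategy:

```python
def sad(cur, ref, cx, cy, rx, ry, size):
    """Sum of absolute differences between a size x size block of the
    current frame at (cy, cx) and the reference frame at (ry, rx)."""
    return sum(abs(cur[cy + i][cx + j] - ref[ry + i][rx + j])
               for i in range(size) for j in range(size))

def best_motion_vector(cur, ref, cx, cy, size=8, search=4):
    """Exhaustive search in a +/-search window; returns (dx, dy)."""
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = cy + dy, cx + dx
            if 0 <= ry and ry + size <= h and 0 <= rx and rx + size <= w:
                cost = sad(cur, ref, cx, cy, rx, ry, size)
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy))
    return best[1]

# Toy example: the current frame is the reference shifted left by 2
# pixels, so the block at (4, 4) matches 2 pixels to the right in ref.
ref = [[(r * 31 + c * 17) % 256 for c in range(16)] for r in range(16)]
cur = [[ref[r][c + 2] if c + 2 < 16 else 0 for c in range(16)]
       for r in range(16)]
mv = best_motion_vector(cur, ref, cx=4, cy=4)
```

Real encoders avoid the full search with fast strategies (and hardware SAD units), but the cost function being minimized is the same.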
In comparison, the advantages of H.263 include:
1. Unlike H.261, it was software-only (one reason Skype was more successful).
2. Improved resolution over H.261, though still limited.
3. The resolution can be adapted to the available bandwidth (more bandwidth allows more pixels and more frames, depending on the software and device used, mobile or laptop).
As mentioned previously, the H.261 and H.263 formats are designed for video-conferencing applications where only slow motion takes place. The following formats, on the other hand, are used for full-motion video applications (more motion-intensive than video conferencing). Clearly, this type of format requires a higher data rate and more bandwidth because the compression ratio will be lower.
3. MPEG - 1 Format:
The Moving Picture Experts Group (MPEG) is a working group of experts formed by ISO and IEC to set standards for audio and video compression and transmission. The MPEG-1 algorithm works as follows:
1. Intra-frame coding: I-frames are coded similarly to JPEG (low compression ratios) and are used as random-access points.
2. Inter-frame coding: P-frames (predicted frames) are coded using forward predictive coding, where the actual frame is coded with reference to the previous frame. The compression ratio is higher than that of an I-frame.
3. B-frames: bi-directional frames are coded using two reference frames, one past and one future.
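The I/P-frame distinction above can be illustrated with a toy predictive coder that transmits only the pixels that changed since the previous frame. This is only a conceptual sketch (with hypothetical function names); real MPEG-1 predicts motion-compensated 16x16 blocks and codes the residuals:

```python
def encode_p_frame(prev, cur):
    """Record only changed pixels as (row, col, new_value) triples."""
    return [(r, c, cur[r][c])
            for r in range(len(cur)) for c in range(len(cur[0]))
            if cur[r][c] != prev[r][c]]

def decode_p_frame(prev, changes):
    """Rebuild the current frame from the previous one plus the changes."""
    frame = [row[:] for row in prev]
    for r, c, v in changes:
        frame[r][c] = v
    return frame

prev = [[0] * 4 for _ in range(4)]       # the I-frame (fully transmitted)
cur = [row[:] for row in prev]
cur[1][2] = 9                            # a single pixel changed
changes = encode_p_frame(prev, cur)      # one triple instead of 16 pixels
restored = decode_p_frame(prev, changes)
```

The decoder needs the previous frame to reconstruct a P-frame, which is why I-frames are kept as random-access points: they are the only frames decodable on their own.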
The MPEG-1 picture parameters are as follows:
1. 8x8 DCT
2. 16x16 motion-compensated blocks
3. 4:3 aspect ratio
The disadvantages of MPEG-1:
1. MPEG-1 was used to store video on Video CD (VCD): about 1 hour per disc, so a movie needed 2 discs and a disc change partway through.
2. Two-channel stereo audio only, not surround sound (the audio layers are MPEG-1 Layer 1, Layer 2 and Layer 3, the latter better known as MP3).
3. Low resolution: 4:3 aspect ratio (standard TV, not HD) with a bit rate of 1 to 1.5 Mbit/s.
The advantages of MPEG-1:
1. Bidirectional frame coding.
2. Designed for full-motion video rather than video-conferencing compression only.
(MPEG-2, discussed next, outperforms MPEG-1.)
4. MPEG – 2 Format
The MPEG-2 format is the format that DVDs are based on; any software DVD player should be able to play an MPEG-2 movie. Note, however, that MPEG-2 files are very large, approaching a megabyte per second of running time.
MPEG-2 resolutions:
1. Supports high resolution (up to 16383x16383)
2. Chrominance sampling:
a. 4:2:0
b. 4:2:2
c. 4:4:4
Advantages of MPEG-2:
1. Supports bit rates of 2-10 Mbit/s and a 4:3 aspect ratio
2. DVD audio with up to 5.1 channels (surround sound)
Disadvantages of MPEG-2:
1. Pixels and objects are not labeled, so an object cannot be taken from one frame and placed into another.
2. Designed primarily for playback on a computer.
5. MPEG – 4 Format
MPEG-4 was originally intended for very low bit rates; however, it went beyond high compression ratios into areas such as content-based interactivity. MPEG-4 includes a special language, MSDL, which describes how to process the elementary data streams. The data may be MPEG-1, MPEG-2, or 2D/3D synthesized images. High compression ratios are achieved because missing bits are reconstructed using sets of tools and algorithms.
MPEG-4 features:
1. Universal accessibility
2. The ability to operate in extremely error-prone environments (mobile systems)
3. The possibility of user interaction when presenting audio and video information
4. A better compression ratio than MPEG-2
5. Downloadable decoding tools
6. Simultaneous use of data from different sources
7. Hybrid coding of natural and synthetic objects
8. Communication between several participants
9. Integration of real-time and non-real-time (stored) applications
CHAPTER [3]: BENCHMARKS FOR COMPARISON
After performing compression on a selected video, evaluation tools can be used to assess
the quality of the compressed file. In this section, several benchmarks for comparison
are presented, such as the average absolute difference, MSE, SNR
and PSNR (see Figure 4).
Figure 4: Subjective and Objective Benchmarks for comparisons
To begin with, subjective and objective assessments are the two main benchmark families by
which a video can be evaluated. Subjective assessment is based on rankings collected by
surveying a selected group of viewers. For example, the distorted data set is rank-ordered
from best to worst on a five-grade scale:
5 - Imperceptible
4 - Perceptible, but not annoying
3 - Slightly annoying
2 - Annoying
1 - Very annoying
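Ratings on this scale are typically averaged into a mean opinion score (MOS). A minimal sketch, using hypothetical survey responses from eight viewers:

```python
# Mean opinion score (MOS) from subjective ratings on the 5-point scale.
ratings = [5, 4, 4, 5, 3, 4, 5, 4]   # hypothetical survey responses

mos = sum(ratings) / len(ratings)
print(round(mos, 2))   # 4.25: between "perceptible" and "imperceptible"
```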
As might be noticed, such an assessment is never absolutely accurate, since it depends on
people's opinions, which can differ from one sample of viewers to another. Alternatively,
objective assessment relies on mathematical measures, yielding a more reproducible
evaluation. The following sections present the four objective benchmarks used to evaluate
the compressed data.
1. Average Absolute Difference
Basically, given two images of N x M pixels, where P(i,j) are the pixels of the original image
and P'(i,j) are the pixels of the modified image, the average absolute difference is:

AAD = (1 / (N*M)) * Σ_i Σ_j |P(i,j) - P'(i,j)|
For example, suppose the original and modified images each have 2x2 pixels:

Original:        Modified:
  20  50           23  49
  10  60           10  64

Then the average absolute difference is:

AAD = (|20-23| + |50-49| + |10-10| + |60-64|) / 4 = (3 + 1 + 0 + 4) / 4 = 2

2. Mean Square Error (MSE)

Similarly, the MSE is given by:

MSE = (1 / (N*M)) * Σ_i Σ_j (P(i,j) - P'(i,j))²

Based on the previous example:

MSE = (9 + 1 + 0 + 16) / 4 = 6.5
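Both worked results can be verified directly in a few lines of Python, using the pixel values of the example images above:

```python
# Average absolute difference and MSE for the worked example in the text.
original = [[20, 50], [10, 60]]
modified = [[23, 49], [10, 64]]

n_pixels = sum(len(row) for row in original)
pairs = [(p, q) for ro, rm in zip(original, modified)
         for p, q in zip(ro, rm)]

aad = sum(abs(p - q) for p, q in pairs) / n_pixels
mse = sum((p - q) ** 2 for p, q in pairs) / n_pixels
print(aad, mse)   # 2.0 6.5
```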
3. Signal to Noise Ratio (SNR)
The signal-to-noise ratio is the ratio of the signal energy to the error energy:

SNR = Σ_i Σ_j P(i,j)² / Σ_i Σ_j (P(i,j) - P'(i,j))²

Based on the previous example:

SNR = (20² + 50² + 10² + 60²) / (3² + 1² + 0² + 4²) = 6600 / 26 ≈ 253.85

SNR (dB) = 10 log10(SNR) = 10 log10(253.85) ≈ 24 dB
4. Peak Signal to Noise Ratio (PSNR)
PSNR replaces the signal energy with the peak pixel value, which is 255 for 8-bit images:

PSNR = (255² * N * M) / Σ_i Σ_j (P(i,j) - P'(i,j))² = 255² / MSE

Based on the previous example:

PSNR = 255² / 6.5 = 65025 / 6.5 ≈ 10003.8

PSNR (dB) = 10 log10(PSNR) = 10 log10(10003.8) ≈ 40 dB
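The PSNR figure can be checked the same way, assuming 8-bit pixels with a peak value of 255:

```python
import math

# PSNR for the worked example, assuming 8-bit pixels (peak value 255).
mse = 6.5        # mean square error from the earlier example
peak = 255

psnr = peak ** 2 / mse
psnr_db = 10 * math.log10(psnr)
print(round(psnr, 1), round(psnr_db, 1))   # 10003.8 40.0
```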
CHAPTER [4]: CONCLUSION
Finally, different video applications require different compression ratios and hence different
video compression formats, and some formats outperform others. For example, comparing
MPEG-1 and MPEG-2, they differ in the following aspects:
1. MPEG-2 succeeded MPEG-1 to address some of the older standard's weaknesses;
2. MPEG-2 has better quality than MPEG-1;
3. MPEG-1 is used for VCD while MPEG-2 is used for DVD;
4. One may consider MPEG-2 as an MPEG-1 that supports higher resolutions and is capable
of using higher and variable bit rates;
5. MPEG-1 is older than MPEG-2, but it is arguably better at lower bit rates;
6. MPEG-2 has a more complex encoding algorithm.
The following table presents the different applications and the most suitable compression format
for each application.
Table 1: Different applications with most suitable video compression format for each