Video compression
 Lalit Dubey
 Sachin Sharma
 Somnath
 Liniker
 Deepak Bhanushali
 Ajay
“Video compression technologies are about reducing and
removing redundant video data so that a digital video file
can be effectively sent over a network and stored on
computer disks.”
Uncompressed video recorded from a video camera
(e.g. movies) occupies a huge amount of data. For example,
a video clip recorded at a resolution of 720x576 (PAL), with a
frame rate of 25 fps and 8-bit color depth takes:
720 x 576 x 25 x 8 + 2 x (360 x 576 x 25 x 8) ≈ 166 Mb/s
(luminance + chrominance)
For HDTV (high-definition television), which uses a resolution of
1920x1080 at 60 fps:
1920 x 1080 x 60 x 8 + 2 x (960 x 1080 x 60 x 8) ≈ 1.99 Gb/s
(Note: in the YUV color space, each pixel has one
brightness (luminance) value and two color (chrominance) values)
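These figures are easy to reproduce. Here is a minimal Python sketch (not part of the original slides) that evaluates the same formula, assuming two chroma planes each sampled at half the luma width, as in the examples above:

```python
def uncompressed_bitrate(width, height, fps, bit_depth):
    """Bits per second of raw YUV video, assuming two chroma
    planes each sampled at half the luma width (as in the
    slide's formula)."""
    luma = width * height * fps * bit_depth
    chroma = 2 * ((width // 2) * height * fps * bit_depth)
    return luma + chroma

# PAL: 720x576 @ 25 fps, 8-bit  -> ~166 Mb/s
print(uncompressed_bitrate(720, 576, 25, 8) / 1e6, "Mb/s")

# HDTV: 1920x1080 @ 60 fps, 8-bit -> ~1.99 Gb/s
print(uncompressed_bitrate(1920, 1080, 60, 8) / 1e9, "Gb/s")
```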
 Digital video sequences are among the most demanding forms
of data passing through computer networks and portable
storage devices such as pen drives and CDs.
 Uncompressed video imposes the following:
1. Large bandwidth requirements for transmission
2. Enormous demands on the storage capacity of media
Different compression technologies, both proprietary and
industry standards, are available. Most network video
vendors today use standard compression techniques.
Standards are important in ensuring compatibility and
interoperability. They are particularly relevant to video
compression since video may be used for different purposes
and, in some video surveillance applications, needs to be
viewable many years from the recording date. By deploying
standards, end users are able to pick and choose from
different vendors, rather than be tied to one supplier when
designing a video surveillance system.
The process of compression involves applying an
algorithm to the source video to create a compressed
file that is ready for transmission or storage. To play the
compressed file, an inverse algorithm is applied to
produce a video that shows virtually the same content
as the original source video.
Encoding: original digital data → compression algorithm → compressed data → storage
Decoding: compressed data (from storage) → decompression algorithm → original digital data
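To make the round trip concrete, here is a minimal sketch using Python's zlib module. zlib is a general-purpose lossless compressor standing in for a video codec here (real video codecs are usually lossy, which is why the decoded video is only "virtually the same" as the source), but the encode/store/decode flow is the same:

```python
import zlib

# Stand-in for raw digital video data (repetitive, so it compresses well).
original = bytes(range(256)) * 1000

compressed = zlib.compress(original)     # encoding: compression algorithm
restored = zlib.decompress(compressed)   # decoding: inverse algorithm

assert restored == original              # zlib is lossless, so the match is exact
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```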
A pair of algorithms that works together is called a video
codec (encoder/decoder). Video codecs of different
standards are normally not compatible with each other; that
is, video content that is compressed using one standard
cannot be decompressed with a different standard. For
instance, an MPEG-4 decoder will not work with an H.264
encoder. This is simply because one algorithm cannot
correctly decode the output of another. It is, however,
possible to implement many different algorithms in the
same software or hardware, which enables multiple
formats to coexist.
Video compression algorithms such as MPEG-4 and
H.264 use interframe prediction to reduce the amount of
video data across a series of frames.
This involves techniques such as difference coding,
where one frame is compared with a reference frame
and only pixels that have changed with respect to the
reference frame are coded. In this way, the number of
pixel values that are coded and sent is reduced. When
such an encoded sequence is displayed, the images
appear as in the original video sequence.
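As an illustration, here is a toy difference coder in Python (a sketch of the idea described above, not any standard's actual bitstream format): frames are flat lists of pixel values, and only pixels that differ from the reference frame are coded, as (index, value) pairs:

```python
def encode_diff(reference, frame):
    """Return only the changed pixels as (index, new_value) pairs."""
    return [(i, p) for i, (r, p) in enumerate(zip(reference, frame)) if p != r]

def decode_diff(reference, diff):
    """Rebuild the frame from the reference plus the coded changes."""
    frame = list(reference)
    for i, p in diff:
        frame[i] = p
    return frame

reference = [10, 10, 10, 10, 10, 10]
frame     = [10, 10, 99, 10, 10, 98]    # only two pixels changed

diff = encode_diff(reference, frame)
print(diff)                             # [(2, 99), (5, 98)] -- 2 values instead of 6
assert decode_diff(reference, diff) == frame
```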
With inter frame prediction, each frame in a sequence of
images is classified as a certain type of frame, such as
an I-frame, P-frame or B-frame.
An I-frame, or intra frame, is a self-contained frame that can
be independently decoded without any reference to other
images. The first image in a video sequence is always an
I-frame. I-frames are needed as starting points for new
viewers, or as resynchronization points if the transmitted bit
stream is damaged. I-frames can be used to implement
fast-forward, rewind and other random access functions. An
encoder will automatically insert I-frames at regular
intervals or on demand if new clients are expected to join in
viewing a stream. The drawback of I-frames is that they
consume many more bits; on the other hand, they do not
generate many artifacts, which are caused by missing data.
A P-frame, which stands for predictive inter frame,
makes references to parts of earlier I and/or P frame(s)
to code the frame. P-frames usually require fewer bits
than I-frames, but a drawback is that they are very
sensitive to transmission errors because of the complex
dependency on earlier P and/or I frames.
A B-frame, or bi-predictive inter frame, is a frame that
makes references to both an earlier reference frame and
a future frame. Using B-frames increases latency, since a
B-frame cannot be decoded until its future reference
frame has arrived.
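The frame types are typically arranged in a repeating group of pictures (GOP). The following hypothetical labeller sketches a common I/P/B arrangement; the GOP length and B-frame count are illustrative parameters, not values mandated by any standard:

```python
def frame_type(n, gop_size=12, b_frames=2):
    """Return 'I', 'P', or 'B' for frame number n (0-based)."""
    pos = n % gop_size
    if pos == 0:
        return "I"                      # self-contained resync point
    if pos % (b_frames + 1) == 0:
        return "P"                      # predicted from earlier I/P frames
    return "B"                          # predicted from both directions

print("".join(frame_type(n) for n in range(24)))
# -> IBBPBBPBBPBBIBBPBBPBBPBB  (one I-frame starts each GOP)
```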
Advantages of compression:
- Occupies less disk space.
- Reading and writing are faster.
- File transfer is faster.
- The order of bytes is independent.
Disadvantages of compression:
- The data has to be compressed again after it is modified.
- Errors may occur while transmitting data.
- The byte/pixel relationship is unknown.
- The previous data has to be decompressed before use.