Video Compression Techniques



  1. Video Compression Techniques
  2. Fundamentals of Video Compression
     • Introduction to Digital Video
     • Basic Compression Techniques
     • Still Image Compression Techniques - JPEG
     • Video Compression
  3. Factors Associated with Compression
     The goal of video compression is to massively reduce the amount of data required to store the digital video file, while retaining the quality of the original video.
     # Real-Time versus Non-Real-Time
     # Symmetrical versus Asymmetrical
     # Compression Ratios
     # Lossless versus Lossy
     # Interframe versus Intraframe
     # Bit Rate Control
  4. Lossless vs. Lossy Compression
     • In lossless compression, data is not altered or lost in the process of compression or decompression.
     • Some examples of lossless standards are:
       — Run-Length Encoding
       — Dynamic Pattern Substitution - Lempel-Ziv Encoding
       — Huffman Encoding
     • Lossy compression is used for compressing audio, pictures and video.
     • Some examples are:
       — JPEG
       — MPEG
       — H.261 (Px64) Video Coding Algorithm
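The idea behind these lossless schemes is simple to sketch. Below is a minimal run-length encoder and decoder in Python; this is an illustrative sketch, not the bitstream format of any actual standard:

```python
def rle_encode(data):
    """Run-length encode a sequence as (count, value) pairs."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        # Extend the run while the next symbol repeats the current one.
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.append((run, data[i]))
        i += run
    return out

def rle_decode(pairs):
    """Reverse the encoding: expand each (count, value) pair."""
    return "".join(value * count for count, value in pairs)

encoded = rle_encode("aaaabbbcc")
# encoded == [(4, 'a'), (3, 'b'), (2, 'c')]
assert rle_decode(encoded) == "aaaabbbcc"
```

Run-length encoding only pays off when the input contains long runs; Huffman and Lempel-Ziv coding instead exploit symbol frequencies and repeated patterns.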
  5. Real-Time vs. Non-Real-Time
     Some compression systems capture, compress to disk, decompress and play back video (30 frames per second) all in real time; there are no delays. Other systems are only capable of capturing some of the 30 frames per second and/or are only capable of playing back some of the frames.
     Insufficient frame rate is one of the most noticeable video deficiencies. Without a minimum of 24 frames per second, the video will be noticeably jerky. In addition, the missing frames will contain extremely important lip-synchronisation data. If the movement of a person's lips is missing due to frames dropped during capture or playback, it is impossible to match the audio correctly with the video.
  6. Symmetrical vs. Asymmetrical
     This refers to how video images are compressed and decompressed. Symmetrical compression means that if you can play back a sequence of 640 by 480 video at 30 frames per second, then you can also capture, compress and store it at that rate. Asymmetrical compression means just the opposite.
     The degree of asymmetry is usually expressed as a ratio. A ratio of 150:1 means it takes approximately 150 minutes to compress one minute of video. Asymmetrical compression can sometimes be more elaborate and more efficient for quality and speed at playback because it uses so much more time to compress the video.
     The two big drawbacks to asymmetrical compression are that it takes a lot longer, and often you must send the source material out to a dedicated compression company for encoding.
  7. Compression Ratio
     The compression ratio relates the amount of data in the original video to the amount in the compressed video. For example, a 200:1 compression ratio means that for every 200 units of data in the original video, the compressed video uses only 1.
     With MPEG, compression ratios of 100:1 are common with good image quality. Motion JPEG provides ratios ranging from 15:1 to 80:1, although 20:1 is about the maximum for maintaining a good-quality image.
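As a quick worked example with illustrative numbers (assuming 640x480 frames at 24-bit colour):

```python
def compression_ratio(original_bytes, compressed_bytes):
    """How many bytes of source each compressed byte represents."""
    return original_bytes / compressed_bytes

# One second of uncompressed 640x480, 24-bit (3-byte) video at 30 fps:
original = 640 * 480 * 3 * 30      # 27,648,000 bytes, about 26 MB
compressed = original // 100       # what a 100:1 MPEG-style ratio leaves
print(compression_ratio(original, compressed))  # → 100.0
```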
  8. Interframe vs. Intraframe
     One of the most powerful techniques for compressing video is interframe compression. Interframe compression uses one or more earlier or later frames in a sequence to compress the current frame, while intraframe compression uses only the current frame, which is effectively image compression.
     Since interframe compression copies data from one frame to another, if the original frame is simply cut out (or lost in transmission), the following frames cannot be reconstructed properly. Making cuts in intraframe-compressed video is almost as easy as editing uncompressed video: one finds the beginning and end of each frame, copies bit-for-bit each frame to keep, and discards the frames one doesn't want.
     Another difference between intraframe and interframe compression is that with intraframe systems, each frame uses a similar amount of data.
  9. Bit Rate Control
     A good compression system should allow the user to instruct the compression hardware and software which parameters are most important. In some applications, frame rate may be of paramount importance, while frame size is not. In other applications, you may not care if the frame rate drops below 15 frames per second, but the quality of those frames must be very good.
  10. Introduction to Digital Video
      • Video is a stream of data composed of discrete frames, containing both audio and pictures
      • Continuous motion is produced at a frame rate of 15 fps or higher
      • Traditional movies run at 24 fps
      • The TV standard in the USA (NTSC) uses ≈ 30 fps
      With digital video, four factors have to be kept in mind:
      # Frame Rate
      # Colour Resolution
      # Spatial Resolution
      # Image Quality
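The first three factors fix the raw data rate before any compression is applied. A small helper with illustrative numbers shows why compression is unavoidable:

```python
def raw_bitrate(width, height, bits_per_pixel, fps):
    """Uncompressed video bit rate in bits per second."""
    return width * height * bits_per_pixel * fps

# 640x480 frames, 24-bit colour, 30 fps:
bps = raw_bitrate(640, 480, 24, 30)
print(bps / 1_000_000, "Mbit/s")  # → 221.184 Mbit/s, far beyond most links
```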
  11. Frame Rate
      The standard for displaying any type of non-film video is 30 frames per second (film is 24 frames per second). Additionally, these frames are split in half (odd lines and even lines) to form what are called fields.
      When a television set displays its analogue video signal, it displays the odd lines (the odd field) first. Then it displays the even lines (the even field). Each pair forms a frame, and there are 60 of these fields displayed every second (or 30 frames per second). This is referred to as interlaced video.
      (Figure: a two-frame fragment of "The Matrix"; after processing by an FRC filter, the frame rate is increased 4 times.)
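Weaving the two fields back into a full frame can be sketched like this (a toy model in which each scan line is just a list entry):

```python
def weave_fields(odd_field, even_field):
    """Interleave the odd field (lines 1, 3, 5, ...) with the even
    field (lines 2, 4, 6, ...) to rebuild one progressive frame."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

frame = weave_fields(["line1", "line3"], ["line2", "line4"])
# frame == ["line1", "line2", "line3", "line4"]
```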
  12. Colour Resolution
      This second factor is a bit more complex. Colour resolution refers to the number of colours displayed on the screen at one time. Computers deal with colour in an RGB (red-green-blue) format, while video uses a variety of formats. One of the most common video formats is called YUV.
      (Figure: a test chart used to estimate colour resolution; the border at which one of the colours on the chart disappears is found, and colour sharpness is read off the scale on the right.)
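The RGB-to-YUV conversion can be sketched with the BT.601 coefficients (one common definition; the exact constants vary between standards):

```python
def rgb_to_yuv(r, g, b):
    """BT.601-style conversion: Y carries brightness, U and V carry
    colour. Because the eye is less sensitive to U and V, codecs can
    store them at lower resolution (chroma subsampling)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

y, u, v = rgb_to_yuv(255, 255, 255)   # pure white
# y == 255.0, while u and v are (near) zero: white has no colour content
```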
  13. Spatial Resolution
      The third factor is spatial resolution, or in other words, "How big is the picture?". PC and Macintosh computers generally have resolutions in excess of 640 by 480.
      The National Television System Committee (NTSC) standard used in North American and Japanese television uses a 768 by 484 display. The Phase Alternating Line (PAL) standard for European television is slightly larger, at 768 by 576.
      Spatial resolution is a parameter that shows how many pixels are used to represent a real object in digital form. (Figure: the same colour image at two spatial resolutions; the left flower has much better resolution than the right one.)
  14. Image Quality
      The final objective is video that looks acceptable for your application. For some, this may be 1/4 screen, 15 frames per second (fps), at 8 bits per pixel. Others require full-screen (768 by 484), full-frame-rate video at 24 bits per pixel (16.7 million colours).
  15. MPEG Compression
      Compression through:
      • Spatial redundancy reduction
      • Temporal redundancy reduction
  16. Spatial Redundancy
      Take advantage of the similarity among most neighboring pixels.
  17. Spatial Redundancy Reduction
      • RGB to YUV: less information is required for YUV (humans are less sensitive to chrominance)
      • Macroblocks: take groups of pixels (16x16)
      • Discrete Cosine Transformation (DCT): based on Fourier analysis, which represents a signal as a sum of sines and cosines; concentrates the energy in the low-frequency values, so the pixels in a block can be represented with fewer numbers
      • Quantization: reduce the data required for the coefficients
      • Entropy coding: compress the result
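To make the DCT and quantization steps concrete, here is a plain 8-point DCT-II plus a crude quantizer. This is a from-scratch sketch; real encoders use fast 2D transforms and standardized quantization matrices:

```python
import math

def dct_1d(block):
    """8-point DCT-II: express the samples as weights of cosines of
    increasing frequency."""
    n = len(block)
    coeffs = []
    for k in range(n):
        s = sum(block[i] * math.cos(math.pi * (i + 0.5) * k / n)
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        coeffs.append(scale * s)
    return coeffs

def quantize(coeffs, q=16):
    """Coarse quantization: small (mostly high-frequency) terms
    collapse to zero, which is where the compression comes from."""
    return [round(c / q) for c in coeffs]

flat = dct_1d([100] * 8)   # a perfectly flat run of pixels
# All the energy lands in the first (DC) coefficient:
# quantize(flat) == [18, 0, 0, 0, 0, 0, 0, 0]
```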
  18. Spatial Redundancy Reduction ("Intra-Frame Encoded")
      Pipeline: Quantization (the major reduction step; controls "quality") → Zig-Zag Scan → Run-length coding
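The zig-zag scan and the run-length step can be sketched as follows (illustrative code; the "EOB" string stands in for the encoder's end-of-block symbol):

```python
def zigzag(block):
    """Read an n x n block along anti-diagonals, alternating direction,
    so low-frequency coefficients come first and the zeros left by
    quantization cluster into one long tail."""
    n = len(block)
    out = []
    for d in range(2 * n - 1):
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        if d % 2 == 0:
            cells.reverse()  # even diagonals run bottom-left to top-right
        out.extend(block[i][j] for i, j in cells)
    return out

def run_length(seq):
    """Emit (zero_run, value) pairs, the form entropy coders consume."""
    out, zeros = [], 0
    for v in seq:
        if v == 0:
            zeros += 1
        else:
            out.append((zeros, v))
            zeros = 0
    out.append("EOB")  # trailing zeros are implied by the end-of-block
    return out

print(zigzag([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# → [1, 2, 4, 7, 5, 3, 6, 8, 9]
print(run_length([5, 0, 0, 3, 0, 0, 0, 0, 0]))
# → [(0, 5), (2, 3), 'EOB']
```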
  19. Loss of Resolution
      Original (63 kb), Low (7 kb), Very Low (4 kb)
  20. Temporal Redundancy
      Take advantage of the similarity between successive frames (e.g. frames 950, 951, 952).
  21. Temporal Activity: "Talking Head"
  22. Temporal Redundancy Reduction
  23. Temporal Redundancy Reduction
      • I-frames are independently encoded
      • P-frames are based on previous I and P frames
        – Can send a motion vector plus the changes
      • B-frames are based on previous and following I and P frames
        – In case something is uncovered
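A toy P-frame over a 1-D "frame" of pixel values shows the payoff. This is a sketch only; real codecs work on motion-compensated macroblocks, not raw per-pixel differences:

```python
def encode_p_frame(prev, cur):
    """Store only the change from the previous decoded frame."""
    return [c - p for p, c in zip(prev, cur)]

def decode_p_frame(prev, delta):
    """Rebuild the frame from its reference plus the stored changes."""
    return [p + d for p, d in zip(prev, delta)]

prev = [10, 10, 10, 10]
cur  = [10, 12, 10, 10]               # only one pixel changed
delta = encode_p_frame(prev, cur)     # [0, 2, 0, 0]: mostly zeros,
assert decode_p_frame(prev, delta) == cur   # which compress very well
```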
  24. Group of Pictures (GOP)
      • Starts with an I-frame
      • Ends with the frame right before the next I-frame
      • "Open" GOPs end in a B-frame, "Closed" GOPs in a P-frame
        – (What is the difference?)
      • MPEG encoding takes the GOP pattern as a parameter, but "typical" patterns are:
        – IBBPBBPBBI
        – IBBPBBPBBPBBI
      • Why not have all P and B frames after the initial I?
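Because each B-frame needs the *following* I or P frame as a reference, frames are transmitted in a different order than they are displayed. A sketch of that reordering (a hypothetical helper; labels here are frame type plus display position):

```python
def decode_order(display_order):
    """Move each I/P frame ahead of the B-frames that reference it,
    so the decoder has both references before it meets a B-frame."""
    out, pending_b = [], []
    for frame in display_order:
        if frame.startswith("B"):
            pending_b.append(frame)    # hold Bs until their forward
        else:                          # reference has been emitted
            out.append(frame)
            out.extend(pending_b)
            pending_b = []
    out.extend(pending_b)
    return out

print(decode_order(["I1", "B2", "B3", "P4", "B5", "B6", "P7"]))
# → ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6']
```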
  25. Typical MPEG Parameters
  26. Typical Compression Performance

      Type   Size     Compression
      I      18 KB    7:1
      P      6 KB     20:1
      B      2.5 KB   50:1
      Avg    4.8 KB   27:1

      Note: the result is a variable bit rate, even if the frame rate is constant.
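The average in the table can be roughly reproduced from the per-type sizes and a typical GOP pattern (the exact figure depends on which pattern is assumed; IBBPBBPBBPBB gives a value close to the quoted 4.8 KB):

```python
# Per-frame-type sizes from the table above, in KB:
sizes = {"I": 18.0, "P": 6.0, "B": 2.5}
gop = "IBBPBBPBBPBB"   # 1 I, 3 P, 8 B per 12-frame GOP

avg = sum(sizes[f] for f in gop) / len(gop)
print(round(avg, 1), "KB")  # → 4.7 KB
```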
  27. MPEG (Moving Picture Experts Group)
      MPEG set the standards for audio and video compression and transmission.
      MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to 1.5 Mbit/s (26:1 and 6:1 compression ratios respectively) without excessive quality loss, making Video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) possible.
      MPEG-1 has become the most widely compatible lossy audio/video format in the world, and is used in a large number of products and technologies. The best-known part of the MPEG-1 standard is the MP3 audio format.
      The standard consists of the following five parts:
      1. Systems (storage and synchronization of video, audio, and other data together)
      2. Video (compressed video content)
      3. Audio (compressed audio content)
      4. Conformance testing
      5. Reference software
  28. MPEG-2
      • Designed for coding interlaced images at transmission rates above 4 million bits per second.
      • Can be used on HD DVD and Blu-ray discs.
      • Handles 5 audio channels.
      • Covers a wider range of frame sizes (HDTV).
      • Provides resolutions of 720x480 and 1280x720 at 60 fps, with full CD-quality audio, as used by DVD-ROM.
      • Can compress 2 hours of video into a few gigabytes.
      • Used for digital TV broadcast and DVD.
      MPEG-2 is designed to offer higher quality than MPEG-1, at a higher bandwidth (between 4 and 10 Mbit/s). The scheme is very similar to MPEG-1, and scalable.
  29. MPEG-3
      Designed to handle HDTV signals in the range of 20 to 40 Mbit/s. HDTV resolution is 1920x1080 at 30 Hz. But MPEG-2 proved fully capable of handling HDTV, so MPEG-3 is no longer mentioned.
  30. MPEG-4
      MPEG-4 is a collection of methods defining compression of audio and visual (AV) digital data. It absorbs many of the features of MPEG-1, MPEG-2 and other related standards. MPEG-4 files are smaller than JPEG files, so they can transmit video and images over narrower bandwidth, and they can mix video with text, graphics, and 2D and 3D animation layers.
      MPEG-4 provides a series of technologies for developers, for various service providers and for end users. Service providers use it for data transparency, and it helps end users with a wide range of interaction with animated objects. MPEG-4 multiplexes and synchronizes data, and supports interaction with the audio-visual scene.
  31. MPEG-7
      MPEG-7 is a content representation standard for information search. It is also titled "Multimedia Content Description Interface". It defines the manner in which audiovisual materials can be coded and classified, so the materials can be easily located using search engines, just as search engines are used to locate text-based information. Music, art, line drawings, photos, and videos are examples of the kinds of materials that will become searchable based on the descriptive language defined by MPEG-7.
      * Provides a fast and efficient searching, filtering and content-identification method.
      * Describes the main aspects of the content (low-level characteristics, structure, models, collections, etc.).
      * Indexes a big range of applications.
      * The audiovisual information MPEG-7 deals with includes audio, voice, video, images, graphs and 3D models.
      * Informs about how objects are combined in a scene.
      * Maintains independence between the description and the information itself.
  32. MPEG-7 Applications
      * Digital libraries: image/video catalogues, musical dictionaries.
      * Multimedia directory services: e.g. yellow pages.
      * Broadcast media selection: radio channels, TV channels.
      * Multimedia editing: personalized electronic news services, media authoring.
      * Security services: traffic control, production chains, etc.
      * E-business: product search.
      * Cultural services: art galleries, museums, etc.
      * Educational applications.
      * Biomedical applications.
  33. Still Image Compression - JPEG
      • Defined by the Joint Photographic Experts Group
      • Released as an ISO standard for still colour and grey-scale images
      • Provides four modes of operation:
        — Sequential (each pixel is traversed only once)
        — Progressive (the image gets progressively sharper)
        — Hierarchical (the image is compressed to multiple resolutions)
        — Lossless (full detail at the selected resolution)
      Definitions in the JPEG Standard — three levels of definition:
      • Baseline system (every codec must implement it)
      • Extended system (methods to extend the baseline system)
      • Special lossless function (ensures lossless compression/decompression)
  34. H.261 (Px64)
      • H.261 was designed for data rates which are multiples of 64 Kbit/s, and is sometimes called p x 64 Kbit/s (p is in the range 1-30).
      • These data rates suit the ISDN lines for which this video codec was designed.
      • Intended for videophone and videoconferencing systems.
  35. H.263 Standard
      • The development of modems allowing transmission in the range of 28-33 kbps paved the way for an improved version of H.261.
      • It was designed for low-bit-rate communication; however, this limitation has now been removed.
      • It is expected that H.263 will replace H.261.
  36. Prepared by: Saurabh Verma, B.Tech Vth Sem., CSE 12