VII Compression Introduction


Published in: Education


  1. Compression Fundamentals
  2. Topics today…
     • Why Compression?
     • Information Theory Basics
     • Classification of Compression Algorithms
     • Data Compression Model
     • Compression Performance
  3. Why Compression?
     • Digital representation of analog signals requires huge storage
       ◦ A high-quality audio signal requires about 1.5 megabits/sec
       ◦ A low-resolution movie (30 frames per second, 640 x 580 pixels per frame, 24 bits per pixel) requires
         - about 210 megabits per second!!
         - about 95 gigabytes per hour
     • Transferring such files over the limited bandwidth of available networks is challenging.
  4. Why Compression?

     Table 1: Uncompressed source data rates

     Source                        Bit rate for uncompressed source (approximate)
     Telephony (200–3400 Hz)       8000 samples/sec x 12 bits/sample = 96 kbps
     Wideband audio (20–20000 Hz)  44100 samples/sec x 2 channels x 16 bits/sample = 1.412 Mbps
     Images                        512 x 512 pixels x 24 bits/pixel = 6.3 Mbits/image
     Video                         640 x 480 pixels x 24 bits/pixel x 30 images/sec = 221 Mbps
                                   (so a 650-megabyte CD can store only about 23.5 seconds of such video!)
     HDTV                          1280 x 720 pixels x 24 bits/pixel x 60 images/sec = 1.3 Gbps
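The table's figures can be checked with a few lines of arithmetic. A sketch (treating the 650 MB CD capacity as decimal megabytes, as the slide appears to):

```python
# Recomputing the uncompressed data rates from Table 1.
telephony_bps = 8000 * 12                 # 96 kbps
wideband_audio_bps = 44100 * 2 * 16       # about 1.41 Mbps
image_bits = 512 * 512 * 24               # about 6.3 Mbit per image
video_bps = 640 * 480 * 24 * 30           # about 221 Mbps
hdtv_bps = 1280 * 720 * 24 * 60           # about 1.33 Gbps

# Uncompressed 640x480 video on a 650-megabyte CD lasts only seconds:
cd_bits = 650 * 10**6 * 8
cd_seconds = cd_bits / video_bps          # roughly 23.5 seconds
```

Dividing the CD's capacity in bits by the raw video rate gives roughly 23.5 seconds, which is the point the table is making.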
  5. The compression problem
     • Efficient digital representation of a source
     • Data compression is the representation of the source in digital form with as few bits as possible while maintaining an acceptable loss in fidelity. The source can be data, still images, speech, audio, video, or whatever signal needs to be stored and transmitted.
  6. Synonyms for Data Compression
     • Signal compression and signal coding
     • Source coding and source coding with a fidelity criterion (in information theory)
     • Noiseless and noisy source coding (lossless and lossy compression)
       ◦ "Noise" here means reconstruction noise
     • Bandwidth compression, redundancy removal (more dated terminology from the 1980s)
  7. Types of Data Compression Problem
     • Distortion-rate problem
       ◦ Given a constraint on the transmitted data rate or storage capacity, the problem is to compress the source at or below this rate but at the highest fidelity possible
       ◦ Examples: voice mail, video conferencing, digital cellular
     • Rate-distortion problem
       ◦ Given a constraint on the fidelity, the problem is to achieve it with as few bits as possible
       ◦ Example: CD-quality audio
  8. Information Theory Basics
     • A data representation is the combination of information and redundancy
     • Data compression is essentially a redundancy reduction technique
     • A data compression scheme can be broadly divided into two phases
       ◦ Modelling
       ◦ Coding
  9. Information Theory Basics
     • In the modelling phase, information about redundancy is analyzed and represented as a model
       ◦ This can be done by observing the empirical distribution of the symbols the source generates
     • In the coding phase, the difference between the actual data and the model is coded
  10. Discrete Memoryless Model
     • A source is discrete memoryless if it generates symbols that are statistically independent of one another
     • It is described by the source alphabet A = {a1, a2, a3, …, an} and the associated probabilities P = (p(a1), p(a2), p(a3), …, p(an))
     • The amount of information content for a source symbol ai is

           I(ai) = −log2 p(ai)

     • The base-2 logarithm indicates that the information content is measured in bits. Higher-probability symbols are coded with fewer bits.
  11. Discrete Memoryless Model [2]
     • Averaging the information content over all symbols, we get the entropy E as follows

           E = −Σi p(ai) log2 p(ai)

     • Hence, entropy is the expected length of a binary code over all the symbols.
     • Estimation of entropy depends on the observations and assumptions made about the structure of the source symbols
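The two definitions above translate directly into code. A minimal sketch (function names are my own):

```python
import math

def information(p):
    """Self-information I(a) = -log2 p(a), in bits."""
    return -math.log2(p)

def entropy(probs):
    """Entropy E = -sum p_i * log2 p_i, in bits per symbol (terms with p = 0 are skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The four-symbol source used in the entropy-reduction slides:
E = entropy([0.65, 0.20, 0.10, 0.05])   # about 1.42 bits/symbol
```

Note how `information(0.5)` is exactly 1 bit, while a near-certain symbol like `information(0.99)` carries almost no information, matching the intuition that high-probability symbols deserve short codes.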
  12. Noiseless source coding theorem
     • The noiseless source coding theorem states that any source can be losslessly encoded with a code whose average number of bits per source symbol is arbitrarily close to, but not less than, the source entropy E in bits, by coding infinitely long extensions of the source.
  13. Entropy Reduction
     • Consider a discrete memoryless source with source alphabet A1 = {α, β, γ, δ} and probabilities p(α) = 0.65, p(β) = 0.20, p(γ) = 0.10, p(δ) = 0.05
     • The entropy of this source is

           E = −(0.65 log2 0.65 + 0.20 log2 0.20 + 0.10 log2 0.10 + 0.05 log2 0.05)
             = 1.42 bits per symbol

     • A data source of 2000 symbols can therefore be represented using 2000 x 1.42 = 2840 bits
  14. Entropy Reduction [2]
     • Now assume we know something about the structure of the sequence
       ◦ Alphabet A2 = {0, 1, 2, 3}
       ◦ Sequence D = 0 1 1 2 3 3 3 3 3 3 3 3 3 2 2 2 3 3 3 3
       ◦ p(0) = 0.05, p(1) = 0.10, p(2) = 0.20, p(3) = 0.65
       ◦ E = 1.42 bits per symbol
     • Consecutive samples are correlated; we attempt to exploit this by coding the residuals ri = si − si−1 for each sample si
  15. Entropy Reduction [3]
     • Now
       ◦ Residual sequence D′ = 0 1 0 1 1 0 0 0 0 0 0 0 0 −1 0 0 1 0 0 0
       ◦ Alphabet {−1, 0, 1}
       ◦ p(−1) = 0.05, p(1) = 0.20, p(0) = 0.75
       ◦ E = 0.992 bits per symbol
     • With an appropriate entropy coding technique, this lower entropy translates into more compression.
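The residual transform on these two slides can be reproduced in a few lines, confirming the drop from 1.42 to 0.992 bits/symbol (a sketch; the helper name is my own):

```python
import math
from collections import Counter

def empirical_entropy(seq):
    """Entropy of the empirical symbol distribution of seq, in bits per symbol."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

D = [0, 1, 1, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 3, 3, 3, 3]
# Residual transform r_i = s_i - s_{i-1}, keeping the first sample unchanged:
R = [D[0]] + [D[i] - D[i - 1] for i in range(1, len(D))]
```

`R` matches the residual sequence on the slide, and `empirical_entropy(R)` comes out near 0.992 while `empirical_entropy(D)` is about 1.42. The transform is reversible (a running sum recovers `D`), so no information is lost in this step.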
  16. Unique Decipherability
     • Consider the following table (not reproduced in this transcript), in which the symbols α, β, γ, δ are encoded with three codes A, B, and C.
     • Consider the string S = ααγαβαδ
  17. Unique Decipherability [2]
     • Deciphering CA(S) and CB(S) is unambiguous, and we recover the string S
     • CC(S) is ambiguous and not uniquely decipherable
     • Fixed-length codes are always uniquely decipherable.
     • Not all variable-length codes are uniquely decipherable.
  18. Unique Decipherability [3]
     • Codes that satisfy the prefix property (no codeword in the code-set forms the prefix of another distinct codeword) are always uniquely decipherable
     • Popular variable-length coding techniques
       ◦ Shannon-Fano coding
       ◦ Huffman coding
       ◦ Elias coding
       ◦ Arithmetic coding
     • Fixed-length codes can be treated as a special case of uniquely decipherable variable-length codes.
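Of the techniques listed, Huffman coding is the easiest to sketch. Below is a minimal implementation (my own, not from the slides) applied to the four-symbol source from the entropy example, plus a check of the prefix property; the resulting average codeword length, 1.5 bits, lands close to but not below the 1.42-bit entropy, as the noiseless source coding theorem demands:

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code for a {symbol: probability} map; returns {symbol: bitstring}."""
    tiebreak = count()  # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # merge the two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

def is_prefix_free(code):
    """Check the prefix property: no codeword is a prefix of another."""
    words = list(code.values())
    return not any(a != b and b.startswith(a) for a in words for b in words)

code = huffman_code({"α": 0.65, "β": 0.20, "γ": 0.10, "δ": 0.05})
```

For this source the codeword lengths come out as 1, 2, 3, and 3 bits: the most probable symbol α gets the shortest codeword.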
  19. Classification of compression algorithms
     (Slide figure: CODEC block diagram)
  20. Classification of compression algorithms [2]
     • Data compression is a method that takes input data D and generates a shorter representation of the data, c(D), with fewer bits than D
     • The reverse process is called decompression; it takes the compressed data c(D) and generates or reconstructs the data D′
     • The compression (coding) and decompression (decoding) systems together are sometimes called a "CODEC"
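A toy codec makes the D → c(D) → D′ pipeline concrete. This run-length-encoding sketch (my own example, not from the slides) is lossless: the decoder reconstructs D′ identical to D:

```python
def rle_compress(data):
    """Toy coder: string -> list of (char, run_length) pairs, i.e. c(D)."""
    pairs = []
    for ch in data:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)   # extend the current run
        else:
            pairs.append((ch, 1))                # start a new run
    return pairs

def rle_decompress(pairs):
    """Matching decoder: reconstructs the original string exactly."""
    return "".join(ch * n for ch, n in pairs)

D = "aaaabbbbbbcccd"
cD = rle_compress(D)          # the shorter representation
D_prime = rle_decompress(cD)  # lossless: D' == D
```

Run-length coding only pays off when the input actually contains long runs; on data with no repetition, c(D) can be larger than D, foreshadowing the point below that achievable compression is input-dependent.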
  21. Classification of compression algorithms [3]
     • If the reconstructed data D′ is an exact replica of the original data D, the algorithms used to compress D and decompress c(D) are lossless; otherwise the algorithms are lossy
     • Text, scientific data, and medical images are some of the applications that require lossless compression
     • Compression can be static or dynamic, depending on the coding scheme used
  22. Data compression model
     • A data compression system mainly consists of three major steps
       ◦ removal or reduction of data redundancy
       ◦ reduction in entropy
       ◦ entropy encoding
  23. Data compression model: REDUCTION IN DATA REDUNDANCY
     • Removal or reduction of data redundancy is typically achieved by transforming the original data from one form or representation to another
     • Popular transformation techniques are
       ◦ Discrete Cosine Transform (DCT)
       ◦ Discrete Wavelet Transform (DWT), etc.
     • This step leads to the reduction of entropy
     • For lossless compression, this transformation is completely reversible
  24. Data compression model: REDUCTION IN ENTROPY
     • A non-reversible process
     • Achieved by dropping insignificant information in the transformed data (lossy!!!)
     • Done by quantization techniques
     • The amount of quantization dictates the quality of the reconstructed data
     • The entropy of the quantized data is lower than that of the original, hence more compression.
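A uniform quantizer illustrates both claims above: it lowers the entropy of the data and cannot be undone. A minimal sketch (function names are my own):

```python
import math
from collections import Counter

def quantize(samples, step):
    """Uniform quantization: snap each sample to the nearest multiple of `step`.

    Distinct inputs can map to the same output, so the step is non-reversible.
    """
    return [step * round(s / step) for s in samples]

def empirical_entropy(seq):
    """Entropy of the empirical symbol distribution of seq, in bits per symbol."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

samples = list(range(16))       # 16 equally likely values: entropy is exactly 4 bits
coarse = quantize(samples, 4)   # far fewer distinct levels survive, so entropy drops
```

Re-quantizing `coarse` changes nothing (the information is already gone), which is exactly what "non-reversible" means here: the original fine-grained values cannot be recovered from the quantized ones.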
  25. Compression Performance
     • The performance of data compression algorithms can be measured from different perspectives, depending on the application requirements
       ◦ amount of compression achieved
       ◦ objective and subjective quality of the reconstructed data
       ◦ relative complexity of the algorithm
       ◦ speed of execution, etc.
  26. Compression Performance: AMOUNT OF COMPRESSION ACHIEVED
     • Compression ratio: the ratio of the number of bits needed to represent the original data to the number of bits needed to represent the compressed data
     • The compression ratio achievable with a lossless compression scheme is entirely dependent on the input data.
     • Sources with less redundancy have more entropy, and hence compression is harder to achieve
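The input-dependence of lossless compression is easy to demonstrate with Python's standard `zlib` compressor: a highly redundant input compresses enormously, while near-random data barely compresses at all (my own example, not from the slides):

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Original size / compressed size for zlib (a lossless coder); higher is better."""
    return len(data) / len(zlib.compress(data))

redundant = b"ab" * 5000       # low entropy: one short pattern repeated 5000 times
random.seed(0)                 # seeded so the demo is repeatable
noisy = bytes(random.randrange(256) for _ in range(10000))  # near-maximal entropy
```

For `redundant` the ratio runs well into the hundreds, while for `noisy` it hovers around 1 (zlib's framing overhead can even make the output slightly larger than the input).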
  27. Compression Performance: SUBJECTIVE QUALITY METRIC
     • MOS: mean observers score, or mean opinion score, is a common measure
       ◦ A statistically significant number of observers are randomly chosen to evaluate the visual quality of the reconstructed images.
       ◦ Each observer assigns a numeric score to each reconstructed image based on his or her perception of its quality, say within a range 1–5, where 5 is the highest quality and 1 the worst.
       ◦ The MOS is the average of these scores
  28. Compression Performance: OBJECTIVE QUALITY METRIC
     • Common quality metrics are
       ◦ root-mean-squared error (RMSE)
       ◦ signal-to-noise ratio (SNR)
       ◦ peak signal-to-noise ratio (PSNR)
     • If I is an M x N image and I′ is the corresponding reconstructed image after compression and decompression, RMSE is calculated by

           RMSE = sqrt( (1 / MN) Σi Σj (I(i, j) − I′(i, j))² )

     • The SNR in decibel units (dB) is expressed as

           SNR = 20 log10 ( sqrt( (1 / MN) Σi Σj I(i, j)² ) / RMSE )
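These metrics are a few lines each. A sketch over flat pixel sequences rather than 2-D images, using the common 8-bit peak value of 255 for PSNR (an assumption, since the slide does not state it):

```python
import math

def rmse(original, reconstructed):
    """Root-mean-squared error between two equal-length pixel sequences."""
    n = len(original)
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n)

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB; infinite when the reconstruction is exact."""
    e = rmse(original, reconstructed)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)
```

A uniform error of one grey level in an 8-bit image gives an RMSE of 1.0 and a PSNR of about 48.1 dB, which is why PSNR values in the 30-50 dB range are typical for usable lossy image compression.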
  29. Compression Performance: CODING DELAY AND COMPLEXITY
     • Coding delay: a performance measure for compression algorithms where interactive encoding and decoding is the requirement (e.g., interactive video teleconferencing, on-line image browsing, real-time voice communication)
     • The more complex the compression algorithm, the greater the coding delay
     • To limit delay, compression system designers often use a less sophisticated algorithm for the compression system.
  30. Compression Performance: CODING DELAY AND COMPLEXITY [2]
     • Coding complexity: a performance measure considered where the computational requirement to implement the codec is an important criterion
     • MOPS (millions of operations per second) and MIPS (millions of instructions per second) are often used to measure compression performance on a specific computing engine's architecture.