# Rate Distortion Theory & Quantization



1. **Rate Distortion Theory & Quantization** (outline)
   - Rate Distortion Theory
   - Rate Distortion Function
   - $R(D^*)$ for Memoryless Gaussian Sources
   - $R(D^*)$ for Gaussian Sources with Memory
   - Scalar Quantization
   - Lloyd-Max Quantizer
   - High-Resolution Approximations
   - Entropy-Constrained Quantization
   - Vector Quantization

   Thomas Wiegand: Digital Image Communication, RD Theory and Quantization
2. **Rate Distortion Theory**
   - Theoretical discipline treating data compression from the viewpoint of information theory.
   - Results of rate distortion theory are obtained without consideration of a specific coding method.
   - Goal: rate distortion theory calculates the minimum transmission bit-rate $R$ for a given distortion $D$ and source.
3. **Transmission System**
   - Chain: Source $\to$ Coder $\to$ Decoder $\to$ Sink, with source output $U$, reconstruction $V$, distortion $D$, and bit-rate $R$.
   - Need to define $U$, $V$, coder/decoder, distortion $D$, and rate $R$.
   - Need to establish the functional relationship between $U$, $V$, $D$, and $R$.
4. **Definitions**
   - Source symbols are given by the random sequence $\{U_k\}$.
     - Each $U_k$ assumes values in the discrete set $\mathcal{U} = \{u_0, u_1, \dots, u_{M-1}\}$; for a binary source $\mathcal{U} = \{0, 1\}$, for a picture $\mathcal{U} = \{0, 1, \dots, 255\}$.
     - For simplicity, assume the $U_k$ to be independent and identically distributed (i.i.d.) with distribution $\{P(u),\ u \in \mathcal{U}\}$.
   - Reconstruction symbols are given by the random sequence $\{V_k\}$ with distribution $\{P(v),\ v \in \mathcal{V}\}$.
     - Each $V_k$ assumes values in the discrete set $\mathcal{V} = \{v_0, v_1, \dots, v_{N-1}\}$.
     - The sets $\mathcal{U}$ and $\mathcal{V}$ need not be the same.
5. **Coder / Decoder**
   - Statistical description of the coder/decoder, i.e. the mapping of source symbols to reconstruction symbols, via $Q = \{Q(v \mid u),\ u \in \mathcal{U},\ v \in \mathcal{V}\}$: the conditional probability distribution over the letters of the reconstruction alphabet given a letter of the source alphabet.
   - The transmission system is described via the joint distribution $P(u,v)$:
     $P(u) = \sum_v P(u,v)$, $\quad P(v) = \sum_u P(u,v)$, $\quad P(u,v) = P(u)\, Q(v \mid u)$ (Bayes' rule).
6. **Distortion**
   - To determine distortion, define a non-negative cost function $d(u,v)$ with $d(\cdot,\cdot): \mathcal{U} \times \mathcal{V} \to [0, \infty)$.
   - Examples for $d$:
     - Hamming distance: $d(u,v) = 0$ for $u = v$, $d(u,v) = 1$ for $u \neq v$.
     - Squared error: $d(u,v) = (u - v)^2$.
   - Average distortion: $D(Q) = \sum_u \sum_v \underbrace{P(u)\, Q(v \mid u)}_{P(u,v)}\, d(u,v)$
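The average distortion $D(Q)$ can be evaluated directly for a discrete source. A minimal NumPy sketch, using a hypothetical binary source with Hamming distance; the distribution and mapping below are made-up toy values:

```python
import numpy as np

P_u = np.array([0.7, 0.3])            # toy source distribution P(u)
Q = np.array([[0.9, 0.1],             # toy mapping Q(v|u): rows index u, columns v
              [0.2, 0.8]])
d = 1.0 - np.eye(2)                   # Hamming distance: 0 if u = v, else 1

P_uv = P_u[:, None] * Q               # joint distribution P(u,v) = P(u) Q(v|u)
D = np.sum(P_uv * d)                  # D(Q) = sum_u sum_v P(u,v) d(u,v)
print(D)                              # 0.7*0.1 + 0.3*0.2 = 0.13
```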
7. **Mutual Information**
   - Shannon average mutual information (ld denotes $\log_2$):
     $I = H(U) - H(U \mid V) = -\sum_u P(u)\, \mathrm{ld}\, P(u) + \sum_u \sum_v P(u,v)\, \mathrm{ld}\, P(u \mid v)$
     $= -\sum_u \sum_v P(u,v)\, \mathrm{ld}\, P(u) + \sum_u \sum_v P(u,v)\, \mathrm{ld}\, \frac{P(u,v)}{P(v)}$
     $= \sum_u \sum_v P(u,v)\, \mathrm{ld}\, \frac{P(u,v)}{P(u)\, P(v)}$
   - Using Bayes' rule:
     $I(Q) = \sum_u \sum_v \underbrace{P(u)\, Q(v \mid u)}_{P(u,v)}\, \mathrm{ld}\, \frac{Q(v \mid u)}{P(v)}$, with $P(v) = \sum_u P(u)\, Q(v \mid u)$.
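With a hypothetical toy $P(u)$ and mapping $Q(v \mid u)$, the mutual information $I(Q)$ follows directly from the Bayes form above, taking ld as $\log_2$ (a sketch, assuming NumPy):

```python
import numpy as np

P_u = np.array([0.7, 0.3])            # toy source distribution P(u)
Q = np.array([[0.9, 0.1],             # toy mapping Q(v|u)
              [0.2, 0.8]])

P_uv = P_u[:, None] * Q               # P(u,v) = P(u) Q(v|u)  (Bayes' rule)
P_v = P_uv.sum(axis=0)                # P(v) = sum_u P(u) Q(v|u)

# I(Q) = sum_u sum_v P(u,v) log2( Q(v|u) / P(v) )
I = np.sum(P_uv * np.log2(Q / P_v))
print(I)                              # about 0.348 bit, bounded above by H(U)
```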
8. **Rate**
   - Shannon average mutual information expressed via entropy: $I(U;V) = H(U) - H(U \mid V)$, where $H(U)$ is the source entropy and $H(U \mid V)$ the equivocation (conditional entropy).
   - Equivocation:
     - The conditional entropy (uncertainty) about the source $U$ given the reconstruction $V$.
     - A measure for the amount of [quantized-away] information missing in the received signal $V$.
9. **Rate Distortion Function**
   - Definition: $R(D^*) = \min_{Q:\, D(Q) \le D^*} \{I(Q)\}$
   - For a given maximum average distortion $D^*$, the rate distortion function $R(D^*)$ is the lower bound on the transmission bit-rate.
   - The minimization is conducted over all possible mappings $Q$ that satisfy the average distortion constraint.
   - $R(D^*)$ is measured in bits for $\mathrm{ld} = \log_2$.
10. **Discussion**
    - In information theory: maximize mutual information for efficient communication. In rate distortion theory: minimize mutual information.
    - In rate distortion theory the source is given, not the channel.
    - Problem addressed: determine the minimum rate at which information about the source must be conveyed to the user in order to achieve a prescribed fidelity.
    - Another view: given a prescribed distortion, what is the channel with the minimum capacity needed to convey the information?
    - An alternative definition follows by interchanging the roles of rate and distortion.
11. **Distortion Rate Function**
    - Definition: $D(R^*) = \min_{Q:\, I(Q) \le R^*} \{D(Q)\}$
    - For a given maximum average rate $R^*$, the distortion rate function $D(R^*)$ is the lower bound on the average distortion.
    - Here we can set $R^*$ to the capacity $C$ of the transmission channel and determine the minimum distortion for this ideal communication system.
12. **Properties of the Rate Distortion Function, I**
    - [Figure: $R(D)$ for a discrete amplitude source, falling from the point $(D_{\min} = 0,\ H(U))$ to $(D_{\max},\ H(U) - H(U \mid V) = 0)$.]
    - $R(D)$ is well defined for $D \in (D_{\min}, D_{\max})$.
    - For discrete amplitude sources, $D_{\min} = 0$.
    - $R(D) = 0$ if $D > D_{\max}$.
13. **Properties of the Rate Distortion Function, II**
    - $R(D)$ is always positive: $0 \le I(U;V) \le H(U)$.
    - $R(D)$ is non-increasing in $D$.
    - $R(D)$ is strictly convex downward in the range $(D_{\min}, D_{\max})$.
    - The slope of $R(D)$ is continuous in the range $(D_{\min}, D_{\max})$.
    - [Figure: a convex, non-increasing $R(D)$ curve over $0 \le D \le D_{\max}$.]
14. **Shannon Lower Bound**
    - It can be shown that $H(U - V \mid V) = H(U \mid V)$. Then we can write
      $R(D^*) = \min_{Q:\, D(Q) \le D^*} \{H(U) - H(U \mid V)\}$
      $= H(U) - \max_{Q:\, D(Q) \le D^*} \{H(U \mid V)\}$
      $= H(U) - \max_{Q:\, D(Q) \le D^*} \{H(U - V \mid V)\}$
    - Ideally, the source coder would produce distortions $u - v$ that are statistically independent of the reconstructed signal $v$ (not always possible!).
    - Shannon Lower Bound: $R(D^*) \ge H(U) - \max_{Q:\, D(Q) \le D^*} H(U - V)$
15. **$R(D^*)$ for a Memoryless Gaussian Source and MSE Distortion**
    - Gaussian source with variance $\sigma^2$; mean squared error (MSE) $D = E\{(u - v)^2\}$.
    - $R(D^*) = \frac{1}{2} \log_2 \frac{\sigma^2}{D^*}$, $\quad D(R^*) = \sigma^2\, 2^{-2R^*}$, $\quad R \ge 0$
    - $\mathrm{SNR} = 10 \log_{10} \frac{\sigma^2}{D} = 10 \log_{10} 2^{2R} \approx 6R$ [dB]. Rule of thumb: 6 dB ~ 1 bit.
    - The $R(D^*)$ curve for non-Gaussian sources with the same variance $\sigma^2$ always lies below this Gaussian $R(D^*)$ curve.
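The 6 dB/bit rule follows directly from $D(R^*) = \sigma^2\, 2^{-2R^*}$. A minimal numerical sketch; the variance and the rate grid are arbitrary choices:

```python
import numpy as np

sigma2 = 1.0                              # source variance (arbitrary)
R = np.array([0.5, 1.0, 2.0, 4.0])        # rates in bit/sample

D = sigma2 * 2.0 ** (-2.0 * R)            # D(R*) = sigma^2 * 2^(-2 R*)
SNR = 10.0 * np.log10(sigma2 / D)         # SNR in dB

# about 6.02 dB per bit: [3.01, 6.02, 12.04, 24.08]
print(SNR)
```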
16. **$R(D^*)$ Function for Gaussian Source with Memory, I**
    - Jointly Gaussian source with power spectrum $S_{uu}(\omega)$; MSE $D = E\{(u - v)^2\}$.
    - Parametric formulation of the $R(D^*)$ function:
      $D = \frac{1}{2\pi} \int_{-\pi}^{\pi} \min[D^*,\, S_{uu}(\omega)]\, d\omega$
      $R = \frac{1}{2\pi} \int_{-\pi}^{\pi} \max\!\left[0,\ \frac{1}{2} \log_2 \frac{S_{uu}(\omega)}{D^*}\right] d\omega$
    - The $R(D^*)$ curve for non-Gaussian sources with the same power spectral density is always lower.
17. **$R(D^*)$ Function for Gaussian Source with Memory, II**
    - [Figure: reverse water-filling against the power spectrum $S_{uu}(\omega)$. Where $S_{uu}(\omega) > D^*$, the spectrum $S_{vv}(\omega)$ is preserved and the reconstruction error spectrum is white noise at level $D^*$; where $S_{uu}(\omega) < D^*$, no signal is transmitted.]
18. **$R(D^*)$ Function for Gaussian Source with Memory, III**
    - ACF and PSD for a first-order autoregressive (AR(1)) Gauss-Markov process $U[n] = Z[n] + \rho\, U[n-1]$:
      $R_{uu}(k) = \sigma^2 \rho^{|k|}$, $\quad S_{uu}(\omega) = \frac{\sigma^2 (1 - \rho^2)}{1 - 2\rho \cos\omega + \rho^2}$
    - Rate distortion function, valid for $D^* \le \sigma^2 \frac{1 - \rho}{1 + \rho}$ (the minimum of $S_{uu}(\omega)$):
      $R(D^*) = \frac{1}{4\pi} \int_{-\pi}^{\pi} \log_2 \frac{S_{uu}(\omega)}{D^*}\, d\omega$
      $= \frac{1}{4\pi} \int_{-\pi}^{\pi} \log_2 \frac{\sigma^2 (1 - \rho^2)}{D^*}\, d\omega - \frac{1}{4\pi} \int_{-\pi}^{\pi} \log_2 (1 - 2\rho \cos\omega + \rho^2)\, d\omega$
      $= \frac{1}{2} \log_2 \frac{\sigma^2 (1 - \rho^2)}{D^*}$
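The closed form can be cross-checked against the parametric rate integral by numerically integrating over the AR(1) power spectrum. A sketch assuming NumPy; $\rho$, $\sigma^2$, and $D^*$ are arbitrary, with $D^*$ chosen below the minimum of the PSD so the closed form applies:

```python
import numpy as np

rho, sigma2, D_star = 0.9, 1.0, 1e-3     # arbitrary; D* < min_w S_uu(w)

# PSD of the AR(1) Gauss-Markov process
w = np.linspace(-np.pi, np.pi, 200_001)
S = sigma2 * (1 - rho**2) / (1 - 2 * rho * np.cos(w) + rho**2)

# (1/2pi) * integral of (1/2) log2(S/D*) over [-pi, pi]:
# the mean over a uniform grid approximates the normalized integral
R_num = np.mean(0.5 * np.log2(S / D_star))

# Closed form: R(D*) = (1/2) log2( sigma^2 (1 - rho^2) / D* )
R_closed = 0.5 * np.log2(sigma2 * (1 - rho**2) / D_star)
print(R_num, R_closed)                   # both about 3.785 bit
```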
19. **$R(D^*)$ Function for Gaussian Source with Memory, IV**
    - [Figure: $\mathrm{SNR} = 10 \log_{10}(\sigma^2 / D)$ in dB versus rate $R$ in bits, for $\rho = 0$, 0.5, 0.78, 0.9, 0.95, 0.99; larger $\rho$ yields higher SNR at a given rate.]
20. **Quantization Structure**
    - Quantizer: $u \to v$.
    - Alternative: coder ($\alpha$) / decoder ($\beta$) structure: $u \to \alpha \to i \to \beta \to v$.
    - Insert entropy coding ($\gamma$) and a transmission channel: $u \to \alpha \to i \to \gamma \to b \to \text{channel} \to b \to \gamma^{-1} \to i \to \beta \to v$.
21. **Scalar Quantization**
    - [Figure: staircase quantizer characteristic mapping the input signal $u$ to the output $v$, with $N$ reconstruction levels $v_k$ and $N-1$ decision thresholds $u_k$.]
    - Average distortion: $D = E\{d(U,V)\} = \sum_{k=0}^{N-1} \int_{u_k}^{u_{k+1}} d(u, v_k)\, f_U(u)\, du$
    - Assume MSE, $d(u, v_k) = (u - v_k)^2$: $\quad D = \sum_{k=0}^{N-1} \int_{u_k}^{u_{k+1}} (u - v_k)^2\, f_U(u)\, du$
    - Fixed vs. variable code word length: $R = \log_2 N$ vs. $R = -E\{\log_2 P(v)\}$.
22. **Lloyd-Max Quantizer**
    0. Given: a source distribution $f_U(u)$ and an initial set of reconstruction levels $\{v_k\}$.
    1. Encode given $\{v_k\}$ (nearest neighbor condition): $\alpha(u) = \arg\min_k \{d(u, v_k)\}$; for MSE, the decision thresholds are the midpoints $u_k = (v_{k-1} + v_k)/2$.
    2. Update the set of reconstruction levels given $\alpha$ (centroid condition): $v_k = \arg\min_{v_k} E\{d(u, v_k) \mid \alpha(u) = k\}$; for MSE, $v_k = \frac{\int_{u_k}^{u_{k+1}} u\, f_U(u)\, du}{\int_{u_k}^{u_{k+1}} f_U(u)\, du}$.
    3. Repeat steps 1 and 2 until convergence.
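The steps above can be sketched in sample-based form, replacing the source pdf by draws from a standard Gaussian; $N$, the initial levels, and the iteration count below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(0.0, 1.0, 200_000)        # training samples standing in for f_U(u)
N = 4
v = np.linspace(-1.5, 1.5, N)            # initial reconstruction levels

for _ in range(100):
    # Step 1: nearest-neighbor encoding (MSE -> midpoint thresholds)
    k = np.argmin(np.abs(u[:, None] - v[None, :]), axis=1)
    # Step 2: centroid condition, v_k = E{ u | alpha(u) = k }
    v = np.array([u[k == j].mean() for j in range(N)])

k = np.argmin(np.abs(u[:, None] - v[None, :]), axis=1)
D = np.mean((u - v[k]) ** 2)
print(np.round(v, 3), round(D, 4))       # levels near +-0.45 and +-1.51
```

For $N = 4$ and a unit-variance Gaussian, the converged levels and distortion should approach the known Lloyd-Max values ($\pm 0.4528$, $\pm 1.510$, $D \approx 0.1175$) up to sampling noise.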
23. **High-Resolution Approximations**
    - The pdf of $U$ is roughly constant over individual cells $C_k$: $f_U(u) \approx f_k$ for $u \in C_k$.
    - By the fundamental theorem of calculus: $P_k = \Pr(u \in C_k) = \int_{u_k}^{u_{k+1}} f_U(u)\, du \approx (u_{k+1} - u_k)\, f_k = f_k\, \Delta_k$
    - Approximate average distortion (MSE), with $v_k$ at the cell midpoint:
      $D = \sum_{k=0}^{N-1} \int_{u_k}^{u_{k+1}} (u - v_k)^2\, f_U(u)\, du \approx \sum_{k=0}^{N-1} f_k \int_{u_k}^{u_{k+1}} (u - v_k)^2\, du = \sum_{k=0}^{N-1} f_k\, \frac{\Delta_k^3}{12} = \frac{1}{12} \sum_{k=0}^{N-1} P_k\, \Delta_k^2$
24. **Uniform Quantization**
    - Reconstruction levels $\{v_k\}$ are uniformly spaced; the quantizer step size, i.e. the distance between reconstruction levels, is $\Delta$.
    - Average distortion, using $\sum_{k=0}^{N-1} P_k = 1$ and $\Delta_k = \Delta$:
      $D \approx \frac{1}{12} \sum_{k=0}^{N-1} P_k\, \Delta_k^2 = \frac{\Delta^2}{12} \sum_{k=0}^{N-1} P_k = \frac{\Delta^2}{12}$
    - Closed-form solutions for pdf-optimized uniform quantizers for a Gaussian RV exist only for $N = 2$ and $N = 3$; otherwise the optimization of $\Delta$ is conducted numerically.
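The $\Delta^2/12$ rule is easy to verify by simulation. A sketch with a uniformly distributed source, for which the flat-pdf-per-cell assumption is exact; the step size and source range are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
Delta = 0.25                                  # arbitrary step size
u = rng.uniform(0.0, 4.0, 1_000_000)          # flat pdf: high-resolution model exact

v = (np.floor(u / Delta) + 0.5) * Delta       # uniform quantizer, mid-cell levels
D = np.mean((u - v) ** 2)
print(D, Delta**2 / 12)                       # both about 0.00521
```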
25. **Panter and Dite Approximation**
    - Approximate solution for the optimized spacing of reconstruction and decision levels.
    - Assumptions: high resolution and a smooth pdf, so that $\Delta(u)\, \sqrt[3]{f_U(u)} \approx \text{const}$, i.e. $\Delta(u) \propto f_U^{-1/3}(u)$.
    - The optimal pdf of the reconstruction levels is not the same as that of the input.
    - Average distortion: $D \approx \frac{1}{12 N^2} \left( \int f_U^{1/3}(u)\, du \right)^3$
    - Operational distortion rate function for a Gaussian RV: $U \sim N(0, \sigma^2)$, $\quad D(R) \approx \frac{\sqrt{3}\, \pi}{2}\, \sigma^2\, 2^{-2R}$
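The Gaussian closed form can be checked against the general Panter-Dite integral numerically; a sketch assuming NumPy, where the integration grid and the rates are arbitrary choices:

```python
import numpy as np

sigma = 1.0
u = np.linspace(-12.0, 12.0, 2_000_001)                 # wide grid for N(0, 1)
f = np.exp(-u**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
integral = np.sum(f ** (1.0 / 3.0)) * (u[1] - u[0])     # integral of f^(1/3)

for R in (2, 4):                                        # fixed rate: N = 2^R levels
    N = 2 ** R
    D_pd = integral**3 / (12 * N**2)                    # Panter-Dite integral form
    D_closed = np.sqrt(3) * np.pi / 2 * sigma**2 * 2.0 ** (-2 * R)
    print(R, D_pd, D_closed)                            # the two columns agree
```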
26. **Entropy-Constrained Quantization**
    - So far, each reconstruction level is transmitted with a fixed code word length; now encode the reconstruction levels with variable code word lengths.
    - Constrained design criteria: $\min D$ s.t. $R < R_c$, or $\min R$ s.t. $D < D_c$.
    - Pose as an unconstrained optimization via the Lagrangian formulation: $\min D + \lambda R$.
    - [Figure: R-D plane with lines of constant cost $D + \lambda R$, of slope $-1/\lambda$, tangent to the operational rate distortion curve.]
    - For a given $\lambda$, an optimum is obtained corresponding to either $R_c$ or $D_c$: if $\lambda$ is small, then $D$ is small and $R$ large; if $\lambda$ is large, then $D$ is large and $R$ small.
    - Optimality holds also for functions that are neither continuous nor differentiable.
27. **Chou, Lookabaugh, and Gray Algorithm***
    0. Given: a source distribution $f_U(u)$, a set of reconstruction levels $\{v_k\}$, and a set of variable length code (VLC) words $\{\gamma_k\}$ with associated lengths $|\gamma_k|$.
    1. Encode given $\{v_k\}$ and $\{\gamma_k\}$: $\alpha(u) = \arg\min_k \{d(u, v_k) + \lambda\, |\gamma_k|\}$
    2. Update the VLC given $\alpha(u)$ and $\{v_k\}$: $|\gamma_k| = -\log_2 P(\alpha(u) = k)$
    3. Update the set of reconstruction levels given $\alpha(u)$ and $\{\gamma_k\}$: $v_k = \arg\min_v E\{d(u, v) \mid \alpha(u) = k\}$
    4. Repeat steps 1-3 until convergence.

    *Published 1989; originally proposed for vector quantization.
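A sample-based sketch of these steps for the scalar case, assuming NumPy; the training draws, $N = 8$ levels, $\lambda$, and the iteration count are arbitrary choices, and the ideal code lengths $-\log_2 P$ are used directly instead of an actual VLC:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.normal(0.0, 1.0, 100_000)            # training samples
N, lam = 8, 0.05                             # levels and Lagrange multiplier

v = np.linspace(-2.0, 2.0, N)                # step 0: initial reconstruction levels
ell = np.full(N, np.log2(N))                 # step 0: initial code lengths |l_k|

for _ in range(50):
    # Step 1: encode with the Lagrangian cost d(u, v_k) + lam * |l_k|
    cost = (u[:, None] - v[None, :]) ** 2 + lam * ell[None, :]
    k = np.argmin(cost, axis=1)
    # Step 2: update code lengths, |l_k| = -log2 P(alpha(u) = k)
    P = np.array([(k == j).mean() for j in range(N)])
    ell = -np.log2(np.maximum(P, 1e-12))     # guard against empty cells
    # Step 3: centroid update (keep the old level if a cell is empty)
    v = np.array([u[k == j].mean() if P[j] > 0 else v[j] for j in range(N)])

D = np.mean((u - v[k]) ** 2)                 # distortion
R = np.sum(P * ell)                          # rate = entropy of quantizer output
print(round(D, 4), round(R, 3))
```

With a larger $\lambda$ the iteration trades distortion for a lower output entropy; sweeping $\lambda$ traces out the operational rate distortion curve.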
28. **Entropy-Constrained Scalar Quantization: High-Resolution Approximations**
    - Assume uniform quantization with $P_k = f_k \Delta$:
      $R = -\sum_{k=0}^{N-1} P_k \log_2 P_k = -\sum_{k=0}^{N-1} f_k \Delta\, \log_2 (f_k \Delta)$
      $= -\sum_{k=0}^{N-1} f_k \log_2(f_k)\, \Delta - \sum_{k=0}^{N-1} f_k\, \Delta\, \log_2 \Delta$
      $\approx \underbrace{-\int f_U(u) \log_2 f_U(u)\, du}_{\text{differential entropy } h(U)} - \log_2 \Delta \underbrace{\int f_U(u)\, du}_{= 1} = h(U) - \log_2 \Delta$
    - Operational distortion rate function for a Gaussian RV: $U \sim N(0, \sigma^2)$, $\quad D(R) \approx \frac{\pi e}{6}\, \sigma^2\, 2^{-2R}$
    - It can be shown that for high resolution, uniform entropy-constrained scalar quantization is optimum.
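The constant $\pi e / 6$ quantifies the high-rate gap between entropy-constrained uniform quantization and the Gaussian distortion rate bound $D(R) = \sigma^2 2^{-2R}$; a one-line check:

```python
import numpy as np

# High-rate ECSQ loses a factor pi*e/6 relative to the Gaussian D(R) bound
gap_dB = 10 * np.log10(np.pi * np.e / 6)      # about 1.53 dB
gap_bits = 0.5 * np.log2(np.pi * np.e / 6)    # about 0.25 bit/sample
print(gap_dB, gap_bits)
```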
29. **Comparison for Gaussian Sources**
    - [Figure: $\mathrm{SNR} = 10 \log_{10}(\sigma^2 / D)$ versus rate for $R(D^*)$ with $\rho = 0.9$, $R(D^*)$ with $\rho = 0$, Lloyd-Max, uniform fixed-rate, the Panter & Dite approximation, and entropy-constrained optimized quantization.]
30. **Vector Quantization**
    - So far, scalars have been quantized; now encode vectors, i.e. ordered sets of scalars.
    - Gains over scalar quantization (Lookabaugh and Gray 1989):
      - Space-filling advantage: the $\mathbb{Z}$ lattice is not the most efficient sphere packing in $K$ dimensions ($K > 1$); independent of the source distribution or statistical dependencies; maximum gain for $K \to \infty$: 1.53 dB.
      - Shape advantage: exploits the shape of the source pdf; can also be exploited using entropy-constrained scalar quantization.
      - Memory advantage: exploits statistical dependencies of the source; can also be exploited using DPCM, transform coding, or block entropy coding.
31. **Comparison for Gauss-Markov Source, $\rho = 0.9$**
    - [Figure: SNR versus rate for $R(D^*)$ with $\rho = 0.9$, VQ with $K = 100$, 10, 5, and 2, the Panter & Dite approximation, and entropy-constrained optimized quantization; VQ approaches $R(D^*)$ as $K$ grows.]
32. **Vector Quantization II**
    - Vector quantizers can achieve $R(D^*)$ as $K \to \infty$.
    - Complexity requirements: storage and computation; delay.
    - Impose structural constraints that reduce complexity: tree-structured, transform, multistage, etc.; lattice codebook VQ.
    - [Figure: 2-D source pdf over amplitudes 1 and 2, partitioned into quantization cells, each with a representative codebook vector.]
33. **Summary**
    - Rate distortion theory: minimum bit-rate for a given distortion.
    - $R(D^*)$ for a memoryless Gaussian source and MSE: 6 dB/bit.
    - $R(D^*)$ for a Gaussian source with memory and MSE: encode spectral components independently, introduce white noise, suppress small spectral components.
    - Lloyd-Max quantizer: minimum MSE distortion for a given number of representative levels.
    - Variable length coding: additional gains by entropy-constrained quantization.
    - Minimum mean squared error for a given entropy: uniform quantizer (for fine quantization!).
    - Vector quantizers can achieve $R(D^*)$ as $K \to \infty$. Are we done? No! The complexity of vector quantizers is the issue.
    - Design a coding system with optimum rate distortion performance such that the delay, complexity, and storage requirements are met.