Vector quantization has several advantages over scalar quantization for data compression:
1) Vector quantization groups input symbols into vectors and processes them together, while scalar quantization treats each symbol separately, which is less efficient.
2) Vector quantization increases quantizer optimality and provides more flexibility for modification compared to scalar quantization.
3) Vector quantization can lower average distortion for the same number of reconstruction levels, or increase reconstruction levels for the same distortion, which scalar quantization cannot do.
2. Introduction To Quantization
1) The process of representing a large, possibly infinite, set of values
with a much smaller set is called quantization.
2) A simple quantization scheme would be to represent each output of
the source with the integer value closest to it.
3) Eg:
Consider a source that generates numbers between -10.0 and 10.0. Then,
if a source output is 2.47, we would represent it as 2, and if the source
output is 3.1415926, we would represent it as 3.
4) In the previous example we lost the original value of the
source output forever.
5) If we are told that the reconstruction value is 3, we cannot tell whether the
source output was 2.95, 3.16, 3.057932 or any other of an infinite set of values. In
other words, we have lost some information.
6) This loss of information is the reason for the use of the word “lossy” in
lossy compression schemes.
7) The inputs and outputs of a quantizer can be scalars or vectors.
If they are scalars, we call the quantizer a scalar quantizer.
If they are vectors, we call it a vector quantizer.
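The rounding scheme from the example above can be sketched in a few lines of Python (the input range of -10.0 to 10.0 is assumed from the text; `round` is plain Python rounding to the nearest integer):

```python
def quantize(x: float) -> int:
    """Represent a source output by the integer value closest to it."""
    return round(x)

# Many distinct source outputs map to the same reconstruction value,
# so the original value cannot be recovered: the scheme is lossy.
print(quantize(2.47))       # 2.47 is represented as 2
print(quantize(3.1415926))  # 3.1415926 is represented as 3
```

Note that 2.95, 3.16, and 3.057932 all quantize to 3, which is exactly the information loss described in point 5).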
4. ➔ The most common type of quantization is scalar quantization. It is
typically denoted y = Q(x): the process of using a quantization function Q(
) to map a scalar input value x to a scalar output value y.
➔ Being a special case of vector quantization, scalar quantization deals with
quantizing a string of symbols (i.e., random variables) by addressing one
symbol at a time. Although, as one would expect, this is not optimal and will not
approach any theoretical limits, scalar quantization is a rather simple
technique that can be easily implemented in hardware.
Scalar Quantization
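A minimal sketch of a uniform scalar quantizer y = Q(x); the step size `delta` and the midpoint reconstruction rule are illustrative assumptions, not specified in the text:

```python
import math

def Q(x: float, delta: float = 0.5) -> float:
    """Map a scalar input x to the midpoint of its quantization interval."""
    index = math.floor(x / delta)   # which interval x falls into
    return (index + 0.5) * delta    # reconstruction value: interval midpoint

print(Q(3.1415926))  # -> 3.25 with delta = 0.5
```

The only parameter available here is the interval size `delta`, which is the limitation of scalar quantization revisited in the comparison table below.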
5. Vector quantization, also called "block quantization" or "pattern matching
quantization", is often used in lossy data compression. It works by encoding values
from a multidimensional vector space into a finite set of values from a discrete
subspace of lower dimension. A lower-space vector requires less storage space, so
the data is compressed.
➔ The transformation is usually done by projection or by using a codebook.
➔ The amount of compression will be described in terms of the rate, which will be
measured in bits per sample.
Vector Quantization
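A sketch of codebook-based vector quantization and the rate measured in bits per sample. The codebook below is a made-up example for illustration; in practice codebooks are designed from training data (e.g., with the LBG/k-means algorithm):

```python
import math

# Assumed example codebook: K = 4 codewords of dimension n = 2.
codebook = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0), (1.0, -1.0)]

def encode(vec):
    """Return the index of the nearest codeword (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, codebook[i])))

def decode(index):
    """Reconstruction: look the codeword up in the codebook."""
    return codebook[index]

# Rate: log2(K) bits identify a codeword, spread over n samples per vector.
K, n = len(codebook), 2
rate = math.log2(K) / n
print(encode((0.9, 1.2)), rate)  # nearest codeword index 1, rate 1.0 bit/sample
```

Only the index is transmitted, so a large vector is replaced by log2(K) bits, which is how the compression arises.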
7. Advantages of Vector Quantization over Scalar Quantization
| VECTOR QUANTIZATION | SCALAR QUANTIZATION |
|---|---|
| The input symbols are grouped together into vectors, which are then processed to give the output. | Each input symbol is treated separately to produce the output. |
| Increases the optimality of the quantizer. | Decreases the optimality of the quantizer. |
| More efficient. | Less efficient. |
| The granular error is affected by both the size and the shape of the quantization region. | The granular error is determined only by the size of the quantization interval. |
| Provides more flexibility towards modification. | Does not provide flexibility towards modification. |
| Can lower the average distortion with the number of reconstruction levels held constant, and vice versa. | Cannot trade distortion against reconstruction levels in this way. |
| Has improved performance even when there is no sample-to-sample dependence in the input. | This is not the case. |
| When we divide the input into vectors of some length n, the quantization regions are no longer restricted to be rectangles or squares; we have the freedom to divide the range of the inputs in an infinite number of ways. | In one dimension, the quantization regions are restricted to be intervals (i.e., output points are restricted to rectangular grids), and the only parameter we can manipulate is the size of the interval. |
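A toy illustration of the last two rows, with assumed data and codebooks (not from the text): for a source with sample-to-sample dependence, a vector quantizer at the same rate of 1 bit per sample can achieve lower distortion than a scalar quantizer, because its codewords need not lie on a rectangular grid:

```python
# A perfectly correlated source: every pair lies on the diagonal x2 = x1.
pairs = [(-1.5, -1.5), (-0.5, -0.5), (0.5, 0.5), (1.5, 1.5)]

# Scalar quantizer: 2 levels per sample -> 1 bit/sample, a 4-point grid.
sq_levels = [-1.0, 1.0]
def sq(x):
    return min(sq_levels, key=lambda l: (x - l) ** 2)

# Vector quantizer: 4 codewords per pair -> 2 bits / 2 samples = 1 bit/sample,
# with codewords placed along the diagonal where the source actually lives.
vq_codebook = [(-1.5, -1.5), (-0.5, -0.5), (0.5, 0.5), (1.5, 1.5)]
def vq(p):
    return min(vq_codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))

def mse(recon):
    """Mean squared error per sample between the source pairs and recon."""
    return sum(sum((a - b) ** 2 for a, b in zip(p, r))
               for p, r in zip(pairs, recon)) / (2 * len(pairs))

sq_mse = mse([(sq(a), sq(b)) for a, b in pairs])
vq_mse = mse([vq(p) for p in pairs])
print(sq_mse, vq_mse)  # the scalar quantizer incurs distortion; the VQ does not
```

The scalar quantizer is stuck with a rectangular grid of output points, while the vector quantizer is free to shape its regions to the source, which is the geometric advantage the table describes.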