Lecture Notes on Quantization
for
Open Educational Resource
on
Data Compression (CA209)
by
Dr. Piyush Charan
Assistant Professor
Department of Electronics and Communication Engg.
Integral University, Lucknow
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Unit 5-Syllabus
• Quantization
– Vector Quantization,
– Advantages of Vector Quantization over Scalar
Quantization,
– The Linde-Buzo-Gray Algorithm,
– Tree-structured Vector Quantizers,
– Structured Vector Quantizers
02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 3
Introduction
• Quantization is one of the most efficient tools for lossy
compression.
• It reduces the number of bits required to represent the source.
• In lossy compression applications, we represent each source
output using one of a small number of codewords.
• The number of distinct source output values is generally
much larger than the number of codewords available to
represent them.
• The process of mapping the large set of distinct source output
values to a much smaller set is called quantization.
Introduction contd…
• The inputs and outputs of a quantizer can be
either scalars or vectors.
Types of Quantization
• Scalar Quantization: The most common type of
quantization is scalar quantization. Scalar quantization,
typically denoted y = Q(x), is the process of using a
quantization function Q(x) to map an input value x to a scalar
output value y.
• Vector Quantization: A vector quantizer maps k-
dimensional vectors in the vector space Rk into a finite set of
vectors Y = {Yi : i = 1, 2, …, N}. Each vector Yi is called a code
vector or a codeword, and the set of all the codewords is called a
codebook.
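As a concrete illustration of the two definitions (a minimal sketch, not from the slides; the step size, codebook, and sample values below are made up):

```python
import numpy as np

def scalar_quantize(x, step=0.5):
    """Uniform scalar quantizer y = Q(x): each sample is mapped
    independently to the nearest multiple of `step`."""
    return step * np.round(x / step)

def vector_quantize(X, codebook):
    """Vector quantizer: map each k-dimensional row of X to the nearest
    codeword Yi in the codebook (squared Euclidean distance)."""
    # d[i, j] = ||X[i] - codebook[j]||^2 via broadcasting
    d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d.argmin(axis=1)        # encoder output: codeword indices
    return codebook[idx], idx     # decoder output: reconstructed vectors

print(scalar_quantize(np.array([0.1, 0.7, -0.4])))   # 0.0, 0.5, -0.5

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])        # N = 2 codewords, k = 2
recon, idx = vector_quantize(np.array([[0.2, 0.1], [0.9, 1.2]]), codebook)
print(idx)                                           # [0 1]
```

Note that the scalar quantizer treats every sample on its own, while the vector quantizer decides on whole k-dimensional blocks at once.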
Vector Quantization
• VQ is a lossy data compression method based on the principle of
block coding: it quantizes blocks of data instead of individual
signal samples.
• VQ exploits the correlation existing between neighboring signal
samples by quantizing them together.
• VQ is one of the most widely used and efficient techniques for image
compression.
• Over the last few decades, VQ has received great attention in the field
of multimedia data compression because it has a simple decoding
structure and can provide high compression ratios.
Vector Quantization contd…
• VQ-based image compression has three major steps, namely:
1. Codebook design
2. VQ encoding process
3. VQ decoding process
• In VQ-based image compression, the image is first decomposed into non-
overlapping sub-blocks, and each sub-block is converted into a one-
dimensional vector termed a training vector.
• From the training vectors, a set of representative vectors is selected to
represent the entire set of training vectors.
• The set of representative training vectors is called a codebook, and each
representative training vector is called a codeword or code vector.
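The decomposition step above can be sketched as follows (the 8×8 test image and the 4×4 block size are illustrative assumptions):

```python
import numpy as np

def image_to_training_vectors(img, block=4):
    """Decompose a grayscale image into non-overlapping block x block
    sub-blocks and flatten each one into a 1-D training vector."""
    h, w = img.shape
    h, w = h - h % block, w - w % block      # crop so blocks tile exactly
    img = img[:h, :w]
    return (img.reshape(h // block, block, w // block, block)
               .swapaxes(1, 2)               # gather each block's rows together
               .reshape(-1, block * block))  # one row per training vector

img = np.arange(64).reshape(8, 8)            # toy 8x8 "image"
tv = image_to_training_vectors(img, block=4)
print(tv.shape)                              # (4, 16)
```

Each of the four 4×4 blocks becomes one 16-dimensional training vector, which is exactly the form the codebook design step consumes.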
Vector Quantization contd…
• The goal of VQ codebook generation is to find an optimal codebook that yields the
lowest possible distortion when compared with all other codebooks of the same size.
• The performance of a VQ-based image compression technique depends upon the
constructed codebook.
• The search complexity increases with the number of vectors in the codebook; to
minimize the search complexity, tree-structured vector quantization schemes were
introduced.
• The number of code vectors N depends on two parameters: the rate R and the dimension L.
• The number of code vectors is calculated using the following formula:
Number of code vectors (N) = 2^(R×L)
where
R → rate in bits/pixel,
L → dimension (number of samples per vector)
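For example, for 4×4 blocks (L = 16) coded at R = 0.25 bits/pixel (illustrative values), the formula gives:

```python
# N = 2^(R*L): 4x4 blocks give L = 16; at R = 0.25 bits/pixel
R, L = 0.25, 16
N = int(2 ** (R * L))
print(N)   # 16 codewords -> each transmitted index costs R*L = 4 bits
```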
Vector Quantization Process
Difference between Vector and Scalar
Quantization
• ⇢ 1: Vector Quantization can lower the average distortion while the
number of reconstruction levels is held constant, whereas Scalar Quantization
cannot.
• ⇢ 2: Vector Quantization can reduce the number of reconstruction levels
while distortion is held constant, whereas Scalar Quantization cannot.
• ⇢ 3: The most significant way Vector Quantization improves
performance over Scalar Quantization is by exploiting the statistical
dependence among the scalars in a block.
• ⇢ 4: Vector Quantization is also more effective than Scalar Quantization
even when the source output values are not correlated.
Difference between Vector and Scalar
Quantization contd…
• ⇢ 5: In Scalar Quantization, in one dimension the quantization regions
are restricted to be intervals (i.e., in multiple dimensions the output points are
restricted to rectangular grids), and the only parameter we can manipulate is the size of
the interval. In Vector Quantization, when we divide the input into
vectors of some length n, the quantization regions are no longer restricted to
be rectangles or squares; we have the freedom to divide the range of the
inputs in an infinite number of ways.
• ⇢ 6: In Scalar Quantization, the granular error is affected only by the size of the
quantization interval, while in Vector Quantization the granular error is
affected by both the shape and the size of the quantization region.
• ⇢ 7: Vector Quantization provides more flexibility towards modification
than Scalar Quantization, and this flexibility
increases with increasing dimension.
Difference between Vector and Scalar
Quantization contd…
• ⇢ 8: Vector Quantization gives improved performance when there is
sample-to-sample dependence in the input, whereas Scalar
Quantization does not.
• ⇢ 9: Vector Quantization gives improved performance even when there is
no sample-to-sample dependence in the input, whereas Scalar
Quantization does not.
• ⇢ 10: Describing the decision boundaries between reconstruction
levels is easier in Scalar Quantization than in Vector Quantization.
Advantages of Vector Quantization
over Scalar Quantization
• Vector Quantization provides flexibility in choosing
multidimensional quantizer cell shapes and in choosing a
desired codebook size.
• An advantage of VQ over SQ is that fractional resolution
(bits per sample) can be achieved, which is very important for low-bit-rate
applications where low resolution is sufficient.
• For a given rate, VQ results in lower distortion than SQ.
• VQ can exploit the memory of the source better than SQ.
Linde-Buzo Gray Algorithm
• The need for multi-dimensional integration in the
design of a vector quantizer was a challenging problem
in the early days.
• The main concept is to divide the training vectors into groups,
find the most representative vector for each group, and then
gather these representatives to form a codebook. The inputs are
no longer scalars in the LBG algorithm.
LBG Algorithm
1. Divide the image into blocks; each block is then viewed as a k-dimensional
vector.
2. Arbitrarily choose an initial codebook and treat its codewords as
centroids. Group the remaining vectors: vectors belong to the same group when they
have the same nearest centroid.
3. Find the new centroid of every group to get a new codebook.
Repeat steps 2 and 3 until the centroids of all the groups converge.
• Thus, at every iteration the codebook becomes progressively better. This
process is continued until there is no change in the overall distortion.
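The steps above can be sketched as a k-means-style loop (a minimal illustration under stated assumptions: the random initialization, iteration cap, and toy data below are made up, not the lecture's implementation):

```python
import numpy as np

def lbg(training, N, iters=100, seed=0):
    """LBG codebook design: alternate nearest-centroid grouping (step 2)
    and centroid update (step 3) until the codebook stops changing."""
    rng = np.random.default_rng(seed)
    # step 2: arbitrarily choose N training vectors as the initial codebook
    codebook = training[rng.choice(len(training), N, replace=False)].astype(float)
    for _ in range(iters):
        # group vectors by nearest centroid
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        groups = d.argmin(axis=1)
        # step 3: the centroid of each group becomes the new codeword
        new = np.array([training[groups == j].mean(axis=0)
                        if np.any(groups == j) else codebook[j]
                        for j in range(N)])
        if np.allclose(new, codebook):   # converged
            break
        codebook = new
    return codebook

blocks = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
codebook = lbg(blocks, N=2)
```

On this toy training set, the two codewords converge to the means of the two clusters.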
Initializing the LBG Algorithm
• The important thing to consider is a good set of initial
quantization points that will guarantee convergence: the LBG
algorithm guarantees that the distortion from one iteration to the next will
not increase.
• The performance of the LBG algorithm depends heavily on the initial
codebook.
• We will use the splitting technique to design the initial codebook. Other
initialization approaches include:
1. Random selection (Hilbert's approach)
2. Pairwise Nearest Neighbor (PNN) method
Empty Cell Problem
• What do we do if one of the reconstruction or quantization regions in
some iteration is empty?
• There might be no points that are closer to a given reconstruction
point than to any other reconstruction point.
• This is a problem because, in order to update an output point, we need to
take the average of the input vectors assigned to it.
• In this case we would end up with an output point that is never used.
• A common solution to the empty cell problem is to remove an output
point that has no inputs associated with it and replace it with a point
from the quantization region with the most training points.
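One way to sketch this fix in code (the farthest-member choice below is one reasonable heuristic for picking the replacement point, not prescribed by the slides):

```python
import numpy as np

def fix_empty_cells(training, codebook, groups):
    """Replace any codeword with no assigned training vectors by a point
    taken from the quantization region with the most training points."""
    counts = np.bincount(groups, minlength=len(codebook))
    for j in np.where(counts == 0)[0]:
        busiest = int(counts.argmax())
        members = training[groups == busiest]
        # heuristic: move the unused codeword to the member farthest from
        # the busy codeword, effectively splitting the crowded region
        d = ((members - codebook[busiest]) ** 2).sum(axis=1)
        codebook[j] = members[d.argmax()]
        counts[busiest] -= 1
        counts[j] += 1
    return codebook

training = np.array([[0., 0.], [1., 1.], [2., 2.]])
codebook = np.array([[0., 0.], [100., 100.]])   # codeword 1 captures nothing
groups = np.array([0, 0, 0])                    # every input maps to codeword 0
print(fix_empty_cells(training, codebook, groups)[1])   # [2. 2.]
```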
Tree Structure Vector Quantization
• Another fast codebook design technique is tree-structured VQ,
presented by Buzo et al.
• The number of operations can be reduced by enforcing a certain
structure on the codebook.
• One such possibility is using a tree structure, which turns the
codebook into a tree codebook; the method is called binary search
clustering.
Tree Structure Vector Quantization
• The disadvantage of tree search is that we might not end up with the reconstruction
point that is closest to the input, so the distortion will be a little higher compared to a full-search
quantizer.
• The storage requirement will also be larger, since we have to store all the test vectors too.
How to design TSVQ
1. Obtain the average of all the training vectors, perturb it to obtain a
second vector, and use these vectors to form a two-level VQ.
2. Call these vectors v0 and v1, and call the groups of training-set vectors that would
be quantized to each of them g0 and g1.
3. Perturb v0 and v1 to get the initial vectors for a four-level VQ.
4. Use g0 to design one two-level VQ and g1 to design another two-
level VQ.
5. Label the resulting vectors v00, v01, v10, v11.
6. Split g0 using v00 and v01 into two groups g00 and g01.
7. Split g1 using v10 and v11 into two groups g10 and g11.
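Steps 1 and 2 above can be sketched as follows (the perturbation size eps and the toy training set are assumptions; steps 3-7 would recurse on g0 and g1 in the same way):

```python
import numpy as np

def split(codeword, eps=1e-3):
    """Perturb a codeword to obtain a second one (the splitting step)."""
    return codeword - eps, codeword + eps

def two_level_vq(training, v0, v1, iters=20):
    """Design a two-level VQ from two initial vectors: regroup around the
    nearest vector, then recompute each group's mean, and repeat."""
    for _ in range(iters):
        d0 = ((training - v0) ** 2).sum(axis=1)
        d1 = ((training - v1) ** 2).sum(axis=1)
        g0, g1 = training[d0 <= d1], training[d0 > d1]
        if len(g0): v0 = g0.mean(axis=0)
        if len(g1): v1 = g1.mean(axis=0)
    return (v0, g0), (v1, g1)

# steps 1-2: perturb the global mean into v0, v1 and build a two-level VQ
training = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
(v0, g0), (v1, g1) = two_level_vq(training, *split(training.mean(axis=0)))
# steps 3-7 would recurse: split v0 over g0 and v1 over g1 for four levels
```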
Pruned Tree- structured Vector Quantizer
• Once we have designed a tree-structured codebook, we can
improve its rate-distortion performance by pruning: removing
carefully selected subgroups reduces the size of the
codebook and thus the rate.
• Pruning may increase the distortion, so the main objective of
pruning is to remove those groups that result in the best
trade-off between rate and distortion.
• Prune the tree by finding the subtree T that minimizes λT:
λT = (change in distortion if subtree T is pruned) / (change in rate if subtree T is pruned)
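A minimal sketch of this selection rule; the subtree names and the measured changes in distortion and rate are hypothetical numbers for illustration:

```python
def prune_candidate(subtrees):
    """Pick the subtree whose removal gives the best trade-off:
    minimize lambda_T = (change in distortion) / (change in rate)."""
    return min(subtrees, key=lambda t: t["d_dist"] / t["d_rate"])

# hypothetical subtrees with measured effects of pruning each one
subtrees = [
    {"name": "T1", "d_dist": 4.0, "d_rate": 1.0},   # 4.0 extra distortion per bit saved
    {"name": "T2", "d_dist": 1.5, "d_rate": 1.0},   # cheapest distortion per bit saved
]
print(prune_candidate(subtrees)["name"])   # T2
```

In a full pruner this selection would be repeated, re-measuring λT after each removal, until the target rate is reached.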
Structured Vector Quantization
• Structured codes impose a structure on the codebook that allows for reduced implementation
complexity; they also constrain the codewords or the codeword search.
• Let L be the dimension of the VQ. If R is the bit rate, then L·2^(R×L) scalars need to be stored.
Also, L·2^(R×L) scalar distortion calculations are required.
• The solution is to introduce some form of structure in the codebook and also in the quantization
process.
• The disadvantage of structured VQ is an inevitable loss in rate-distortion performance.
• Different types of structured vector quantizers are:
1. Lattice quantizers
2. Tree-structured codes
3. Multistage codes
4. Product codes: gain/shape codes
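To get a sense of scale for the cost above (illustrative values, not from the slides), even a modest rate and dimension already make an unstructured codebook noticeably expensive:

```python
# Unstructured VQ cost L * 2^(R*L): scalars stored, and scalar
# multiply-adds per input vector, for dimension L and rate R
L, R = 16, 0.25
cost = int(L * 2 ** (R * L))
print(cost)   # 256
```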
Lattice Vector Quantizer
• A VQ codebook designed using the LBG
algorithm complicates the
quantization process and has no
visible structure.
• An alternative is a lattice point
quantizer, since it admits a
fast encoding algorithm.
• For a bit rate of n bits/sample and
spatial dimension v, the number of
codebook vectors, or equivalently of
lattice points used, is 2^(nv).
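For the simplest lattice, the integer lattice Z^n, the fast encoding is just coordinate-wise rounding; no codebook search is needed (a minimal sketch):

```python
import numpy as np

def lattice_quantize(x):
    """Quantize to the integer lattice Z^n: the nearest lattice point is
    found by rounding each coordinate -- no codebook search is needed."""
    return np.round(x)

print(lattice_quantize(np.array([0.3, 1.7, -2.4])))   # [ 0.  2. -2.]
```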
How are tree structured vector quantizers
better?
• Tree-structured vector quantization (TSVQ) reduces complexity
by imposing a hierarchical structure on the partitioning. Optimal
tree-structured vector quantizers minimize the
expected distortion subject to cost functions related to storage cost,
encoding rate, or quantization time.
Thanks!!
Dr. Piyush Charan
Assistant Professor,
Department of ECE,
Integral University, Lucknow
Email: er.piyush.charan@gmail.com, piyush@iul.ac.in

Unit 5 Global Issues- Early life of Prophet Muhammad
 
Unit 4 Engineering Ethics
Unit 4 Engineering EthicsUnit 4 Engineering Ethics
Unit 4 Engineering Ethics
 
Unit 3 Professional Responsibility
Unit 3 Professional ResponsibilityUnit 3 Professional Responsibility
Unit 3 Professional Responsibility
 
Unit 5 oscillators and voltage regulators
Unit 5 oscillators and voltage regulatorsUnit 5 oscillators and voltage regulators
Unit 5 oscillators and voltage regulators
 
Unit 4 feedback amplifiers
Unit 4 feedback amplifiersUnit 4 feedback amplifiers
Unit 4 feedback amplifiers
 

Recently uploaded

H.Seo, ICLR 2024, MLILAB, KAIST AI.pdf
H.Seo,  ICLR 2024, MLILAB,  KAIST AI.pdfH.Seo,  ICLR 2024, MLILAB,  KAIST AI.pdf
H.Seo, ICLR 2024, MLILAB, KAIST AI.pdf
MLILAB
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
zwunae
 
Fundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptxFundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptx
manasideore6
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
Osamah Alsalih
 
space technology lecture notes on satellite
space technology lecture notes on satellitespace technology lecture notes on satellite
space technology lecture notes on satellite
ongomchris
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
R&R Consult
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
VENKATESHvenky89705
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
obonagu
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
thanhdowork
 
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
bakpo1
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
Kamal Acharya
 
The Benefits and Techniques of Trenchless Pipe Repair.pdf
The Benefits and Techniques of Trenchless Pipe Repair.pdfThe Benefits and Techniques of Trenchless Pipe Repair.pdf
The Benefits and Techniques of Trenchless Pipe Repair.pdf
Pipe Restoration Solutions
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
Pratik Pawar
 
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
ydteq
 
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdfGoverning Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
WENKENLI1
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
AmarGB2
 
Cosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdfCosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdf
Kamal Acharya
 
block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
Divya Somashekar
 
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdfAKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
SamSarthak3
 
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
AJAYKUMARPUND1
 

Recently uploaded (20)

H.Seo, ICLR 2024, MLILAB, KAIST AI.pdf
H.Seo,  ICLR 2024, MLILAB,  KAIST AI.pdfH.Seo,  ICLR 2024, MLILAB,  KAIST AI.pdf
H.Seo, ICLR 2024, MLILAB, KAIST AI.pdf
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
 
Fundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptxFundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptx
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
 
space technology lecture notes on satellite
space technology lecture notes on satellitespace technology lecture notes on satellite
space technology lecture notes on satellite
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
 
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
 
The Benefits and Techniques of Trenchless Pipe Repair.pdf
The Benefits and Techniques of Trenchless Pipe Repair.pdfThe Benefits and Techniques of Trenchless Pipe Repair.pdf
The Benefits and Techniques of Trenchless Pipe Repair.pdf
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
 
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
 
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdfGoverning Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
 
Cosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdfCosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdf
 
block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
 
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdfAKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
 
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
 

Unit 5 Quantization

  • 1. Lecture Notes on Quantization for Open Educational Resource on Data Compression(CA209) by Dr. Piyush Charan Assistant Professor Department of Electronics and Communication Engg. Integral University, Lucknow This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
  • 2. Unit 5-Syllabus • Quantization – Vector Quantization, – Advantages of Vector Quantization over Scalar Quantization, – The Linde-Buzo-Gray Algorithm, – Tree-structured Vector Quantizers, – Structured Vector Quantizers 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 3
  • 3. Introduction • Quantization is one of the most efficient tools for lossy compression. • It can reduce the number of bits required to represent the source. • In lossy compression applications, we represent each source output using one of a small number of codewords. • The number of distinct source output values is generally much larger than the number of codewords available to represent them. • The process of mapping this large set of distinct output values to a much smaller set is called quantization.
  • 4. Introduction contd… • The inputs and outputs of a quantizer can be scalars or vectors.
  • 5. Types of Quantization • Scalar Quantization: The most common type of quantization is scalar quantization. Scalar quantization, typically denoted as y = Q(x), is the process of using a quantization function Q(x) to map an input value x to a scalar output value y. • Vector Quantization: A vector quantizer maps k-dimensional vectors in the vector space R^k into a finite set of vectors Y = {Yi : i = 1, 2, …, N}. Each vector Yi is called a code vector or a codeword, and the set of all the codewords is called a codebook.
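The two definitions above can be made concrete with a small sketch (the step size, codebook, and sample values below are illustrative, not from the slides):

```python
def scalar_quantize(x, step):
    """Uniform scalar quantizer y = Q(x): snap a sample to the
    nearest multiple of the step size."""
    return step * round(x / step)

def vq_encode(vec, codebook):
    """Vector quantizer: return the index of the nearest code vector Yi
    under squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, codebook[i])))

# Scalar: each sample is mapped independently onto a 0.5 grid
print([scalar_quantize(v, 0.5) for v in (0.23, 1.87, -0.61)])
# Vector: the whole 2-D sample maps to one codeword index
print(vq_encode((0.9, 1.1), [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]))
```

Note how the vector quantizer outputs a single index for the whole block, which is what the encoder transmits.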
  • 6. Vector Quantization • VQ is a lossy data compression method based on the principle of block coding: it quantizes blocks of data instead of individual signal samples. • VQ exploits the correlation existing between neighboring signal samples by quantizing them together. • VQ is one of the most widely used and efficient techniques for image compression. • Over the last few decades in the field of multimedia data compression, VQ has received great attention because it has a simple decoding structure and can provide high compression ratios.
  • 7. Vector Quantization contd… • VQ-based image compression has three major steps, namely: 1. Codebook design 2. VQ encoding 3. VQ decoding. • In VQ-based image compression, the image is first decomposed into non-overlapping sub-blocks, and each sub-block is converted into a one-dimensional vector termed a training vector. • From the training vectors, a set of representative vectors is selected to represent the entire set of training vectors. • The set of representative training vectors is called a codebook, and each representative training vector is called a codeword or code vector.
  • 8. Vector Quantization contd… • The goal of VQ codebook generation is to find an optimal codebook that yields the lowest possible distortion when compared with all other codebooks of the same size. • The performance of VQ-based image compression depends upon the constructed codebook. • The search complexity increases with the number of vectors in the codebook; to minimize the search complexity, tree-search vector quantization schemes were introduced. • The number of code vectors N depends on two parameters, the rate R and the dimension L, and is calculated using the following formula: N = 2^(R×L), where R → rate in bits/pixel and L → dimension.
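The formula N = 2^(R×L) can be checked with a quick calculation (the block size and rate below are illustrative choices, not from the slides):

```python
def codebook_size(rate, dim):
    """Number of code vectors N = 2**(R*L), for rate R in bits/pixel
    and vector dimension L."""
    return round(2 ** (rate * dim))

# e.g. quantizing 4x4 image blocks (L = 16) at R = 0.5 bit/pixel
print(codebook_size(0.5, 16))  # 256 code vectors
```

Note that a fractional rate such as 0.5 bit/pixel still yields an integer codebook size, which is one reason VQ is attractive at low bit rates.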
  • 9. Vector Quantization Process
  • 10. Difference between Vector and Scalar Quantization • ⇢ 1: Vector Quantization can lower the average distortion with the number of reconstruction levels held constant, while Scalar Quantization cannot. • ⇢ 2: Vector Quantization can reduce the number of reconstruction levels when distortion is held constant, while Scalar Quantization cannot. • ⇢ 3: The most significant way Vector Quantization can improve performance over Scalar Quantization is by exploiting the statistical dependence among the scalars in a block. • ⇢ 4: Vector Quantization is also more effective than Scalar Quantization when the source output values are not correlated.
  • 11. Difference between Vector and Scalar Quantization contd… • ⇢ 5: In Scalar Quantization, in one dimension, the quantization regions are restricted to be intervals (i.e., the output points are restricted to a rectangular grid), and the only parameter we can manipulate is the size of the interval. In Vector Quantization, when we divide the input into vectors of some length n, the quantization regions are no longer restricted to be rectangles or squares; we have the freedom to divide the range of the inputs in an infinite number of ways. • ⇢ 6: In Scalar Quantization, the granular error is affected by the size of the quantization interval only, while in Vector Quantization the granular error is affected by both the shape and the size of the quantization regions. • ⇢ 7: Vector Quantization provides more flexibility towards modifications than Scalar Quantization. The flexibility of Vector Quantization towards modification increases with increasing dimension.
  • 12. Difference between Vector and Scalar Quantization contd… • ⇢ 8: Vector Quantization gives improved performance when there is sample-to-sample dependence in the input, while Scalar Quantization does not. • ⇢ 9: Vector Quantization gives improved performance even when there is no sample-to-sample dependence in the input, while Scalar Quantization does not. • ⇢ 10: Describing the decision boundaries between reconstruction levels is easier in Scalar Quantization than in Vector Quantization.
  • 13. Advantages of Vector Quantization over Scalar Quantization • Vector Quantization provides flexibility in choosing multidimensional quantizer cell shapes and in choosing a desired codebook size. • An advantage of VQ over SQ is the fractional resolution that can be achieved, which is very important for low bit-rate applications where low resolution is sufficient. • For a given rate, VQ results in a lower distortion than SQ. • VQ can utilize the memory of the source better than SQ.
  • 14. Linde-Buzo-Gray Algorithm • The need for multi-dimensional integration in the design of a vector quantizer was a challenging problem in the early days. • The main concept is to divide the training vectors into groups, find the most representative vector of each group, and then gather these representative vectors to form a codebook. The inputs are no longer scalars in the LBG algorithm.
  • 15. LBG Algorithm 1. Divide the image into blocks; each block can then be viewed as a k-dimensional vector. 2. Arbitrarily choose an initial codebook and treat its code vectors as centroids. The training vectors are then grouped: vectors belong to the same group when they have the same nearest centroid. 3. Find a new centroid for every group to obtain a new codebook. Repeat steps 2 and 3 until the centroids of every group converge. • Thus at every iteration the codebook becomes progressively better. This process is continued till there is no change in the overall distortion.
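The steps above can be sketched in Python as a k-means-style loop (a minimal sketch: the training data, random initialization, and fixed iteration count are illustrative assumptions; a full implementation would stop when the distortion change falls below a threshold):

```python
import random

def lbg(training, n_codes, iters=20):
    """Linde-Buzo-Gray codebook design: group training vectors by
    nearest centroid, recompute centroids, and repeat (steps 2-3)."""
    # Step 2: arbitrarily choose an initial codebook from the training set
    codebook = [list(v) for v in random.sample(training, n_codes)]
    for _ in range(iters):
        # Step 2: assign each training vector to its nearest centroid
        groups = [[] for _ in range(n_codes)]
        for v in training:
            i = min(range(n_codes),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])))
            groups[i].append(v)
        # Step 3: new centroid = mean of each non-empty group
        for i, g in enumerate(groups):
            if g:
                codebook[i] = [sum(c) / len(g) for c in zip(*g)]
    return codebook

random.seed(1)
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (11, 11)]
print(sorted(lbg(pts, 2)))  # two centroids, one near each cluster of points
```

With the two well-separated clusters above, the codebook converges to the two cluster means, illustrating how each iteration only improves the codebook.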
  • 16. Initializing the LBG Algorithm • The important thing we need to consider is a good set of initial quantization points, since the LBG algorithm only guarantees that the distortion from one iteration to the next will not increase; it does not guarantee convergence to the best codebook. • The performance of the LBG algorithm depends heavily on the initial codebook. • We will use the splitting technique to design the initial codebook. Other initialization approaches are: 1. Random selection (Hilbert technique) 2. Pairwise Nearest Neighbor (PNN) method.
  • 17. Empty Cell Problem • What do we do if one of the reconstruction (quantization) regions in some iteration is empty? • There might be no points that are closer to a given reconstruction point than to any other reconstruction point. • This is a problem because in order to update an output point, we need to take the average of the input vectors assigned to it. • In this case we would end up with an output point that is never used. • A common solution to the empty cell problem is to remove an output point that has no inputs associated with it and replace it with a point from the quantization region with the most training points.
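That replacement heuristic can be coded in a few lines (a sketch; the function name and the toy one-dimensional data are illustrative):

```python
def fix_empty_cells(codebook, groups):
    """Replace any code vector whose cell got no training vectors with a
    training vector taken from the most populated cell."""
    biggest = max(groups, key=len)  # cell with the most training points
    return [code if group else list(biggest[0])
            for code, group in zip(codebook, groups)]

codebook = [[0.0], [5.0]]
groups = [[[0.2], [0.4], [0.1]], []]      # second cell is empty
print(fix_empty_cells(codebook, groups))  # dead codeword replaced by [0.2]
```

After the fix, the next LBG iteration can re-partition the crowded cell between the two codewords instead of wasting one on an unused output.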
  • 18. Tree-Structured Vector Quantization • Another fast codebook design technique is tree-structured VQ, presented by Buzo. • The number of operations can be reduced by enforcing a certain structure on the codebook. • One such possibility is using a tree structure, which turns the codebook into a tree codebook; the method is called binary search clustering.
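The reduction in operations can be made concrete by counting distance computations (illustrative numbers; this assumes a balanced binary tree with two test-vector comparisons per level):

```python
import math

def full_search_ops(n_codes):
    """Full-search VQ compares the input against every codeword."""
    return n_codes

def tree_search_ops(n_codes):
    """Binary TSVQ makes two comparisons per level of the tree."""
    return 2 * math.ceil(math.log2(n_codes))

for n in (256, 4096):
    print(n, full_search_ops(n), tree_search_ops(n))
```

For a 4096-vector codebook the tree search needs only 24 distance computations instead of 4096, which is why the structure is worth the small loss in distortion noted on the next slide.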
  • 19. Tree-Structured Vector Quantization • The disadvantage of tree search is that we might not end up with the closest reconstruction point, so the distortion will be a little higher compared to a full-search quantizer. • The storage requirement will also be larger, since we have to store all the test vectors too.
  • 20. How to design a TSVQ 1. Obtain the average of all the training vectors, perturb it to obtain a second vector, and use these vectors to form a two-level VQ. 2. Call the vectors v0 and v1, and call the groups of training vectors that would be quantized to each of them g0 and g1. 3. Perturb v0 and v1 to get the initial vectors for a four-level VQ. 4. Use g0 to design a two-level VQ and g1 to design another two-level VQ. 5. Label the vectors v00, v01, v10, v11. 6. Split g0 using v00 and v01 into two groups g00 and g01. 7. Split g1 using v10 and v11 into two groups g10 and g11.
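The perturbation used in steps 1 and 3 is just a small scaling of each existing code vector (a sketch; the perturbation factor eps is an assumed choice, not specified in the slides):

```python
def split(vector, eps=0.01):
    """Perturb one code vector into two nearby initial vectors for the
    next two-level VQ (e.g. v0 -> initial v00 and v01)."""
    return ([c * (1 + eps) for c in vector],
            [c * (1 - eps) for c in vector])

v00, v01 = split([4.0, 8.0])
print(v00, v01)  # two vectors slightly above and below the original
```

Running a two-level LBG design from each such pair then refines the children, doubling the codebook size at every level of the tree.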
  • 21. Pruned Tree-Structured Vector Quantizer • Now that we have developed a tree-structured codebook, we can improve its rate-distortion performance by pruning: removing carefully selected subgroups reduces the size of the codebook and thus the rate. • But pruning may increase the distortion, so its main objective is to remove those groups that result in the best trade-off between rate and distortion. • We prune the tree by finding the subtree T that minimizes λT = (change in distortion if subtree T is pruned) / (change in rate if subtree T is pruned).
  • 22. Structured Vector Quantization • Structured codes impose a structure that allows for reduced implementation complexity by constraining the codewords or the codeword search. • Let L be the dimension of the VQ. If R is the bit rate, then L·2^(RL) scalars need to be stored, and L·2^(RL) scalar distortion calculations are required. • So the solution is to introduce some form of structure in the codebook and also in the quantization process. • The disadvantage of structured VQ is an inevitable loss in rate-distortion performance. • Different types of structured vector quantizers are: 1. Lattice quantizers 2. Tree-structured codes 3. Multistage codes 4. Product codes: gain/shape codes
  • 23. Lattice Vector Quantizer • VQ codebooks designed using the LBG algorithm complicate the quantization process and have no visible structure. • An alternative is lattice point quantization, since it admits a fast encoding algorithm. • For a bit rate of n bits/sample and spatial dimension v, the number of codebook vectors, or equivalently of lattice points used, is 2^(nv).
  • 24. How are tree-structured vector quantizers better? • Tree-structured vector quantization (TSVQ) reduces complexity by imposing a hierarchical structure on the partitioning. We study the design of optimal tree-structured vector quantizers that minimize the expected distortion subject to cost functions related to storage cost, encoding rate, or quantization time.
  • 25. Thanks!! Dr. Piyush Charan Assistant Professor, Department of ECE, Integral University, Lucknow Email: er.piyush.charan@gmail.com, piyush@iul.ac.in