INTRA-FRAME COMPRESSION USING ENTROPY CODING
ANIL ULAS KOCAK
Ankara University
Golbasi ANKARA
anilulaskocak@gmail.com
Abstract
In this project, I focus on intra-frame compression using two classical entropy coding techniques, Huffman coding and arithmetic coding. Both are implemented separately and give good experimental results, which make it straightforward to compare the two techniques and decide which one performs better.
1. Introduction
Compression is an old and common problem in data storage and data transfer. As technology evolves, big data plays an increasingly important role and strains both bandwidth and storage. To address this, the main contributions target the bandwidth itself and the data that is sent or received over it. Approximately 35 years ago, researchers noticed that data compression would play a key role in data transfer, and since then many methods have been proposed in search of the most effective solution.
In this work, my main aim is to implement two methods that count as milestones of the data compression field, Huffman coding and arithmetic coding, and to analyze both implementations. After implementation, I focus on their compression capacity and performance. To evaluate the results, codeword length, entropy, bit rate in bits per pixel, and compression ratio are used to identify the strengths of each method.
2. Experimental Works
The experiments for this project are implemented in MATLAB. The pipeline consists of the Discrete Cosine Transform (DCT), quantization, sending/receiving the binary data, and the inverse of the previous transformations. The first step is to transform the image with the DCT and quantize it with a constant quantization table in 8x8 blocks. After zigzag scanning, I extract symbols consisting of the DC value, the AC values and zero runs. Both Huffman and arithmetic entropy coding are used separately to transform these values into a bitstream. Codeword length, entropy, bit rate in bits per pixel, and compression ratio are used as metrics to decide which coder performs better.
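As a minimal sketch of this front end, the MATLAB fragment below transforms and quantizes an image in 8x8 blocks. The -128 level shift and the standard JPEG luminance table are assumptions for illustration only; the constant quantization table used in the attached code may differ.

% Minimal sketch of the DCT + quantization front end (assumed details:
% grayscale input, -128 level shift, standard JPEG luminance table).
I = double(imread('Lena.tif')) - 128;        % level-shifted 512x512 test image
Q = [16 11 10 16  24  40  51  61;
     12 12 14 19  26  58  60  55;
     14 13 16 24  40  57  69  56;
     14 17 22 29  51  87  80  62;
     18 22 37 56  68 109 103  77;
     24 35 55 64  81 104 113  92;
     49 64 78 87 103 121 120 101;
     72 92 95 98 112 100 103  99];
qcoef = blockproc(I, [8 8], @(b) round(dct2(b.data) ./ Q));  % quantized DCT coefficients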
Figure 1: Schema of image compression.
In more detail, I extract symbols consisting of a zero-run count and an AC magnitude, (zeros, | AC |), from the array produced by zigzag scanning. This procedure is applied iteratively over the entire image to obtain the symbol dictionary and the source symbols that will be transformed into the bitstream. From the dictionary symbols and the source symbols, the frequency of occurrence of each symbol is counted, and the probability of each symbol is stored in a probability array. After obtaining the probabilities, the next step is to use an entropy coding algorithm to transform the source symbols into a bitstream.
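The following is a minimal sketch of that per-block symbol extraction; the function name and details are assumptions for illustration, and the attached make-symbols.m and make-p.m may differ.

% Hypothetical sketch of (zeros, |AC|) symbol extraction for one
% zigzag-scanned block; 'zz' is the 1x64 zigzag-ordered quantized block.
function symlist = block_symbols(zz)
    ac   = zz(2:end);                    % skip the DC coefficient
    last = find(ac ~= 0, 1, 'last');     % index of the final nonzero AC value
    if isempty(last), last = 0; end      % all-zero block: only the EOB symbol
    symlist = {}; run = 0;
    for k = 1:last
        if ac(k) == 0
            run = run + 1;               % count zeros before the next AC value
        else
            symlist{end+1} = [run, abs(ac(k))];  %#ok<AGROW>  (zeros, |AC|) pair
            run = 0;
        end
    end
    symlist{end+1} = 'EOB';              % end-of-block marker
end

Applying this to every block yields the source symbol sequence; counting how often each unique symbol occurs and dividing by the total count gives the probability array.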
Huffman coding is a very old but important lossless entropy coding technique in the data compression field. The symbol probabilities are sorted and the two smallest are added together; this step is repeated until a single probability of 1 remains at the top. Then, by assigning binary '0' to the upper branch and binary '1' to the lower branch at each merge, the codewords are extracted.
Figure 2: Example of Huffman tree and extracting codewords
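A minimal sketch of this merge-and-label procedure is given below; it is not the attached encoder-huffman.m, only an illustration of how codewords such as those in Figure 2 can be obtained from a probability vector.

% Minimal Huffman code construction: repeatedly merge the two least
% probable nodes, prepending '0' to one branch and '1' to the other.
function codes = huffman_sketch(p)       % p: probabilities of the symbols
    n      = numel(p);
    codes  = repmat({''}, 1, n);         % codeword accumulated per symbol
    groups = num2cell(1:n);              % symbols contained under each tree node
    probs  = p(:)';                      % working node probabilities
    while numel(probs) > 1
        [probs, order] = sort(probs, 'descend');
        groups = groups(order);
        for s = groups{end-1}, codes{s} = ['0' codes{s}]; end  % '0' to the upper branch
        for s = groups{end},   codes{s} = ['1' codes{s}]; end  % '1' to the lower branch
        groups{end-1} = [groups{end-1} groups{end}];           % merge the two nodes
        probs(end-1)  = probs(end-1) + probs(end);
        groups(end)   = [];  probs(end) = [];
    end
end

For example, huffman_sketch([0.5 0.25 0.125 0.125]) returns codewords of lengths 1, 2, 3 and 3 bits.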
Arithmetic coding is also an old, but universally accepted, lossless entropy coding algorithm. Unlike Huffman coding, the bitstream is generated from the probability interval of each symbol, called its range: after a symbol is encoded, the current interval is rescaled to that symbol's range, and this continues until every source symbol has been encoded and appended to the bitstream. After obtaining the bitstream, the final evaluation step is to compute the following metrics: the entropy H = -Σ_i p(s_i) log2 p(s_i); the mean codeword length L = Σ_i p(s_i) l(s_i), where l(s_i) is the length of the codeword of symbol s_i; the total number of bits; the bit rate, (total number of bits) / (image height × image width), in bits per pixel; the compression ratio, (uncompressed image size) / (compressed bitstream size); the efficiency E = H / L; and finally the redundancy R = 1 - E.
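A minimal sketch of these metric computations is shown below with placeholder inputs; the symbol probabilities and codeword lengths here are assumptions, while the bit count and image size match the Huffman result reported for image (a) in Section 3, so the bit rate and compression ratio reproduce the values in Table 1.

% Evaluation metrics on placeholder inputs.
p       = [0.4 0.3 0.2 0.1];          % assumed symbol probabilities
codelen = [1 2 3 3];                  % assumed codeword lengths in bits
nbits   = 164250;                     % total bits in the generated bitstream
H_img   = 512;  W_img = 512;          % image height and width
H  = -sum(p .* log2(p));              % entropy, bits/symbol
L  =  sum(p .* codelen);              % mean codeword length, bits/symbol
br =  nbits / (H_img * W_img);        % bit rate, bits per pixel (about 0.627)
CR = (8 * H_img * W_img) / nbits;     % compression ratio vs. an 8 bit/pixel original (about 12.8)
E  =  H / L;                          % efficiency
R  =  1 - E;                          % redundancy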
Figure 3: Example of Arithmetic Coding
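To complement Figure 3, the following toy MATLAB fragment illustrates fixed-model arithmetic coding by interval narrowing. It uses floating point and no bit-level renormalization, so it only shows the idea; the attached encoder-arithmetic.m is the actual implementation and may differ.

% Toy fixed-model arithmetic coding: successively narrow [lo, hi) to the
% sub-interval (range) of each encoded symbol.
p   = [0.5 0.3 0.2];              % assumed fixed symbol probabilities
cum = [0 cumsum(p)];              % cumulative boundaries: [0 0.5 0.8 1.0]
msg = [1 3 2 1];                  % toy source sequence of symbol indices
lo = 0; hi = 1;
for s = msg
    w  = hi - lo;                 % width of the current interval
    hi = lo + w * cum(s + 1);     % upper boundary of the symbol's range
    lo = lo + w * cum(s);         % lower boundary of the symbol's range
end
tag   = (lo + hi) / 2;            % any number in [lo, hi) identifies the message
nbits = ceil(-log2(hi - lo)) + 1; % roughly the bits needed to specify the interval
fprintf('tag = %.6f, about %d bits\n', tag, nbits);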
3. Experimental Results
In this section, I describe the steps of this work and evaluate them separately. As the first step of the project, after obtaining the probabilities from the symbols and their numbers of occurrence, the entropy of the source consisting of the dictionary symbols (zeros, | AC |) and the EOB symbol is computed with H = -Σ_i p(s_i) log2 p(s_i) (zigzag.m, make-symbols.m and make-p.m in the attached MATLAB files). The result is H = 3.847 bits/symbol for image (a) and H = 3.939 bits/symbol for image (b) of Figure 4.
(a) Lena.tif, 512x512    (b) ice cif 001.tif, 288x352
Figure 4: Test images
To find the maximum bit size required for the quantized DC value of each 8x8 block, the largest DC value must be found. After scanning all DC values in the image, the bit size of the largest one is assigned to every DC value, so all DC values are stored with exactly the same number of bits as the largest one. In addition, one sign bit per DC value is added, and the sum of these bit sizes gives the total bit budget for the quantized DC values (DCbits.m in the attached MATLAB files). This amounts to 28672 bits for picture (a) and 11088 bits for picture (b) of Figure 4.
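A minimal sketch of this fixed-length DC budget follows; the placeholder matrix stands in for the quantized DCT coefficients of the whole image, and the attached DCbits.m may differ in detail.

% Fixed-length budget for the quantized DC values: every DC value gets the
% bit size of the largest one, plus one sign bit.
qcoef = round(8 * randn(512));                       % placeholder quantized coefficients
dc    = qcoef(1:8:end, 1:8:end);                     % one DC coefficient per 8x8 block
bits_per_dc   = ceil(log2(max(abs(dc(:))) + 1)) + 1; % magnitude bits + sign bit
total_dc_bits = bits_per_dc * numel(dc)              % e.g. 7 bits x 4096 blocks = 28672 for (a)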
The first entropy coding implementation of this work is Huffman coding (encoder-huffman.m). After implementing Huffman coding and obtaining the codewords from the symbol probabilities of each image separately, the transferable bitstream is generated from the source symbols. From the bitstream and the other variables, the mean codeword length, bit rate, compression ratio, efficiency and redundancy are computed with the formulas above.
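As a hedged reference point alongside the attached encoder-huffman.m, the same round trip can be sketched with the Communications Toolbox functions; the toy symbol sequence below stands in for the real (zeros, |AC|) source symbols.

% Huffman encode/decode round trip on a toy symbol sequence
% (Communications Toolbox; not the attached implementation).
source = [1 1 2 1 3 2 1 4 2 1 1 3];         % toy sequence of symbol IDs
counts = accumarray(source(:), 1);          % occurrences of each ID
p      = counts(:)' / sum(counts);          % symbol probabilities
dict   = huffmandict(1:numel(p), p);        % Huffman codebook
bits   = huffmanenco(source, dict);         % 0/1 bitstream
back   = huffmandeco(bits, dict);           % lossless: isequal(back(:), source(:)) is true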
Table 1: Huffman coding performance

Huffman coding                       Image (a)    Image (b)
Total Number of Bits                    164250        57383
Mean Codeword Length (bits/symbol)       3.876        3.967
Bit rate (bits/pixel)                    0.627        0.566
Compression Ratio (times)                12.8         14.1
Efficiency                               0.993        0.993
Redundancy                               0.007        0.007
The Total Number of Bits metric is related to the image size, the number of unique symbols in the symbol dictionary, and the symbols with a low frequency of occurrence. Although image (b) is smaller, it has almost the same number of unique symbols, yet it still produces a shorter bitstream. Huffman coding is very effective for these test images because the mean codeword length is almost equal to the entropy. Looking at the efficiency, it is almost 1 (the maximum) and the redundancy is almost 0 (the minimum). The bit rate and the corresponding compression ratio are also at an acceptable level. We can conclude that Huffman coding is a very appropriate entropy coding algorithm for both images.
After analyzing these results and metrics, dekodlama.m is called and the reconstructed image is obtained. The last metric of the analysis is PSNR. Because Huffman coding is a lossless entropy coding algorithm, the only loss occurs in the quantization stage of the image. As expected, the smaller image has the higher PSNR: PSNR = 35.78 dB for image (a) and PSNR = 36.76 dB for image (b).
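The PSNR used here can be sketched as a small helper for 8-bit images; the function name is an assumption and not part of the attached code.

% PSNR between the original 8-bit image and the reconstruction returned by
% the decoder (only the quantization stage introduces loss).
function val = psnr8(orig, rec)
    mse = mean((double(orig(:)) - double(rec(:))).^2);  % mean squared error
    val = 10 * log10(255^2 / mse);                      % PSNR in dB
end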
The other entropy coding algorithm in this work is arithmetic coding. encoder-arithmetic.m is called to implement it and obtain results on the test images. A fixed model is used in this implementation, so the probabilities are not updated iteratively. Because arithmetic coding depends less on extended sources, it is expected that, as the symbol dictionary grows, arithmetic coding loses relatively less efficiency than Huffman coding.
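As a hedged illustration of such a fixed model, the Communications Toolbox functions arithenco/arithdeco take a static vector of symbol counts; the attached encoder-arithmetic.m implements the coder itself, so the following is only a reference point.

% Fixed-model arithmetic encode/decode round trip on a toy symbol sequence
% (Communications Toolbox; the counts vector is the static model).
seq    = [1 1 2 1 3 2 1 4 2 1 1 3];              % toy sequence of symbol IDs
counts = accumarray(seq(:), 1)';                 % fixed symbol counts
code   = arithenco(seq, counts);                 % arithmetic-coded 0/1 vector
dseq   = arithdeco(code, counts, numel(seq));    % lossless round trip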
Table 2: Arithmetic coding performance

Arithmetic coding              Image (a)    Image (b)
Total Number of Bits              167528        58695
Bit rate (bits/pixel)              0.639        0.579
Compression Ratio (times)          12.5         13.8
After the arithmetic coding implementation, we see that the total number of bits is larger for image (a) than for image (b), as expected, because, as in the Huffman implementation, the total number of bits is related to the image size. The bit rate is accordingly higher, and the compression ratio lower, for image (a) than for image (b).
4. Evaluation
Huffman and arithmetic coding are two classical entropy coding techniques. After implementing both algorithms in MATLAB, the results are summarized in Table 3.
Table 3: Comparison of arithmetic and Huffman coding for the test images

                               Arithmetic coding       Huffman coding
Image                            (a)        (b)          (a)        (b)
Total Number of Bits           167528      58695       164250      57383
Bit rate (bits/pixel)           0.639      0.579        0.627      0.566
Compression Ratio (times)       12.5       13.8         12.8       14.1
In conclusion, it was expected before implementation that the image with the larger size would require more bits, and this is confirmed experimentally. It was also expected that arithmetic coding would need fewer bits than Huffman coding on the same test image, since it is the newer technique. The reason arithmetic coding produces a larger total number of bits than Huffman coding here seems to be that Huffman coding performs very well on images that do not contain extended sources or too many unique symbols. In other words, Huffman coding achieves a good compression ratio, bit rate and overall performance on images whose symbol dictionary (alphabet) is small. If test images with more unique symbols or larger alphabets were used, arithmetic coding would be expected to perform better than Huffman coding. Another advantage of arithmetic coding over Huffman coding is the option of choosing an adaptive model, which the Huffman implementation here does not offer: an adaptive arithmetic model lets the symbol probabilities vary over time, whereas the Huffman codebook is built from fixed symbol probabilities.