Prepared by – JAYPAL SINGH CHOUDHARY
Graphics from - http://plus.maths.org/issue23/features/data/data.jpg
Why Data Compression
Reduce the amount of data required to
represent a source of information.
Keep the output as close to the
original input as possible.
Reduce the space required to store the data.
Reduce the time needed for data transmission.
SOURCES - www.data-compression.com/index.shtml
Types of Compression
Two basic types: lossless and lossy. The basic principle of both:
Graphics from - http://img.zdnet.com/techDirectory/LOSSY.GIF
Lossless Compression
Here the compressing and decompressing
algorithms are exact inverses of each other,
so the original data is recovered bit for bit.
When data contains repeated strings, these
can be replaced by a special marker.
[Diagram: original data vs. compressed data]
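As a small illustration (not from the slides; Python is an assumed choice here), replacing each run of a repeated symbol with a (count, symbol) marker pair looks like this:

```python
def rle_encode(data):
    """Replace each run of a repeated symbol with a (count, symbol) marker."""
    out = []
    i = 0
    while i < len(data):
        j = i
        # Extend j to the end of the current run of identical symbols.
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def rle_decode(markers):
    """Expand (count, symbol) markers back into the original string."""
    return "".join(symbol * count for count, symbol in markers)

compressed = rle_encode("AAAABBBCC")   # [(4, 'A'), (3, 'B'), (2, 'C')]
```

Decoding the markers recovers the input exactly, which is what makes this a lossless scheme.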
Here short codes are used for frequent
symbols and long codes for infrequent ones.
Three common examples are:
1. Morse code.
2. Huffman encoding.
3. Lempel-Ziv-Welch (LZW) encoding.
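A minimal Huffman coder sketch (Python is an assumption; the slides only name the technique) that assigns short codes to frequent symbols and long codes to infrequent ones:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: frequent symbols get short codes."""
    freq = Counter(text)
    # Each heap entry: (frequency, tie-breaker, tree) where tree is a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least frequent subtrees.
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (t1, t2)))
        i += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")
```

Since "a" occurs 5 times out of 11 symbols, it receives a one-bit code, and the whole string encodes in 23 bits instead of 88 with 8-bit characters.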
Extremely useful for sending video:
commercial TV transmits 30 frames per second.
References - www.data-compression.com/lossless.shtml
Lossy Compression
Some data in the output is lost, but the
loss is not detected by users.
Mostly used for pictures, videos, and audio.
Basic techniques are :
Developed by Inlet Technologies in
cooperation with Microsoft and Scientific
Works with media files for
mobile, portable, web, and high
A literature compendium for a large variety of
audio coding systems was published in the IEEE
Journal on Selected Areas in Communications
(JSAC), February 1988. While there were
some papers from before that time, this
collection documented an entire variety of
finished, working audio coders, nearly all of
them using perceptual (i.e., masking)
techniques and some kind of frequency
analysis and back-end noiseless coding.
Using Neural Networks
- Introduction to neural networks.
- Back-propagation (BP) neural networks.
- Image compression using BP networks.
- Comparison with existing image compression methods.
Image Compression using BP Networks
- Future of image compression (our visual system).
- Narrow-channel Karhunen-Loeve (K-L) transform.
- Entropy coding of the state vectors h_i
at the hidden layer.
- A set of image samples is used to
train the network.
- This is equivalent to compressing
the input into the narrow channel
and then reconstructing the input
from the hidden layer.
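The narrow-channel idea above can be sketched as a tiny linear autoencoder trained by back-propagation (a NumPy illustration under assumed sizes: 4x4 image blocks and 4 hidden units; not the authors' actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image blocks": 200 samples of 16 pixels (4x4 blocks), values in [0, 1].
X = rng.random((200, 16))

# Narrow-channel network: 16 inputs -> 4 hidden units -> 16 outputs.
W1 = rng.normal(0, 0.1, (16, 4))   # encoder weights
W2 = rng.normal(0, 0.1, (4, 16))   # decoder weights
lr = 0.05

for _ in range(2000):
    H = X @ W1            # hidden state h_i (the compressed code)
    Y = H @ W2            # reconstruction from the hidden layer
    E = Y - X             # reconstruction error
    # Back-propagation: gradients of the mean-squared error w.r.t. weights.
    gW2 = H.T @ E / len(X)
    gW1 = X.T @ (E @ W2.T) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

mse = float(np.mean((X @ W1 @ W2 - X) ** 2))
```

Training squeezes each 16-pixel block through the 4-unit hidden layer, which is exactly the "compress into the narrow channel, then reconstruct" behavior the slide describes.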
- The image is subdivided into
non-overlapping blocks of n x n
pixels. Each block represents an
N-dimensional vector x, N = n x n,
in N-dimensional space. The
transformation process maps this
set of vectors into y = Wx.
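A sketch of this block-and-transform step (Python/NumPy, with an invertible random W standing in for learned transform weights):

```python
import numpy as np

def image_to_blocks(img, n):
    """Split an image into non-overlapping n x n blocks, each flattened
    into an N-dimensional vector (N = n * n)."""
    h, w = img.shape
    blocks = (img[r:r + n, c:c + n].reshape(-1)
              for r in range(0, h - n + 1, n)
              for c in range(0, w - n + 1, n))
    return np.array(list(blocks))

rng = np.random.default_rng(1)
img = rng.random((8, 8))             # toy 8x8 "image"
X = image_to_blocks(img, 4)          # 4 blocks, each a 16-vector
W = rng.normal(size=(16, 16))        # forward transform (stand-in for learned W)
Y = X @ W.T                          # y = W x for every block vector x
X_back = Y @ np.linalg.inv(W).T      # inverse transform recovers the blocks
```

Because W is invertible, applying the inverse transform restores each block exactly; a lossy coder would quantize Y before inverting.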
Transform Coding with a
Multilayer Neural Network:
The inverse transformation is needed to
reconstruct the original image.
Wavelet Packet Decomposition
The image is first put through a few
levels of wavelet packet decomposition.
- Each of the decomposed wavelet
sections is divided by the quantization
value and rounded to the nearest integer.
- This creates redundancy in the data,
which is easier to work with.
- Quantization is not lossless.
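The quantization step above can be sketched as follows (Python/NumPy; the quantization value 1.0 and the sample coefficients are arbitrary choices for illustration):

```python
import numpy as np

def quantize(coeffs, q):
    """Divide coefficients by the quantization value and round to the
    nearest integer; small coefficients collapse to 0 (lossy)."""
    return np.round(coeffs / q).astype(int)

def dequantize(indices, q):
    """Approximate reconstruction: scale the integer indices back up."""
    return indices * q

coeffs = np.array([0.3, 4.9, -2.1, 0.02, 5.1])
indices = quantize(coeffs, 1.0)        # repeated values create redundancy
restored = dequantize(indices, 1.0)    # close to, but not equal to, coeffs
```

The rounded indices contain many repeated values (here two 0s and two 5s), which is the redundancy the later lossless stage exploits; the original coefficients cannot be recovered exactly.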
Neural Network Approximation
- An example of the vector with the trained
neural network attempting to fit it.
Lossless Encoding
- The entire data stream is then run-length encoded.
- Afterwards, we can save the data
using the ZIP file format, which
applies further lossless encoding.
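A sketch of this final lossless stage (using Python's zlib module, which implements the same DEFLATE algorithm used inside the ZIP format; the sparse test data is an assumption):

```python
import zlib

import numpy as np

# Quantized coefficients are mostly zeros with a few repeated integers,
# so a generic lossless coder shrinks them dramatically.
quantized = np.zeros(1000, dtype=np.int8)
quantized[::50] = 3                      # a few non-zero coefficients

raw = quantized.tobytes()
packed = zlib.compress(raw, level=9)

assert zlib.decompress(packed) == raw    # lossless: exact round trip
ratio = len(raw) / len(packed)           # large compression ratio
```

Unlike the quantization step, this stage is fully reversible, so it adds compression without adding any further loss.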
- Neural networks can be used to compress images.
- However, they are probably not the
best way to go unless the data can be
represented in some easier way.
- Most of the compression came from the
quantization, organization, and
lossless compression stages.
I chose the text from –
because it fulfills my requirements for the topic.
I chose the graphics from –
because it clarifies the idea I want to explain.