SIR ISSAC NEWTON ARTS & SCIENCE COLLEGE
NAGAPATTINAM, TAMILNADU
R.Malarmani-P15274355
S.Vinodhini-P15274359
II M.Sc.,(Computer Science)
SUBMITTED TO
B.SOWBARNIKA M.Sc.,M.Phil.,
Assistant Professor,
Department of Computer Science.
INTRODUCTION
 Data compression may be viewed as a branch of information
theory in which the primary objective is to minimize the amount
of data to be transmitted.
 The purpose of this paper is to present and analyze a variety of
data compression algorithms.
 Data compression is the art of reducing the number of bits
needed to store or transmit data.
 In signal processing, data compression, source coding, or bit-
rate reduction involves encoding information using fewer bits
than the original representation.
 Compression can be either lossy or lossless. Lossless
compression reduces bits by identifying and eliminating
statistical redundancy.
Definition:
Lossless Compression
 Lossless compression is a class of data compression algorithms that
allows the original data to be perfectly reconstructed from the
compressed data.
 By contrast, lossy compression permits reconstruction only of an
approximation of the original data, though this usually
improves compression rates (and therefore reduces file sizes).
 Lossless data compression is used in many applications.
 For example, it is used in the ZIP file format and in the GNU tool gzip.
It is also often used as a component within lossy data compression
technologies.
LOSSLESS COMPRESSION METHODS
Run-length encoding
 Run-length encoding (RLE) is a very simple form of lossless data
compression in which runs of data (sequences of the same repeated value)
are stored as a single value and a count.
 Given an input string, write a function that returns the run-length
encoded string for that input string.
 For example, if the input string is “mmmmaaalarrrrrrviiinnoooo”,
then the function should return “m4a3l1a1r6v1i3n2o4”.
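The function described above can be sketched as follows (the function name is ours, not from the slides):

```python
def rle_encode(s):
    """Run-length encode: each run of a repeated character becomes char+count."""
    if not s:
        return ""
    out = []
    run_char, run_len = s[0], 1
    for ch in s[1:]:
        if ch == run_char:
            run_len += 1          # extend the current run
        else:
            out.append(run_char + str(run_len))  # flush the finished run
            run_char, run_len = ch, 1
    out.append(run_char + str(run_len))          # flush the final run
    return "".join(out)

print(rle_encode("mmmmaaalarrrrrrviiinnoooo"))  # m4a3l1a1r6v1i3n2o4
```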
Huffman coding
 Huffman coding is a lossless data compression algorithm.
 A Huffman code is a particular type of optimal prefix code that is
commonly used for lossless data compression.
 The process of finding and/or using such a code proceeds by means
of Huffman coding, an algorithm developed by David A. Huffman.
 Example symbol counts:

Symbol  Count
M       14
A       12
L       13
A       5
R       3
Total   47

[Figure: Huffman tree built from these counts, with internal nodes of
weight 47, 33, 25, and 8, yielding the code table: 000 M, 101 A, 111 L,
011 A, 001 R]
Huffman encoding
The problem is that of finding the minimum-length bit string that can be used to
encode a string of symbols.
Huffman's scheme uses a table of the frequency of occurrence of each symbol (or character) in
the input.
Input text: ABRACABARA
Compute character frequencies: A=5, B=3, R=2, C=1

[Figure: Huffman tree with root weight 11; branch 0 leads to A=5 and
branch 1 to a node of weight 6, which splits into a node of weight 3
(C=1 on branch 0, R=2 on branch 1) and B=3.]
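The tree construction above can be sketched with a min-heap of partial trees (a sketch; function names are ours). Repeatedly merging the two lowest-weight nodes reproduces the code lengths in the example: 1 bit for A, 2 for B, 3 each for C and R.

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table from a {symbol: frequency} dict."""
    # Heap entries: (frequency, tiebreak, tree); a tree is either a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, s) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two lowest-weight trees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))  # merge them
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")   # left branch: 0
            walk(tree[1], prefix + "1")   # right branch: 1
        else:
            codes[tree] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"A": 5, "B": 3, "R": 2, "C": 1})
```

With the example frequencies this yields a prefix-free code in which no codeword is a prefix of another, so the encoded bit stream can be decoded unambiguously.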
Lempel Ziv encoding
Lempel–Ziv–Welch (LZW) is a universal lossless data
compression algorithm created by Abraham Lempel, Jacob Ziv,
and Terry Welch.
 Lempel–Ziv encoding is a dictionary-based encoding.
Basic Idea
Create a dictionary (a table) of the strings used during communication.
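The dictionary idea can be sketched as follows (a simplified compressor; the function name and the choice to seed the dictionary from the input's own characters are ours):

```python
def lzw_compress(text):
    """LZW sketch: emit dictionary indices, growing the dictionary as we go."""
    # Seed the dictionary with every single character of the input.
    dictionary = {ch: i for i, ch in enumerate(sorted(set(text)))}
    w = ""          # longest string matched so far
    out = []
    for ch in text:
        wc = w + ch
        if wc in dictionary:
            w = wc  # keep extending the current match
        else:
            out.append(dictionary[w])         # emit code for longest match
            dictionary[wc] = len(dictionary)  # add the new string
            w = ch
    if w:
        out.append(dictionary[w])
    return out, dictionary

codes, table = lzw_compress("ABABABA")
```

Here the seven input characters compress to four dictionary indices, because repeated substrings such as "AB" and "ABA" are emitted as single codes once they enter the table.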
LOSSY COMPRESSION METHODS
Lossy compression methods
 Lossy compression, or irreversible compression, is the class
of data encoding methods that uses inexact approximations and
partial data discarding to represent the content.
Lossy compression is most commonly used to
compress multimedia data (audio, video, and images), especially in
applications such as streaming media and internet telephony.
 These techniques are used to reduce data size for storage,
handling, and transmitting content.
Image compression: JPEG
JPEG compression is used in a number of image file formats.
JPEG/Exif is the most common image format used by digital
cameras and other photographic image capture devices;
along with JPEG/JFIF, it is the most common format for storing
and transmitting photographic images on the World Wide Web.
The term "JPEG" is an acronym for the Joint Photographic Experts
Group, which created the standard.
Discrete cosine transform
A discrete cosine transform (DCT) expresses a finite sequence
of data points in terms of a sum of cosine functions oscillating at
different frequencies.
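The definition above can be written out directly (an unnormalized DCT-II sketch; the function name is ours, and real JPEG codecs use a fast, scaled variant rather than this direct sum):

```python
import math

def dct2(x):
    """Unnormalized DCT-II: X_k = sum_n x_n * cos(pi/N * (n + 1/2) * k)."""
    n_pts = len(x)
    return [sum(x[n] * math.cos(math.pi / n_pts * (n + 0.5) * k)
                for n in range(n_pts))      # one cosine per frequency k
            for k in range(n_pts)]

# A constant signal has all its energy in the k = 0 (DC) coefficient.
spectrum = dct2([1.0, 1.0, 1.0, 1.0])
```

This energy-compaction property, where smooth signals concentrate into a few low-frequency coefficients, is what makes the DCT useful for compression: the remaining coefficients can be quantized coarsely or discarded.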
Video compression: MPEG
The Moving Picture Experts Group (MPEG) is a working
group of authorities that was formed by ISO and IEC to set standards
for audio and video compression and transmission.
It was established in 1988 by the initiative of Hiroshi Yasuda (Nippon
Telegraph and Telephone) and Leonardo Chiariglione.
CONCLUSION
 The report presents a novel approach for image compression using the
Hartley transform.
 Magnitude and phase compression using this transformation has
shown good performance.
 Magnitude and phase were processed separately.
 Quantizing the frequency samples in fewer bits has increased the
compression ratio. Furthermore, the distributions used to generate the
noise influence the result significantly.
THANKS TO
P. MEERABAI M.C.A.,M.Phil.,
HEAD OF THE DEPARTMENT,
COMPUTER SCIENCE.
Data compression