International Journal Of Computational Engineering Research (ijceronline.com) Vol. 2Issue. 5



Design of Test Data Compressor/Decompressor Using XMatchPro Method

C. Suneetha*1, V.V.S.V.S. Ramachandram*2
1 Research Scholar, Dept. of E.C.E., Pragathi Engineering College, JNTU-K, A.P., India
2 Associate Professor, Dept. of E.C.E., Pragathi Engineering College, E.G. Dist., A.P., India

Abstract
Higher circuit densities in system-on-chip (SOC) designs have led to a drastic increase in test data volume. Larger test data size demands not only higher memory requirements, but also an increase in testing time. Test data compression/decompression addresses this problem by reducing the test data volume without affecting the overall system performance. This paper presents the XMatchPro algorithm, which combines the advantages of dictionary-based and bit-mask-based techniques. Our test compression technique uses dictionary and bit-mask selection methods to significantly reduce testing time and memory requirements. We have applied our algorithm to various benchmarks and compared our results with existing test compression/decompression techniques. Our approach achieves a compression efficiency of 92%, and generates up to 90% improvement in decompression efficiency compared to other techniques, without introducing any additional performance or area overhead.

Keywords - XMatchPro, decompression, test compression, dictionary selection, SOC.

I. INTRODUCTION
1.1. Objective
With the increase in silicon densities, it is becoming feasible for compression systems to be implemented on chip. A system with a distributed memory architecture is based on having data compression and decompression engines working independently on different data at the same time. This data is stored in memory distributed to each processor. The objective of the project is to design a lossless data compression system which operates at high speed to achieve a high compression rate. By using the architecture of compressors, the data compression rates are significantly improved, and inherent scalability of the architecture is possible. The main parts of the system are the XMatchPro-based data compressors and the control blocks providing control signals for the data compressors, allowing appropriate control of the routing of data into and from the system. Each data compressor can process four bytes of data into and from a block of data in every clock cycle. The data entering the system therefore needs to be clocked in at a rate of 4 bytes per clock cycle, to ensure that adequate data is present for all compressors to process rather than leaving them idle.

1.2. Goal of the Thesis
To achieve higher decompression rates using a compression/decompression architecture with the least increase in latency.

1.3. Compression Techniques
At present there is an insatiable demand for ever-greater bandwidth in communication networks and ever-greater storage capacity in computer systems. This has led to the need for efficient compression techniques. Compression is the process required either to reduce the volume of information to be transmitted (text, fax and images) or to reduce the bandwidth required for its transmission (speech, audio and video). The compression technique is applied to the source information prior to its transmission. Compression algorithms can be classified into two types, namely:

- Lossless Compression
- Lossy Compression

1.3.1. Lossless Compression
In a lossless compression algorithm, the aim is to reduce the amount of source information to be transmitted in such a way that, when the
Issn 2250-3005(online)                                September| 2012                                Page 1190
compressed information is decompressed, there is no loss of information. Lossless compression is therefore said to be reversible; i.e., data is not altered or lost in the process of compression or decompression, and decompression generates an exact replica of the original object. The various lossless compression techniques are:

- Packbits encoding
- CCITT Group 3 1D
- CCITT Group 3 2D
- Lempel-Ziv-Welch (LZW) algorithm
- Huffman
- Arithmetic

Example applications of lossless compression are transferring data over a network as a text file (since in such applications it is normally imperative that no part of the source information is lost during either the compression or decompression operations), file storage systems (tapes, hard disk drives, solid state storage, file servers) and communication networks (LAN, WAN, wireless).

1.3.2 Lossy Compression
The aim of lossy compression algorithms is normally not to reproduce an exact copy of the source information after decompression, but rather a version of it that is perceived by the recipient as a true copy. The lossy compression algorithms are:

- JPEG (Joint Photographic Experts Group)
- MPEG (Moving Picture Experts Group)
- CCITT H.261 (Px64)

Example applications of lossy compression are the transfer of digitized images and of audio and video streams. In such cases, the sensitivity of the human eye or ear is such that any fine details that may be missing from the original source signal after decompression are not detectable.

1.3.3 Text Compression
There are three different types of text - unformatted, formatted and hypertext - and all are represented as strings of characters selected from a defined set. The compression algorithm associated with text must be lossless, since the loss of just a single character could modify the meaning of a complete string. Text compression is restricted to the use of entropy encoding and, in practice, statistical encoding methods. There are two types of statistical encoding methods used with text: one uses single characters as the basis for deriving an optimum set of code words, and the other uses variable-length strings of characters. Two examples of the former are the Huffman and arithmetic coding algorithms, and an example of the latter is the Lempel-Ziv (LZ) algorithm. The majority of work on hardware approaches to lossless data compression has used an adapted form of the dictionary-based Lempel-Ziv algorithm, in which a large number of simple processing elements are arranged in a systolic array [1], [2], [3], [4].

II. PREVIOUS WORK ON LOSSLESS COMPRESSION METHODS
A second Lempel-Ziv method used a content addressable memory (CAM) capable of performing a complete dictionary search in one clock cycle [5], [6], [7]. The search for the most common string in the dictionary (normally the most computationally expensive operation in the Lempel-Ziv algorithm) can be performed by the CAM in a single clock cycle, while the systolic array method uses a much slower deep pipelining technique to implement its dictionary search. However, compared to the CAM solution, the systolic array method has advantages in terms of reduced hardware costs and lower power consumption, which may be more important criteria in some situations than having faster dictionary searching. In [8], the authors show that hardware main memory data compression is both feasible and worthwhile. The authors also describe the design and implementation of a novel compression method, the XMatchPro algorithm, and exhibit the substantial impact such memory compression has on overall system performance. The adaptation of compression code for parallel implementation is investigated by Jiang and Jones [9]. They recommended the use of a processing array arranged in a tree-like structure. Although compression can be implemented in this manner, the implementation of the decompressor's search and decode stages in hardware would greatly increase the complexity of the design, and it is likely that these aspects would need to be implemented sequentially. An FPGA implementation of a binary arithmetic coding architecture that is able to process 8 bits per clock cycle, compared to the standard 1 bit per cycle, is described by Stefo et al. [10]. Although little research has been performed on architectures involving several independent compression units working in a concurrent cooperative manner, IBM has introduced the MXT chip [11], which has four independent compression engines operating on a shared memory area. The four Lempel-Ziv compression engines are used to provide data throughput sufficient for memory compression in computer servers. Adaptation of software compression algorithms to make use of multiple-CPU systems was demonstrated by research of Penhorn [12] and Simpson and Sabharwal [13].
Penhorn used two CPUs to compress data using a technique based on the Lempel-Ziv algorithm and showed that useful compression rate improvements can be achieved, but only at the cost of increasing the learning time for the dictionary. Simpson and Sabharwal described the software implementation of a compression system for a multiprocessor system based on the parallel architecture developed by Gonzalez-Smith and Storer [14].

Statistical Methods
Statistical modeling of a lossless data compression system is based on assigning values to events depending on their probability: the higher the value, the higher the probability. The accuracy with which this frequency distribution reflects reality determines the efficiency of the model. In Markov modeling, predictions are made based on the symbols that precede the current symbol. Statistical methods in hardware are restricted either to simple higher-order modeling using binary alphabets, which limits speed, or to simple multi-symbol alphabets using zeroth-order models, which limits compression. Binary alphabets limit speed because only a few bits (typically a single bit) are processed in each cycle, while zeroth-order models limit compression because they can only provide an inexact representation of the statistical properties of the data source.

Dictionary Methods
Dictionary methods try to replace a symbol or group of symbols with a dictionary location code. Some dictionary-based techniques use simple uniform binary codes to process the information supplied. Both software- and hardware-based dictionary models achieve good throughput and competitive compression. The UNIX utility 'compress' uses the Lempel-Ziv-2 (LZ2) algorithm, and the Data Compression Lempel-Ziv (DCLZ) family of compressors, initially invented by Hewlett-Packard [16] and currently being developed by AHA [17], [18], also uses LZ2 derivatives. Bunton and Borriello present another LZ2 implementation in [19] that improves on the Data Compression Lempel-Ziv method. It uses a tag attached to each dictionary location to identify which node should be eliminated once the dictionary becomes full.

XMatchPro Based System
The lossless data compression system is a derivative of the XMatchPro algorithm, which originates from previous research of the authors [15] and from advances in FPGA technology. The flexibility provided by using this technology is of great interest, since the chip can be adapted to the requirements of a particular application easily. The drawbacks of some of the previous methods are overcome by using the XMatchPro algorithm in the design. The objective is then to obtain better compression ratios while still maintaining a high throughput, so that the compression/decompression processes do not slow the original system down.

Usage of XMatchPro Algorithm
The lossless data compression system designed uses the XMatchPro algorithm. The XMatchPro algorithm uses a fixed-width dictionary of previously seen data and attempts to match the current data element with an entry in the dictionary. It works by taking a 4-byte word and trying to match, or partially match, this word with past data. This past data is stored in a dictionary, which is constructed from a content addressable memory. As each entry is 4 bytes wide, several types of matches are possible. If the bytes do not match any data present in the dictionary, they are transmitted with an additional miss bit. If all the bytes match, the match location and match type are coded and transmitted, and the match is then moved to the front of the dictionary. The dictionary is maintained using a move-to-front strategy, whereby a new tuple is placed at the front of the dictionary while the rest move down one position. When the dictionary becomes full, the tuple in the last position is discarded, leaving space for a new one.

The coding function for a match is required to code several fields as follows. A zero followed by:
1) Match location: the binary code associated with the matching location.
2) Match type: indicates which bytes of the incoming tuple have matched.
3) Characters that did not match, transmitted in literal form.

A description of the XMatchPro algorithm in pseudo-code is given in the figure below.

Pseudo Code for XMatchPro Algorithm
With the increase in silicon densities, it is becoming feasible for XMatchPros to be implemented on a single chip. A system with a distributed memory architecture is based on having data compression and decompression engines working independently on different data at the same time. This data is stored in memory distributed to each processor. There are several approaches in which data can be routed to and from the compressors, and these affect the speed, compression and complexity of the system. Lossless compression removes redundant information from the data while it is transmitted or before it is stored in memory; lossless decompression reintroduces the redundant information to recover fully the original data.
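The pseudo-code figure referenced above does not survive in this text version. Purely as an illustration, a minimal software model of the match and move-to-front loop described in this section might look as follows; the function name, the token format, and the exact partial-match and growth policy are assumptions for the sketch, not the paper's hardware coding:

```python
def xmatchpro_step(dictionary, tuple4, dict_size=16):
    """Compress one 4-byte tuple against the move-to-front dictionary.

    Returns ('miss', literal) when no match is found (a miss bit '1'
    followed by the tuple in literal form), or
    ('match', location, match_type, literals) for a full or partial match
    (a '0' followed by the match location, the match type, and any
    unmatched bytes in literal form).
    """
    best_loc, best_type = None, None
    for loc, entry in enumerate(dictionary):
        # Match type: which of the 4 byte positions agree with this entry.
        mtype = tuple(a == b for a, b in zip(entry, tuple4))
        # A partial match requires at least two matching characters.
        if sum(mtype) >= 2 and (best_type is None or sum(mtype) > sum(best_type)):
            best_loc, best_type = loc, mtype

    if best_loc is None:
        # Miss: the tuple is transmitted literally and grows the dictionary;
        # when the dictionary is full, the last tuple is discarded.
        dictionary.insert(0, tuple4)
        del dictionary[dict_size:]
        return ('miss', tuple4)

    # Bytes that did not match are transmitted in literal form.
    literals = bytes(b for b, hit in zip(tuple4, best_type) if not hit)
    dictionary.pop(best_loc)
    dictionary.insert(0, tuple4)          # move-to-front rule
    return ('match', best_loc, best_type, literals)
```

For example, compressing the same 4-byte word twice yields a miss token followed by a full-match token at location 0.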
There are two important contributions made by the current compression and decompression work, namely improved compression rates and inherent scalability. Significant improvements in data compression rates have been achieved by sharing the computational requirement between compressors without significantly compromising the contribution made by individual compressors. The scalability feature permits future bandwidth or storage demands to be met by adding compression engines.

The XMatchPro Based Compression System
The XMatchPro algorithm uses a fixed-width dictionary of previously seen data and attempts to match the current data element with an entry in the dictionary. It works by taking a 4-byte word and trying to match this word with past data. This past data is stored in a dictionary, which is constructed from a content addressable memory. Initially all the entries in the dictionary are empty; if a full match has not occurred, 4 bytes are added to the front of the dictionary, while the rest move one position down. The larger the dictionary, the greater the number of address bits needed to identify each memory location, reducing compression performance. Since the number of bits needed to code each location address is a function of the current dictionary size, greater compression is obtained in comparison to the case where a fixed-size dictionary uses fixed address codes even when only partially full. In the XMatchPro system, the data stream to be compressed enters the compression system, where it is partitioned and routed to the compressors.

The Main Component - Content Addressable Memory
Dictionary-based schemes copy repetitive or redundant data into a lookup table (such as a CAM) and output the dictionary address as a code to replace the data. The compression architecture is based around a block of CAM to realize the dictionary. This is necessary since the search operation must be done in parallel across all the entries in the dictionary to allow high and data-independent throughput.

Fig. Conceptual view of CAM

The number of bits in a CAM word is usually large, with existing implementations ranging from 36 to 144 bits. A typical CAM employs a table size ranging between a few hundred entries and 32K entries, corresponding to an address space ranging from 7 bits to 15 bits. The length of the CAM varies with three possible values of 16, 32 or 64 tuples, trading complexity for compression. The number of tuples present in the dictionary has an important effect on compression. In principle, the larger the dictionary, the higher the probability of having a match and improving compression. On the other hand, a bigger dictionary uses more bits to code its locations, degrading compression when processing small data blocks that use only a fraction of the dictionary length available. The width of the CAM is fixed at 4 bytes/word.

A content addressable memory compares input search data against a table of stored data, and returns the address of the matching data. CAMs have single-clock-cycle throughput, making them faster than other hardware- and software-based search systems. The input to the system is the search word, which is broadcast onto the searchlines to the table of stored data. Each stored word has a matchline that indicates whether the search word and stored word are identical (the match case) or different (a mismatch case, or miss). The matchlines are fed to an encoder that generates a binary match location corresponding to the matchline that is in the match state. An encoder is used in systems where only a single match is expected. The overall function of a CAM is to take a search word and return the matching memory location.

Managing Dictionary Entries
Since the initialization of a compression CAM sets all words to zero, an input word formed of zeros would generate multiple full matches in different locations. The XMatchPro compression system simply selects the full match closest to the top. This operational mode initializes the dictionary to a state where all the words with a location address bigger than zero are declared invalid, without the need for extra logic.

III. XMATCHPRO LOSSLESS COMPRESSION SYSTEM DESIGN METHODOLOGY
The XMatchPro algorithm is efficient at compressing the small blocks of data associated with the cache- and page-based memory hierarchies found in computer systems, and it is suitable for high-performance hardware implementation. The XMatchPro hardware achieves a throughput 2-3 times greater than other high-performance hardware implementations.
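As an informal illustration of the CAM behaviour described above, the following sketch models the parallel compare and the priority selection of the full match closest to the top. The names are invented for illustration, and the real design performs all comparisons in parallel hardware in a single clock cycle rather than in a sequential loop:

```python
def cam_search(stored_words, search_word):
    """Return the matching location closest to the top, or None on a miss."""
    # One matchline per stored word, asserted where stored word == search word
    # (in hardware, all of these comparisons happen simultaneously).
    matchlines = [word == search_word for word in stored_words]
    # Priority encoder: the first asserted matchline wins. This is why an
    # all-zero, freshly initialized dictionary behaves as if only location 0
    # held a valid zero word, with no extra invalidation logic required.
    for location, hit in enumerate(matchlines):
        if hit:
            return location
    return None
```

With duplicate entries, the location closest to the top is returned, matching the dictionary-initialization behaviour described under Managing Dictionary Entries.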
The core component of the system is the XMatchPro-based compression/decompression system. The XMatchPro is a high-speed lossless dictionary-based data compressor. The XMatchPro algorithm works by taking an incoming four-byte tuple of data and attempting to fully or partially match the tuple with past data.

FUNCTIONAL DESCRIPTION
The XMatchPro algorithm maintains a dictionary of data previously seen and attempts to match the current data element with an entry in the dictionary, replacing it with a shorter code referencing the match location. Data elements that do not produce a match are transmitted in full (literally), prefixed by a single bit. Each data element is exactly 4 bytes in width and is referred to as a tuple. This feature gives a guaranteed input data rate during compression, and thus also guaranteed data rates during decompression, irrespective of the data mix. Also, the 4-byte tuple size gives an inherently higher throughput than other algorithms, which tend to operate on a byte stream. The dictionary is maintained using a move-to-front strategy, whereby the current tuple is placed at the front of the dictionary and the other tuples move down by one location as necessary to make space. The move-to-front strategy aims to exploit locality in the input data. If the dictionary becomes full, the tuple occupying the last location is simply discarded.

A full match occurs when all characters in the incoming tuple fully match a dictionary entry. A partial match occurs when at least two of the characters in the incoming tuple match exactly with a dictionary entry; the characters that do not match are transmitted literally. The use of partial matching improves the compression ratio compared with allowing only 4-byte matches, but still maintains high throughput. If neither a full nor a partial match occurs, a miss is registered and a single miss bit of '1' is transmitted, followed by the tuple itself in literal form. The only exception to this is the first tuple in any compression operation, which will always generate a miss, as the dictionary begins in an empty state; in this case no miss bit is required to prefix the tuple.

At the beginning of each compression operation, the dictionary size is reset to zero. The dictionary then grows by one location for each incoming tuple placed at the front of the dictionary, with all other entries moving down by one location. A full match does not grow the dictionary, but the move-to-front rule is still applied. This growth of the dictionary means that code words are short during the early stages of compressing a block. Because the XMatchPro algorithm allows partial matches, a decision must be made about which of the locations provides the best overall match, with the selection criterion being the shortest possible number of output bits.

Implementation of XMatchPro Based Compressor
The block diagram gives the details of the components of a single 32-bit compressor. There are three components, namely COMPARATOR, ARRAY and CAM COMPARATOR. The comparator is used to compare two 32-bit data words and to set or reset the output bit: 1 for equal and 0 for unequal. The CAM COMPARATOR searches the CAM dictionary entries for a full match of the given input data. The reason for choosing a full match is to obtain a prototype of the high-throughput XMatchPro compressor with reduced complexity and high performance. If a full match occurs, the match-hit signal is generated and the corresponding match location is given as output by the CAM comparator. If no full match occurs, the data given as input at that time is produced as output.

32 BIT COMPRESSION
The array is 64x32 bits in length. It is used to store unmatched incoming data; when new data arrives, it is compared with all the data stored in this array. If a match occurs, the corresponding match location is sent as output; otherwise the incoming data is stored in the next free location of the array and is sent as output. The last component is the CAM comparator, which sends the match location of the CAM dictionary as output if a match has occurred. This is done by getting match information as input from the comparator. If the output of the comparator goes high for any input, the match is found, and the corresponding address is retrieved and sent as output along with one bit to indicate that the match is found. If no match occurs, the incoming data is stored in the array and is sent as the output. These are the functions of the three components of the compressor.
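The best-overall-match selection rule described above (the candidate needing the fewest output bits) can be sketched as follows. The field widths used here are assumptions chosen for illustration, not the actual XMatchPro code tables:

```python
import math

def output_bits(location, mask, dict_size):
    """Approximate bits to code one match: '0' + location + type + literals."""
    loc_bits = max(1, math.ceil(math.log2(max(2, dict_size))))
    type_bits = 4                     # one flag per byte position (assumed)
    literal_bits = 8 * sum(1 for hit in mask if not hit)
    return 1 + loc_bits + type_bits + literal_bits

def best_match(candidates, dict_size):
    """Pick the (location, mask) candidate with the shortest coded output."""
    return min(candidates, key=lambda c: output_bits(c[0], c[1], dict_size))
```

Under this rule a full match always beats a partial match of the same location width, since each unmatched byte costs a further 8 literal bits.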

IV. DESIGN OF LOSSLESS COMPRESSION/DECOMPRESSION SYSTEM
Design of Compressor/Decompressor
The block diagram gives the details of the components of a single 32-bit compressor/decompressor. There are three components, namely COMPRESSOR, DECOMPRESSOR and CONTROL.

The compressor has the following components: COMPARATOR, ARRAY and CAM COMPARATOR. The comparator is used to compare two 32-bit data words and to set or reset the output bit: 1 for equal and 0 for unequal. The array is 64x32 bits in length. It is used to store unmatched incoming data; when the next new data arrives, that data is compared with all the data stored in this array. If the incoming data matches any of the data stored in the array, the comparator generates a match signal and sends it to the CAM comparator. The last component is the CAM comparator, which sends the incoming data and all the stored data in the array, one by one, to the comparator. If the output of the comparator goes high for any input, the match is found and the corresponding address (match location) is retrieved and sent as output, along with one bit to indicate that the match is found. If no match is found, the incoming data stored in the array is sent as output. These are the functions of the three components of the XMatchPro-based compressor.

The decompressor has the following components: ARRAY and PROCESSING UNIT. The array has the same function as the array unit used in the compressor, and is of the same length. The processing unit checks the incoming match-hit data: if it is 0, the data is not present in the array, so the processing unit stores the data in the array; if the match-hit data is 1, the data is present in the array, so the processing unit fetches the data from the array, with the help of the address input, and sends it to the data output.
The decompressor has the following components –                Table of Contents
Array and Processing Unit.                                            Synthesis Options Summary
                                                                      HDL Compilation
Array has the same function as that of the array unit
which is used in the Compressor. It is also of the same               Design Hierarchy Analysis
length. Processing unit checks the incoming match hit                 HDL Analysis
data and if it is 0, it indicates that the data is not                HDL Synthesis
present in the Array, so it stores the data in the Array              HDL Synthesis Report
and if the match hit data is 1, it indicates the data is              Advanced HDL Synthesis
present in the Array, then it instructs to find the data              Advanced HDL Synthesis Report
from the Array with the help of the address input and                 Low Level Synthesis
sends as output to the data out.                                       Partition ReportRESULTS
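The interaction of the three compressor components and the decompressor's processing unit can be sketched as a simple behavioral model. This is a Python sketch of the behavior described above, not the authors' VHDL; the names (Compressor, decompress) are hypothetical, and the model leaves the replacement policy for a full array unspecified, as the text does.

```python
class Compressor:
    """64-location array: on a match, emit (hit=1, match address);
    on a miss, store the word and emit (hit=0, the word itself)."""

    def __init__(self, size=64):
        self.size = size
        self.array = []                      # stores unmatched incoming words

    def step(self, word):
        if word in self.array:               # comparator + CAM comparator
            return (1, self.array.index(word))   # match hit and match location
        if len(self.array) < self.size:
            self.array.append(word)          # store the unmatched data
        return (0, word)                     # no match: send the data itself


def decompress(stream):
    """Processing unit: hit=0 -> store the literal and output it;
    hit=1 -> fetch the word from the array at the given address."""
    array, out = [], []
    for hit, payload in stream:
        if hit:
            out.append(array[payload])
        else:
            array.append(payload)
            out.append(payload)
    return out


data = [0xCAFEBABE, 0x12345678, 0xCAFEBABE, 0x12345678]
comp = Compressor()
stream = [comp.step(w) for w in data]        # repeated words become (1, addr)
assert decompress(stream) == data            # lossless round trip
```

Feeding the same words twice shows the intended behavior: the first occurrences are stored and passed through, while the repeats are replaced by their match locations, and the decompressor rebuilds its array in lockstep to recover the original data.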




Issn 2250-3005(online)                                September| 2012                                 Page 1195



COMPARATOR

COMPARATOR waveform explanation
Only when the reset signal is active low is the data compared; when it is active high, the eqout signal is zero. Here we apply 3 4 5 6 7 8 as data1 and 4 5 8 as data2. After comparing the inputs, the output signal eqout rises at the instants of data values 4 and 8, indicating a match on the output.

COMPARATOR function
If the output of the comparator goes high for any input, a match is found and the corresponding address is retrieved and sent as output, along with one bit to indicate that a match is found. If no match occurs, i.e. no matched data is found, the incoming data is stored in the array and sent as the output.

CAM

Fig. CAM data input Waveform

Fig. CAM data output Waveform

CAM waveform explanation
Only when the reset signal is active low does the CAM produce the corresponding address and the match-hit signal, raising the corresponding hit individually; when reset is active high, it produces no output. To obtain the outputs, the start signal should be in the active-low state.

CAM function
The CAM is used to send the match location of the CAM dictionary as output when a match has occurred. This is done by taking the match information as input from the comparator.

Compression

Fig. Compression of INPUT DATA waveform

Compression waveform explanation
Only when the reset1 and start1 signals are active low does the compression of data occur. Every signal reacts on the rising edge of clk1 because the design is positive-edge triggered. The data to be compressed is applied at the udata signal, and the resultant data, which is nothing but the compressed data, is obtained at the dataout signal. Whenever there is redundant data in the applied input, the matchhit signal goes to the active-high state and the corresponding address is generated on the addrout signal.

Compression function
The main intention of the compression engine is to compress the incoming content to reduce memory and to transfer the data more easily.

Decompression

Fig. Decompression of the data




Decompression waveform explanation
Only when the reset and start_de signals are active low does the decompression of data occur. Every signal reacts on the rising edge of clk1 because the design is positive-edge triggered. To retrieve the original data, the outputs of the compression engine are applied as inputs to the decompression engine.

Decompression function
The main intention of the decompression engine is to retrieve the original content.

SCHEMATIC

Compression

Fig. RTL Schematic of compression waveform of component level

Fig. RTL Schematic of compression waveform of circuit level

Fig. RTL Schematic of compression waveform of chip level

Decompression

Fig. RTL Schematic of decompression waveform of component level

Fig. RTL Schematic of decompression waveform of circuit level

Fig. RTL Schematic of decompression waveform of chip level

VI. CONCLUSION
The various modules are designed and coded in VHDL. The source codes are simulated, and waveforms are obtained for all the modules. Since the compression/decompression system uses the XMatchPro algorithm, the compression throughput is high. An improved compression ratio is achieved in the compression architecture with the least increase in latency, and high-speed throughput is achieved. The architecture provides inherent scalability for the future. The total time required to transmit compressed data is less than that for transmitting uncompressed data. This can lead to a performance benefit: the bandwidth of a link appears greater when transmitting compressed data, so more data can be transmitted in a given amount of time.
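The bandwidth benefit can be illustrated with simple arithmetic. Assume a match is encoded as a 1-bit match hit plus a 6-bit address into the 64-location array, while a miss costs the hit bit plus the 32-bit literal; this encoding is assumed for illustration and may differ from the paper's exact output format.

```python
# Illustrative arithmetic (assumed encoding, not the paper's exact format):
# a 64-entry array needs 6 address bits; every output carries 1 match-hit bit.
import math

ARRAY_SIZE = 64
WORD_BITS = 32
ADDR_BITS = math.ceil(math.log2(ARRAY_SIZE))   # 6 bits for 64 locations

def compressed_bits(n_words, hit_rate):
    """Total output bits for n_words when a fraction hit_rate matches."""
    hits = n_words * hit_rate
    misses = n_words - hits
    return hits * (1 + ADDR_BITS) + misses * (1 + WORD_BITS)

n = 1000
for rate in (0.5, 0.9):
    ratio = compressed_bits(n, rate) / (n * WORD_BITS)
    print(f"hit rate {rate:.0%}: compressed size = {ratio:.1%} of original")
```

Under these assumptions, a 50% hit rate shrinks the stream to 62.5% of its original size, and a 90% hit rate to 30%, which is why highly redundant test data compresses so effectively.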
Higher circuit densities in system-on-chip (SOC) designs have led to a drastic increase in test data volume. A larger test data size demands not only higher memory requirements but also an increase in testing time. Test data compression/decompression addresses this problem by reducing the test data volume without affecting overall system performance. This paper presented the XMatchPro algorithm, which combines the advantages of dictionary-based and bit-mask-based techniques. Our test compression technique uses dictionary and bit-mask selection methods to significantly reduce testing time and memory requirements. We have applied the algorithm to various benchmarks and compared the results with existing test compression/decompression techniques. Our approach achieves a compression efficiency of 92% and generates up to 90% improvement in decompression efficiency compared to other techniques, without introducing any additional performance or area overhead.
                                                                  [4].      F. Hsu, K. Butler, and J. Patel, “A case
Vii. Future Scope                                                           study on the implementation of Illinois scan
There is a potential of doubling the performance                            architecture,” in Proc. Int. Test Conf., 2001,
of storage / communication system by increasing                             pp. 538–547.
the available transmission bandwidth and data                     [5].      M. Ros and P. Sutton, “A hamming distance
capacity with minimum investment. It can be applied                         based VLIW/EPIC code compression
in Computer systems, High performance storage                               technique,” in Proc. Compilers, Arch.,
devices. This can be applied to 64 bit also in order to                     Synth. Embed. Syst., 2004, pp. 132–139.
increase the data transfer rate. There is a chance to             [6].      Seong and P. Mishra, “Bitmask-based code
easy migration to ASIC technology which enables 3-                          compression for embed systems,” IEEE
5 times increase in performance rate. There is a                            Trans. Comput.-Aided Des. Integr. Circuits
chance to develop an engine which does not require                          Syst., vol. 27, no. 4, pp. 673–685, Apr.
any external components and supports operation on                           2008.
blocked      data. Full-duplex   operation     enables            [7].      M.-E. N. A. Jas, J. Ghosh-Dastidar, and N.
simultaneous compression /decompression for a                               Touba, “An efficient test vector compression
combined performance of high data rates, which also                         scheme using selective Huffman coding,”
enables self-checking test mode using CRC (Cyclic                           IEEE Trans. Comput.-Aided Des. Integr.
Redundancy Check). High-performance coprocessor-                            Circuits Syst., vol. 22, no. 6, pp.797–806,
style interface should be possible if synchronization                       Jun. 2003.
is achieved. Chance to improve the Compression                    [8].      A. Jas and N. Touba, “Test vector
ratio compare with proposed work.                                           decompression using cyclical scan chains
                                                                            and its application to testing core based
VIII. Acknowledgements                                                      design,” in Proc. Int.Test Conf., 1998, pp.
The authors would like to thank everyone, whoever                           458–464.
remained a great source of help and inspirations in               [9].      A. Chandra and K. Chakrabarty, “System on
this humble presentation. The authors would like to                         a chip test data compression and
thank Pragathi Engineering College Management for                           decompression architectures based on
providing necessary facilities to carry out this work.                      Golomb codes,” IEEE Trans. Comput.-
                                                                            Aided Des. Integr. Circuits Syst., vol. 20,
                                                                            no. 3, pp.355–368, Mar. 2001.










First phase report on "ANALYZING THE EFFECTIVENESS OF THE ADVANCED ENCRYPTION...Nikhil Jain
 
Robust video data hiding using forbidden zone
Robust video data hiding using forbidden zoneRobust video data hiding using forbidden zone
Robust video data hiding using forbidden zoneMadonna Ranjini
 
IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS Designing an efficient image encr...
IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS  Designing an efficient image encr...IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS  Designing an efficient image encr...
IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS Designing an efficient image encr...IEEEBEBTECHSTUDENTPROJECTS
 
Hardback solution to accelerate multimedia computation through mgp in cmp
Hardback solution to accelerate multimedia computation through mgp in cmpHardback solution to accelerate multimedia computation through mgp in cmp
Hardback solution to accelerate multimedia computation through mgp in cmpeSAT Publishing House
 
seed block algorithm
seed block algorithmseed block algorithm
seed block algorithmDipak Badhe
 
A Novel Approach for Compressing Surveillance System Videos
A Novel Approach for Compressing Surveillance System VideosA Novel Approach for Compressing Surveillance System Videos
A Novel Approach for Compressing Surveillance System VideosINFOGAIN PUBLICATION
 
Data compression, data security, and machine learning
Data compression, data security, and machine learningData compression, data security, and machine learning
Data compression, data security, and machine learningChris Huang
 
Application scenarios in streaming oriented embedded-system design
Application scenarios in streaming oriented embedded-system designApplication scenarios in streaming oriented embedded-system design
Application scenarios in streaming oriented embedded-system designMr. Chanuwan
 
First phase slide presentation on "ANALYZING THE EFFECTIVENESS OF THE ADVANCE...
First phase slide presentation on "ANALYZING THE EFFECTIVENESS OF THE ADVANCE...First phase slide presentation on "ANALYZING THE EFFECTIVENESS OF THE ADVANCE...
First phase slide presentation on "ANALYZING THE EFFECTIVENESS OF THE ADVANCE...Nikhil Jain
 
Design of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABDesign of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABIJEEE
 
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS...
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS...JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS...
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS...ijma
 

Similar to IJCER (www.ijceronline.com) International Journal of computational Engineering research (20)

High performance operating system controlled memory compression
High performance operating system controlled memory compressionHigh performance operating system controlled memory compression
High performance operating system controlled memory compression
 
Chrome server2 print_http_www_uni_mannheim_de_acm97_papers_soderquist_m_13736...
Chrome server2 print_http_www_uni_mannheim_de_acm97_papers_soderquist_m_13736...Chrome server2 print_http_www_uni_mannheim_de_acm97_papers_soderquist_m_13736...
Chrome server2 print_http_www_uni_mannheim_de_acm97_papers_soderquist_m_13736...
 
Robust Fault Tolerance in Content Addressable Memory Interface
Robust Fault Tolerance in Content Addressable Memory InterfaceRobust Fault Tolerance in Content Addressable Memory Interface
Robust Fault Tolerance in Content Addressable Memory Interface
 
Affable Compression through Lossless Column-Oriented Huffman Coding Technique
Affable Compression through Lossless Column-Oriented Huffman Coding TechniqueAffable Compression through Lossless Column-Oriented Huffman Coding Technique
Affable Compression through Lossless Column-Oriented Huffman Coding Technique
 
Real time database compression optimization using iterative length compressio...
Real time database compression optimization using iterative length compressio...Real time database compression optimization using iterative length compressio...
Real time database compression optimization using iterative length compressio...
 
REAL TIME DATABASE COMPRESSION OPTIMIZATION USING ITERATIVE LENGTH COMPRESSIO...
REAL TIME DATABASE COMPRESSION OPTIMIZATION USING ITERATIVE LENGTH COMPRESSIO...REAL TIME DATABASE COMPRESSION OPTIMIZATION USING ITERATIVE LENGTH COMPRESSIO...
REAL TIME DATABASE COMPRESSION OPTIMIZATION USING ITERATIVE LENGTH COMPRESSIO...
 
JPM1404 Designing an Efficient Image Encryption-Then-Compression System via...
JPM1404   Designing an Efficient Image Encryption-Then-Compression System via...JPM1404   Designing an Efficient Image Encryption-Then-Compression System via...
JPM1404 Designing an Efficient Image Encryption-Then-Compression System via...
 
Image Compression Through Combination Advantages From Existing Techniques
Image Compression Through Combination Advantages From Existing TechniquesImage Compression Through Combination Advantages From Existing Techniques
Image Compression Through Combination Advantages From Existing Techniques
 
First phase report on "ANALYZING THE EFFECTIVENESS OF THE ADVANCED ENCRYPTION...
First phase report on "ANALYZING THE EFFECTIVENESS OF THE ADVANCED ENCRYPTION...First phase report on "ANALYZING THE EFFECTIVENESS OF THE ADVANCED ENCRYPTION...
First phase report on "ANALYZING THE EFFECTIVENESS OF THE ADVANCED ENCRYPTION...
 
Robust video data hiding using forbidden zone
Robust video data hiding using forbidden zoneRobust video data hiding using forbidden zone
Robust video data hiding using forbidden zone
 
IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS Designing an efficient image encr...
IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS  Designing an efficient image encr...IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS  Designing an efficient image encr...
IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS Designing an efficient image encr...
 
Hardback solution to accelerate multimedia computation through mgp in cmp
Hardback solution to accelerate multimedia computation through mgp in cmpHardback solution to accelerate multimedia computation through mgp in cmp
Hardback solution to accelerate multimedia computation through mgp in cmp
 
seed block algorithm
seed block algorithmseed block algorithm
seed block algorithm
 
A Novel Approach for Compressing Surveillance System Videos
A Novel Approach for Compressing Surveillance System VideosA Novel Approach for Compressing Surveillance System Videos
A Novel Approach for Compressing Surveillance System Videos
 
Data compression, data security, and machine learning
Data compression, data security, and machine learningData compression, data security, and machine learning
Data compression, data security, and machine learning
 
CLOUD BIOINFORMATICS Part1
 CLOUD BIOINFORMATICS Part1 CLOUD BIOINFORMATICS Part1
CLOUD BIOINFORMATICS Part1
 
Application scenarios in streaming oriented embedded-system design
Application scenarios in streaming oriented embedded-system designApplication scenarios in streaming oriented embedded-system design
Application scenarios in streaming oriented embedded-system design
 
First phase slide presentation on "ANALYZING THE EFFECTIVENESS OF THE ADVANCE...
First phase slide presentation on "ANALYZING THE EFFECTIVENESS OF THE ADVANCE...First phase slide presentation on "ANALYZING THE EFFECTIVENESS OF THE ADVANCE...
First phase slide presentation on "ANALYZING THE EFFECTIVENESS OF THE ADVANCE...
 
Design of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABDesign of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLAB
 
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS...
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS...JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS...
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS...
 

Recently uploaded

Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxnull - The Open Security Community
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Neo4j
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 

Recently uploaded (20)

Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 

International Journal of Computational Engineering Research (ijceronline.com), Vol. 2, Issue 5

Design of Test Data Compressor/Decompressor Using XMatchPro Method

C. Suneetha*1, V.V.S.V.S. Ramachandram*2
1 Research Scholar, Dept. of E.C.E., Pragathi Engineering College, JNTU-K, A.P., India
2 Associate Professor, Dept. of E.C.E., Pragathi Engineering College, E.G. Dist., A.P., India

Abstract
Higher circuit densities in system-on-chip (SOC) designs have led to a drastic increase in test data volume. Larger test data size demands not only more memory but also longer testing time. Test data compression/decompression addresses this problem by reducing the test data volume without affecting overall system performance. This paper presents the XMatchPro algorithm, which combines the advantages of dictionary-based and bit-mask-based techniques. Our test compression technique uses dictionary and bit-mask selection methods to significantly reduce testing time and memory requirements. We have applied the algorithm to various benchmarks and compared the results with existing test compression/decompression techniques. Our approach achieves a compression efficiency of 92%, and up to 90% improvement in decompression efficiency compared to other techniques, without introducing any additional performance or area overhead.

Keywords - XMatchPro, decompression, test compression, dictionary selection, SOC.

I. INTRODUCTION
1.1. Objective
With the increase in silicon densities, it is becoming feasible for compression systems to be implemented on chip. A system with a distributed memory architecture is based on having data compression and decompression engines working independently on different data at the same time. This data is stored in memory distributed to each processor. The objective of the project is to design a lossless data compression system which operates at high speed to achieve a high compression rate. By using the architecture of compressors, the data compression rates are significantly improved, and the architecture is inherently scalable. The main parts of the system are the XMatchPro-based data compressors and the control blocks providing control signals for the data compressors, allowing appropriate control of the routing of data into and from the system. Each data compressor can process four bytes of data into and from a block of data in every clock cycle. The data entering the system needs to be clocked in at a rate of 4 bytes per clock cycle, to ensure that adequate data is present for all compressors to process rather than leaving them idle.

1.2. Goal of the Thesis
To achieve higher decompression rates using a compression/decompression architecture with the least increase in latency.

1.3. Compression Techniques
At present there is an insatiable demand for ever-greater bandwidth in communication networks and ever-greater storage capacity in computer systems. This has led to the need for efficient compression techniques. Compression is the process required either to reduce the volume of information to be transmitted (text, fax and images) or to reduce the bandwidth required for its transmission (speech, audio and video). The compression technique is first applied to the source information prior to its transmission. Compression algorithms can be classified into two types, namely:
- Lossless compression
- Lossy compression

1.3.1. Lossless Compression
In a lossless compression algorithm, the aim is to reduce the amount of source information to be transmitted in such a way that, when the

ISSN 2250-3005 (online) | September 2012 | Page 1190
compressed information is decompressed, there is no loss of information. Lossless compression is said, therefore, to be reversible; that is, data is not altered or lost in the process of compression or decompression, and decompression generates an exact replica of the original object. The various lossless compression techniques are:
- Packbits encoding
- CCITT Group 3 1D
- CCITT Group 3 2D
- Lempel-Ziv and Welch algorithm (LZW)
- Huffman
- Arithmetic

Example applications of lossless compression are transferring data over a network as a text file (since, in such applications, it is normally imperative that no part of the source information is lost during either the compression or decompression operations), file storage systems (tapes, hard disk drives, solid state storage, file servers) and communication networks (LAN, WAN, wireless).

1.3.2. Lossy Compression
The aim of lossy compression algorithms is normally not to reproduce an exact copy of the source information after decompression, but rather a version of it that is perceived by the recipient as a true copy. The lossy compression algorithms are:
- JPEG (Joint Photographic Experts Group)
- MPEG (Moving Picture Experts Group)
- CCITT H.261 (Px64)

Example applications of lossy compression are the transfer of digitized images and audio and video streams. In such cases, the sensitivity of the human eye or ear is such that any fine details that may be missing from the original source signal after decompression are not detectable.

1.3.3. Text Compression
There are three different types of text - unformatted, formatted and hypertext - and all are represented as strings of characters selected from a defined set. The compression algorithm associated with text must be lossless, since the loss of just a single character could modify the meaning of a complete string. Text compression is restricted to the use of entropy encoding and, in practice, statistical encoding methods. There are two types of statistical encoding methods used with text: one uses a single character as the basis for deriving an optimum set of code words, and the other uses variable-length strings of characters. Two examples of the former are the Huffman and arithmetic coding algorithms, and an example of the latter is the Lempel-Ziv (LZ) algorithm. The majority of work on hardware approaches to lossless data compression has used an adapted form of the dictionary-based Lempel-Ziv algorithm, in which a large number of simple processing elements are arranged in a systolic array [1], [2], [3], [4].

II. PREVIOUS WORK ON LOSSLESS COMPRESSION METHODS
A second Lempel-Ziv method used a content addressable memory (CAM) capable of performing a complete dictionary search in one clock cycle [5], [6], [7]. The search for the most common string in the dictionary (normally the most computationally expensive operation in the Lempel-Ziv algorithm) can be performed by the CAM in a single clock cycle, while the systolic array method uses a much slower deep-pipelining technique to implement its dictionary search. However, compared to the CAM solution, the systolic array method has advantages in terms of reduced hardware cost and lower power consumption, which may be more important criteria in some situations than faster dictionary searching. In [8], the authors show that hardware main memory data compression is both feasible and worthwhile; they also describe the design and implementation of a novel compression method, the XMatchPro algorithm, and exhibit the substantial impact such memory compression has on overall system performance. The adaptation of compression code for parallel implementation was investigated by Jiang and Jones [9], who recommended the use of a processing array arranged in a tree-like structure. Although compression can be implemented in this manner, implementing the decompressor's search and decode stages in hardware would greatly increase the complexity of the design, and it is likely that these aspects would need to be implemented sequentially. An FPGA implementation of a binary arithmetic coding architecture able to process 8 bits per clock cycle, compared to the standard 1 bit per cycle, is described by Stefo et al. [10]. Although little research has been performed on architectures involving several independent compression units working in a concurrent, cooperative manner, IBM has introduced the MXT chip [11], which has four independent compression engines operating on a shared memory area. The four Lempel-Ziv compression engines are used to provide data throughput sufficient for memory compression in computer servers. Adaptation of software compression algorithms to make use of multiple-CPU systems was demonstrated by the research of Penhorn [12] and Simpson and Sabharwal [13].
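As a concrete illustration of the single-character statistical coding mentioned in Section 1.3.3, the sketch below derives a Huffman code in Python. This is an illustrative software model only, not part of the XMatchPro hardware, and the function name `huffman_code` is our own; it assumes the input contains at least two distinct characters.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Derive a Huffman code for `text`: the more frequent a character,
    the shorter its codeword. Each heap entry carries the subtree's total
    frequency, a unique tiebreaker, and the codewords built so far."""
    freq = Counter(text)
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}   # prefix left with 0
        merged.update({s: "1" + c for s, c in right.items()})  # right with 1
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_code("aaaabbc")
print(codes)   # 'a' (most frequent) gets the shortest codeword
```

Because the resulting code is prefix-free, the encoded bit stream can be decoded unambiguously, which is what makes this scheme lossless.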
Penhorn used two CPUs to compress data using a technique based on the Lempel-Ziv algorithm and showed that useful compression-rate improvements can be achieved, but only at the cost of increasing the learning time for the dictionary. Simpson and Sabharwal described the software implementation of a compression system for a multiprocessor system based on the parallel architecture developed by Gonzalez Smith and Storer [14].

Statistical Methods
Statistical modeling of a lossless data compression system is based on assigning values to events depending on their probability: the higher the value, the higher the probability. The accuracy with which this frequency distribution reflects reality determines the efficiency of the model. In Markov modeling, predictions are made based on the symbols that precede the current symbol. Statistical methods in hardware are restricted either to simple higher-order modeling using binary alphabets, which limits speed, or to simple multi-symbol alphabets using zeroth-order models, which limits compression. Binary alphabets limit speed because only a few bits (typically a single bit) are processed in each cycle, while zeroth-order models limit compression because they can only provide an inexact representation of the statistical properties of the data source.

Dictionary Methods
Dictionary methods try to replace a symbol or group of symbols by a dictionary location code. Some dictionary-based techniques use simple uniform binary codes to process the information supplied. Both software- and hardware-based dictionary models achieve good throughput and competitive compression. The UNIX utility 'compress' uses the Lempel-Ziv-2 (LZ2) algorithm, and the Data Compression Lempel-Ziv (DCLZ) family of compressors, initially invented by Hewlett-Packard [16] and currently being developed by AHA [17], [18], also uses LZ2 derivatives. Bunton and Borriello present another LZ2 implementation in [19] that improves on the DCLZ method; it uses a tag attached to each dictionary location to identify which node should be eliminated once the dictionary becomes full.

XMatchPro-Based System
The lossless data compression system is a derivative of the XMatchPro algorithm, which originates from previous research of the authors [15] and from advances in FPGA technology. The flexibility provided by this technology is of great interest, since the chip can easily be adapted to the requirements of a particular application. The drawbacks of some of the previous methods are overcome by using the XMatchPro algorithm in the design. The objective is then to obtain better compression ratios while still maintaining a high throughput, so that the compression/decompression processes do not slow the original system down.

Usage of the XMatchPro Algorithm
The lossless data compression system designed here uses the XMatchPro algorithm. The XMatchPro algorithm uses a fixed-width dictionary of previously seen data and attempts to match the current data element with an entry in the dictionary. It works by taking a 4-byte word and trying to match, or partially match, this word with past data. This past data is stored in a dictionary, which is constructed from a content addressable memory. As each entry is 4 bytes wide, several types of matches are possible. If none of the bytes match any data present in the dictionary, they are transmitted with an additional miss bit. If all the bytes are matched, then the match location and match type are coded and transmitted, and the match is moved to the front of the dictionary. The dictionary is maintained using a move-to-front strategy, whereby a new tuple is placed at the front of the dictionary while the rest move down one position. When the dictionary becomes full, the tuple in the last position is discarded, leaving space for a new one. The coding function for a match is required to code several fields, as follows.
A zero followed by:
1) Match location: the binary code associated with the matching location.
2) Match type: indicates which bytes of the incoming tuple have matched.
3) Characters that did not match, transmitted in literal form.
A description of the XMatchPro algorithm in pseudo-code is given in the figure below.

Pseudo Code for the XMatchPro Algorithm
With the increase in silicon densities, it is becoming feasible for XMatchPros to be implemented on a single chip. A system with a distributed memory architecture is based on having data compression and decompression engines working independently on different data at the same time. This data is stored in memory distributed to each processor. There are several approaches in which data can be routed to and from the compressors, and these affect the speed, compression and complexity of the system. Lossless compression removes redundant information from the data while it is transmitted or before it is stored in memory; lossless decompression reintroduces the redundant information to recover fully the original data. There are two important contributions made by the current
Significant improvements in data compression rates have been achieved by sharing the computational requirement between compressors without significantly compromising the contribution made by individual compressors. The scalability feature permits future bandwidth or storage demands to be met by adding additional compression engines.

The XMatchPro Based Compression System
The XMatchPro algorithm uses a fixed-width dictionary of previously seen data and attempts to match the current data element with an entry in the dictionary. It works by taking a 4-byte word and trying to match this word with past data. This past data is stored in a dictionary, which is constructed from a content addressable memory. Initially all the entries in the dictionary are empty, and 4 bytes are added to the front of the dictionary, while the rest move one position down, if a full match has not occurred. The larger the dictionary, the greater the number of address bits needed to identify each memory location, reducing compression performance. Since the number of bits needed to code each location address is a function of the dictionary size, greater compression is obtained in comparison to the case where a fixed-size dictionary uses fixed address codes for a partially full dictionary. In the XMatchPro system, the data stream to be compressed enters the compression system, where it is partitioned and routed to the compressors.

The Main Component - Content Addressable Memory
Dictionary-based schemes copy repetitive or redundant data into a lookup table (such as a CAM) and output the dictionary address as a code to replace the data. The compression architecture is based around a block of CAM to realize the dictionary. This is necessary since the search operation must be done in parallel in all the entries in the dictionary to allow high and data-independent throughput.

The number of bits in a CAM word is usually large, with existing implementations ranging from 36 to 144 bits. A typical CAM employs a table size ranging between a few hundred entries and 32K entries, corresponding to an address space ranging from 7 bits to 15 bits. The length of the CAM varies with three possible values of 16, 32 or 64 tuples, trading complexity for compression. The number of tuples present in the dictionary has an important effect on compression. In principle, the larger the dictionary, the higher the probability of having a match and improving compression. On the other hand, a bigger dictionary uses more bits to code its locations, degrading compression when processing small data blocks that only use a fraction of the dictionary length available. The width of the CAM is fixed at 4 bytes/word.

A content addressable memory compares input search data against a table of stored data and returns the address of the matching data. CAMs have a single-clock-cycle throughput, making them faster than other hardware- and software-based search systems. The input to the system is the search word, which is broadcast onto the searchlines to the table of stored data. Each stored word has a matchline that indicates whether the search word and stored word are identical (the match case) or are different (a mismatch case, or miss). The matchlines are fed to an encoder that generates a binary match location corresponding to the matchline that is in the match state. An encoder is used in systems where only a single match is expected. The overall function of a CAM is to take a search word and return the matching memory location.

Fig. Conceptual view of CAM

Managing Dictionary Entries
Since the initialization of a compression CAM sets all words to zero, a possible input word formed by zeros would generate multiple full matches in different locations. The XMatchPro compression system simply selects the full match closest to the top. This operational mode initializes the dictionary to a state where all the words with a location address bigger than zero are declared invalid, without the need for extra logic.

III. XMatchPro Lossless Compression System
DESIGN METHODOLOGY
The XMatchPro algorithm is efficient at compressing the small blocks of data necessary with the cache- and page-based memory hierarchies found in computer systems. It is suitable for high-performance hardware implementation. The XMatchPro hardware achieves a throughput 2-3 times greater than other high-performance hardware implementations. The core component of the system is the XMatchPro based compression/decompression system.
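The CAM search behaviour described above (a search word broadcast to all entries, matchlines, and an encoder that reports the match closest to the top) can be modelled behaviourally. This Python sketch is illustrative only; a real CAM compares all entries in a single clock cycle, whereas the loop below is just a functional stand-in:

```python
def cam_search(cam, search_word):
    """Behavioural model of the dictionary CAM: compare the search word
    against every stored word (in hardware this happens in parallel in
    one cycle) and encode the first asserted matchline.
    Returns (match_hit, match_location)."""
    matchlines = [stored == search_word for stored in cam]
    for location, hit in enumerate(matchlines):
        if hit:                 # encoder: the match closest to the top wins
            return True, location
    return False, None

cam = [b"AAAA", b"BBBB", b"AAAA"]   # duplicate entries are possible
print(cam_search(cam, b"AAAA"))     # (True, 0) -- closest to the top is selected
print(cam_search(cam, b"ZZZZ"))     # (False, None)
```

Selecting the match closest to the top is exactly the rule the text uses to make all-zero initialization entries harmless without extra logic.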
The XMatchPro is a high-speed lossless dictionary-based data compressor. The XMatchPro algorithm works by taking an incoming four-byte tuple of data and attempting to match the tuple, fully or partially, with the past data.

FUNCTIONAL DESCRIPTION
The XMatchPro algorithm maintains a dictionary of data previously seen and attempts to match the current data element with an entry in the dictionary, replacing it with a shorter code referencing the match location. Data elements that do not produce a match are transmitted in full (literally), prefixed by a single bit. Each data element is exactly 4 bytes in width and is referred to as a tuple. This feature gives a guaranteed input data rate during compression and thus also guaranteed data rates during decompression, irrespective of the data mix. Also, the 4-byte tuple gives an inherently higher throughput than other algorithms, which tend to operate on a byte stream.

The dictionary is maintained using a move-to-front strategy, whereby the current tuple is placed at the front of the dictionary and the other tuples move down by one location as necessary to make space. The move-to-front strategy aims to exploit locality in the input data. If the dictionary becomes full, the tuple occupying the last location is simply discarded.

A full match occurs when all characters in the incoming tuple fully match a dictionary entry. A partial match occurs when at least any two of the characters in the incoming tuple match exactly with a dictionary entry, with the characters that do not match being transmitted literally. The use of partial matching improves the compression ratio when compared with allowing only 4-byte matches, but still maintains high throughput. If neither a full nor a partial match occurs, then a miss is registered and a single miss bit of '1' is transmitted, followed by the tuple itself in literal form. The only exception to this is the first tuple in any compression operation, which will always generate a miss as the dictionary begins in an empty state. In this case no miss bit is required to prefix the tuple.

At the beginning of each compression operation, the dictionary size is reset to zero. The dictionary then grows by one location for each incoming tuple placed at the front of the dictionary, with all other entries in the dictionary moving down by one location. A full match does not grow the dictionary, but the move-to-front rule is still applied. This growth of the dictionary means that code words are short during the early stages of compressing a block. Because the XMatchPro algorithm allows partial matches, a decision must be made about which of the match locations provides the best overall match, the selection criterion being the shortest possible number of output bits.

Implementation of the XMatchPro Based Compressor
The block diagram gives the details of the components of a single 32-bit compressor. There are three components, namely COMPARATOR, ARRAY and CAMCOMPARATOR. The comparator is used to compare two 32-bit data words and to set or reset the output bit as 1 for equal and 0 for unequal. The CAM comparator searches the CAM dictionary entries for a full match of the given input data. The reason for choosing a full match is to get a prototype of the high-throughput XMatchPro compressor with reduced complexity and high performance. If a full match occurs, the match-hit signal is generated and the corresponding match location is given as output by the CAM comparator. If no full match occurs, the data given as input at that time is produced as output.

32 BIT COMPRESSION

The array is 64x32 bits in length. It is used to store the unmatched incoming data, and when new data arrives, the incoming data is compared with all the data stored in this array. If a match occurs, the corresponding match location is sent as output; otherwise the incoming data is stored in the next free location of the array and is sent as output. The last component is the CAM comparator, which is used to send the match location of the CAM dictionary as output if a match has occurred. This is done by getting match information as input from the comparator. If the output of the comparator goes high for any input, the match is found and the corresponding address is retrieved and sent as output, along with one bit to indicate that a match is found. If instead no match occurs, or no matched data is found, the incoming data is stored in the array and sent as the output. These are the functions of the three components of the compressor.
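The full/partial/miss classification above can be summarised in a small model. This sketch makes simplifying assumptions: it scores candidates by the number of matching bytes (a stand-in for the "shortest possible number of output bits" criterion in the text), and the function name is illustrative:

```python
def classify_match(dictionary, tuple_word):
    """Classify an incoming 4-byte tuple against the dictionary:
    'full' if all four bytes match an entry, 'partial' if at least two
    byte positions match, otherwise 'miss'.  Among partial candidates,
    the entry with the most matching bytes (fewest literals to send)
    is preferred, ties going to the location closest to the top."""
    best = ("miss", None, 0)
    for location, entry in enumerate(dictionary):
        matched = sum(a == b for a, b in zip(tuple_word, entry))
        if matched == 4:
            return ("full", location, 4)      # full match: best possible
        if matched >= 2 and matched > best[2]:
            best = ("partial", location, matched)
    return best

d = [b"ABCD", b"WXYZ"]
print(classify_match(d, b"ABCD"))   # ('full', 0, 4)
print(classify_match(d, b"ABZZ"))   # ('partial', 0, 2) -- two bytes literal
print(classify_match(d, b"QRST"))   # ('miss', None, 0) -- sent in literal form
```

The real selector weighs the coded length of the match-type field as well; counting matched bytes is only a first-order approximation of that cost.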
IV. Design of Lossless Compression/Decompression System
DESIGN OF COMPRESSOR / DECOMPRESSOR
The block diagram gives the details of the components of a single 32-bit compressor/decompressor. There are three components, namely COMPRESSOR, DECOMPRESSOR and CONTROL. The compressor has the following components: COMPARATOR, ARRAY and CAMCOMPARATOR.

The comparator is used to compare two 32-bit data words and to set or reset the output bit as 1 for equal and 0 for unequal.

The array is 64x32 bits in length. It is used to store the unmatched incoming data, and when the next new data arrives, that data is compared with all the data stored in this array. If the incoming data matches any of the data stored in the array, the comparator generates a match signal and sends it to the CAM comparator.

The last component is the CAM comparator, which is used to send the incoming data and all the stored data in the array, one by one, to the comparator. If the output of the comparator goes high for any input, the match is found and the corresponding address (match location) is retrieved and sent as output, along with one bit to indicate that the match is found. If instead no match is found, the incoming data stored in the array is sent as output. These are the functions of the three components of the XMatchPro based compressor.

The decompressor has the following components: ARRAY and PROCESSING UNIT. The array has the same function as the array unit used in the compressor, and it is also of the same length. The processing unit checks the incoming match-hit data: if it is 0, it indicates that the data is not present in the array, so it stores the data in the array; if the match-hit data is 1, it indicates that the data is present in the array, so it retrieves the data from the array with the help of the address input and sends it to the data out.

Fig. Block Diagram of 32 bit Compressor/Decompressor

The control has an input bit called C/D, i.e., Compression/Decompression, which indicates whether compression or decompression has to be done. If it has the value 0, compression is started; when the value is 1, decompression is done.

V. Result Analysis

1. SYNTHESIS REPORT
Release 8.2i - xst I.31
Copyright (c) 1995-2006 Xilinx, Inc. All rights reserved.
--> Parameter TMPDIR set to ./xst/projnav.tmp
CPU : 0.00 / 0.17 s | Elapsed : 0.00 / 0.00 s
--> Parameter xsthdpdir set to ./xst
CPU : 0.00 / 0.17 s | Elapsed : 0.00 / 0.00 s
--> Reading design: ds.prj

Table of Contents
 Synthesis Options Summary
 HDL Compilation
 Design Hierarchy Analysis
 HDL Analysis
 HDL Synthesis
 HDL Synthesis Report
 Advanced HDL Synthesis
 Advanced HDL Synthesis Report
 Low Level Synthesis
 Partition Report

RESULTS
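Before turning to the simulated waveforms, the interplay of the compressor and decompressor described in Section IV can be illustrated end-to-end. This is a behavioural sketch of the full-match-only prototype, not the VHDL design; the 64-entry array size follows the text, while the function names and code tuples are illustrative:

```python
def compress(tuples, size=64):
    """Full-match-only prototype compressor: each tuple is emitted as
    ('hit', location) when it is already in the array (matchhit plus
    addrout), or as ('literal', tuple) after being stored in the next
    free location."""
    array, out = [], []
    for t in tuples:
        if t in array:
            out.append(("hit", array.index(t)))
        else:
            if len(array) < size:
                array.append(t)        # store in the next free location
            out.append(("literal", t))
    return out

def decompress(codes, size=64):
    """Rebuilds the identical array from the literals it sees, so a
    'hit' code resolves back to the original tuple by address lookup."""
    array, out = [], []
    for kind, value in codes:
        if kind == "hit":
            out.append(array[value])
        else:
            if len(array) < size:
                array.append(value)
            out.append(value)
    return out

data = [b"ABCD", b"EFGH", b"ABCD", b"ABCD"]
codes = compress(data)
assert decompress(codes) == data       # lossless round trip
print(codes)  # [('literal', b'ABCD'), ('literal', b'EFGH'), ('hit', 0), ('hit', 0)]
```

Because the decompressor reconstructs the same array state as the compressor, only the match-hit bit and address ever need to be transmitted for repeated data, which is the source of the compression.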
COMPARATOR

COMPARATOR waveform explanation
Only when the reset signal is active low is the data compared; otherwise, i.e. when it is active high, the eqout signal is zero. Here we apply 3 4 5 6 7 8 as data1 and 4 5 8 as data2. After comparing the inputs, the output signal eqout rises at the data 4 and 8 instants, indicating the matches.

COMPARATOR function
If the output of the comparator goes high for any input, the match is found and the corresponding address is retrieved and sent as output, along with one bit to indicate that a match is found. If instead no match occurs, or no matched data is found, the incoming data is stored in the array and sent as the output.

CAM

Fig. CAM data input Waveform

Fig. CAM data output Waveform

CAM waveform explanation
Only when the reset signal is active low does the CAM produce the corresponding address and the match-hit signal, by individually raising the corresponding hit; otherwise, i.e. when it is active high, it does not produce any signal. In order to obtain the outputs, the start signal should be in the active low state.

CAM function
The CAM is used to send the match location of the CAM dictionary as output if a match has occurred. This is done by getting match information as input from the comparator.

Compression

Fig. Compression of INPUT DATA waveform

Compression waveform explanation
Only when the reset1 and start1 signals are active low does the compression of data occur. Every signal reacts at the clk1 rising edge only, because the design is positive-edge triggered. The data to be compressed is applied at the udata signal, and the resultant data is obtained at the dataout signal, which is the compressed data. Whenever there is redundant data in the applied data, the matchhit signal goes to the active high state, generating the corresponding address signal addrout.

Compression function
The compression engine's main intention is to compress the incoming content for memory reduction and easy transfer of the data.

Decompression

Fig. Decompression of the data

Decompression waveform explanation
Only when the reset and start_de signals are active low does the decompression of data occur. Every signal reacts at the clk1 rising edge only, because the design is positive-edge triggered. To retrieve the original data, the outputs of the compressor engine have to be applied as inputs to the decompression engine.

Decompression function
The decompression engine's main intention is to retrieve the original content.

SCHEMATIC

Compression

Fig. RTL Schematic of compression waveform of component level

Fig. RTL Schematic of compression waveform of circuit level

Fig. RTL Schematic of compression waveform of chip level

Fig. RTL Schematic of decompression waveform of component level

Fig. RTL Schematic of decompression waveform of circuit level

Fig. RTL Schematic of decompression waveform of chip level

VI. CONCLUSION
The various modules are designed and coded using VHDL. The source codes are simulated, and the waveforms are obtained for all the modules. Since the compression/decompression system uses the XMatchPro algorithm, the compression throughput is high. An improved compression ratio is achieved in the compression architecture with the least increase in latency, and high-speed throughput is achieved. The architecture provides inherent scalability for the future. The total time required to transmit compressed data is less than that of transmitting uncompressed data. This can lead to a performance benefit, as the bandwidth of a link appears greater when transmitting compressed data and hence more data can be transmitted in a given
amount of time. Higher circuit densities in system-on-chip (SOC) designs have led to a drastic increase in test data volume. Larger test data size demands not only higher memory requirements, but also an increase in testing time. Test data compression/decompression addresses this problem by reducing the test data volume without affecting the overall system performance. This paper presented the XMatchPro algorithm, which combines the advantages of dictionary-based and bit-mask-based techniques. Our test compression technique used the dictionary and bit mask selection methods to significantly reduce the testing time and memory requirements. We have applied our algorithm to various benchmarks and compared our results with existing test compression/decompression techniques. Our approach results in a compression efficiency of 92%. It also generates up to 90% improvement in decompression efficiency compared to other techniques, without introducing any additional performance or area overhead.

VII. Future Scope
There is a potential of doubling the performance of storage/communication systems by increasing the available transmission bandwidth and data capacity with minimum investment. The approach can be applied in computer systems and high-performance storage devices, and it can be extended to 64 bits in order to increase the data transfer rate. Easy migration to ASIC technology is possible, enabling a 3-5 times increase in performance. An engine could be developed which does not require any external components and supports operation on blocked data. Full-duplex operation enables simultaneous compression/decompression for a combined performance of high data rates, which also enables a self-checking test mode using CRC (Cyclic Redundancy Check). A high-performance coprocessor-style interface should be possible if synchronization is achieved. There is also scope to improve the compression ratio beyond the proposed work.

VIII. Acknowledgements
The authors would like to thank everyone who remained a great source of help and inspiration in this humble presentation. The authors would like to thank the Pragathi Engineering College Management for providing the necessary facilities to carry out this work.

References
[1]. Li, K. Chakrabarty, and N. Touba, "Test data compression using dictionaries with selective entries and fixed-length indices," ACM Trans. Des. Autom. Electron. Syst., vol. 8, no. 4, pp. 470–490, 2003.
[2]. Wunderlich and G. Kiefer, "Bit-flipping BIST," in Proc. Int. Conf. Comput.-Aided Des., 1996, pp. 337–343.
[3]. N. Touba and E. McCluskey, "Altering a pseudo-random bit sequence for scan based BIST," in Proc. Int. Test Conf., 1996, pp. 167–175.
[4]. F. Hsu, K. Butler, and J. Patel, "A case study on the implementation of Illinois scan architecture," in Proc. Int. Test Conf., 2001, pp. 538–547.
[5]. M. Ros and P. Sutton, "A Hamming distance based VLIW/EPIC code compression technique," in Proc. Compilers, Arch., Synth. Embed. Syst., 2004, pp. 132–139.
[6]. Seong and P. Mishra, "Bitmask-based code compression for embedded systems," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 27, no. 4, pp. 673–685, Apr. 2008.
[7]. M.-E. N. A. Jas, J. Ghosh-Dastidar, and N. Touba, "An efficient test vector compression scheme using selective Huffman coding," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 22, no. 6, pp. 797–806, Jun. 2003.
[8]. A. Jas and N. Touba, "Test vector decompression using cyclical scan chains and its application to testing core based design," in Proc. Int. Test Conf., 1998, pp. 458–464.
[9]. A. Chandra and K. Chakrabarty, "System on a chip test data compression and decompression architectures based on Golomb codes," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 20, no. 3, pp. 355–368, Mar. 2001.
[10]. X. Kavousianos, E. Kalligeros, and D. Nikolos, "Optimal selective Huffman coding for test-data compression," IEEE Trans. Computers, vol. 56, no. 8, pp. 1146–1152, Aug. 2007.
[11]. M. Nourani and M. Tehranipour, "RL-Huffman encoding for test compression and power reduction in scan applications," ACM Trans. Des. Autom. Electron. Syst., vol. 10, no. 1, pp. 91–115, 2005.
[12]. H. Hashempour, L. Schiano, and F. Lombardi, "Error-resilient test data compression using Tunstall codes," in Proc. IEEE Int. Symp. Defect Fault Tolerance VLSI Syst., 2004, pp. 316–323.
[13]. M. Knieser, F. Wolff, C. Papachristou, D. Weyer, and D. McIntyre, "A technique for high ratio LZW compression," in Proc. Des., Autom., Test Eur., 2003, p. 10116.
[14]. M. Tehranipour, M. Nourani, and K. Chakrabarty, "Nine-coded compression technique for testing embedded cores in SOCs," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 13, pp. 719–731, Jun. 2005.
[15]. L. Lingappan, S. Ravi, A. Raghunathan, N. K. Jha, and S. T. Chakradhar, "Test volume reduction in systems-on-a-chip using heterogeneous and multilevel compression techniques," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 25, no. 10, pp. 2193–2206, Oct. 2006.
[16]. A. Chandra and K. Chakrabarty, "Test data compression and test resource partitioning for system-on-a-chip using frequency-directed run-length (FDR) codes," IEEE Trans. Computers, vol. 52, no. 8, pp. 1076–1088, Aug. 2003.
[17]. X. Kavousianos, E. Kalligeros, and D. Nikolos, "Multilevel-Huffman test-data compression for IP cores with multiple scan chains," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 16, no. 7, pp. 926–931, Jul. 2008.
[18]. X. Kavousianos, E. Kalligeros, and D. Nikolos, "Multilevel Huffman coding: An efficient test-data compression method for IP cores," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 26, no. 6, pp. 1070–1083, Jun. 2007.
[19]. X. Kavousianos, E. Kalligeros, and D. Nikolos, "Test data compression based on variable-to-variable Huffman encoding with codeword reusability," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 27, no. 7, pp. 1333–1338, Jul. 2008.
[20]. S. Reda and A. Orailoglu, "Reducing test application time through test data mutation encoding," in Proc. Des. Autom. Test Eur., 2002, pp. 387–393.
[21]. E. Volkerink, A. Khoche, and S. Mitra, "Packet-based input test data compression techniques," in Proc. Int. Test Conf., 2002, pp. 154–163.
[22]. S. Reddy, K. Miyase, S. Kajihara, and I. Pomeranz, "On test data volume reduction for multiple scan chain design," in Proc. VLSI Test Symp., 2002, pp. 103–108.
[23]. F. Wolff and C. Papachristou, "Multiscan-based test compression and hardware decompression using LZ77," in Proc. Int. Test Conf., 2002, pp. 331–339.
[24]. A. Wurtenberger, C. Tautermann, and S. Hellebrand, "Data compression for multiple scan chains using dictionaries with corrections," in Proc. Int. Test Conf., 2004, pp. 926–935.
[25]. K. Basu and P. Mishra, "A novel test-data compression technique using application-aware bitmask and dictionary selection methods," in Proc. ACM Great Lakes Symp. VLSI, 2008, pp. 83–88.
[26]. T. Cormen, C. Leiserson, R. Rivest, and C. Stein, Introduction to Algorithms. Boston, MA: MIT Press, 2001.
[27]. M. Garey and D. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: Freeman, 1979.
[28]. Hamzaoglu and J. Patel, "Test set compaction algorithm for combinational circuits," in Proc. Int. Conf. Comput.-Aided Des., 1998, pp. 283–289.