Featured Pattern Run-Length Coding for Test Data
Compression
Student : Chih-Ho Shen
Advisor : Wang-Dauh Tseng
A Thesis
Submitted to the Department of Computer Science and Engineering
Yuan Ze University
in Partial Fulfillment of the Requirements
for the Degree of Master of Science
in
Computer Science and Engineering
January 2013
Chungli, Taiwan, Republic of China
Submitted to Department of Computer Science and Engineering
College of Informatics
Yuan Ze University
Abstract
Test data compression is necessary to reduce the volume of test data for
system-on-a-chip designs. In this thesis, we propose a code-based compression
technique called featured pattern run-length coding, which builds on the block
merging (BM) compression technique. We utilize slice type encoding to improve the
BM method, and we demonstrate that higher test data compression can be achieved
with it. Furthermore, we provide three methods to improve our test data compression
scheme. First, we alter the slice data encoding scheme: a slice type with higher
occurrence is assigned a shorter codeword, while a slice type with fewer occurrences
is assigned a longer codeword. Second, we add two slice types to the slice data
encoding scheme to further reduce the test data. Third, we alter the pattern
run-length encoding table to achieve a higher compression ratio. Experimental
results show that the average compression ratio is 67.69% on the ISCAS'89 benchmark
circuits, 3.52% higher than that of the BM technique.
Table of Contents
ABSTRACT…………………….....................................................................................v
List of Tables…………………………………………………………………………viii
List of Figures………………………………………………………………………….ix
Chapter 1. Introduction…………….……………………………………………….1
1.1 Background….…………………………………………………………………1
1.2 Test Data Compression Techniques……………………………………………1
1.3 Motivations…………………………………………………………………….6
1.4 Organization of this Dissertation………………………………………………7
Chapter 2. Related Works Review………...………………………………………8
2.1 Efficient Test Compression Technique Based on Block Merging…………….8
Chapter 3. Proposed Approaches………………………………………………....12
3.1 The Main Concept of Featured Pattern Run-Length Coding...……………...12
3.2 Slice Data Description………………………...……………………………13
3.3 Encoding Scheme…...……………………………………………………….16
3.4 The Attempt to Enhance Compression…………..…..………………………20
3.4.1 Alter Slice Encoding……………………………………………………...20
3.4.2 Add New Slice Types into Compression Scheme……..…………..……..23
3.4.3 Alter Run Length Encoding…………………………...………………….27
3.5 Summary……………………………………………………………………..29
Chapter 4. Decompression Architecture….………………………………………30
4.1 The Datapath Design of Decompression Architecture...……………………..30
4.2 Finite State Machine Design………………………………………………….33
Chapter 5. Experimental Results…………………………………………………35
5.1 The Result of Proposed Compression Method……………………...………..35
5.2 The Result of Altering Slice Encoding……………………………………….36
5.3 The Result of Altering Run Length…………………………………………..36
5.4 The Result of Adding Some New Type Slice into Compression Scheme……37
5.5 Result Comparisons with Previous Works …………………………………..42
Chapter 6. Conclusions…………………………………………………………….44
Chapter 7. References……………………………………………………………...45
List of Tables
Table 1 BM encoding scheme…………………………………………………………11
Table 2 Slice data encoding for block size……………………………………………14
Table 3 Encoding scheme format……………………………………………...………17
Table 4 This table illustrates the occurrence of different slice types...…………………20
Table 5 Slice types encoding are defined after modification…………………………...21
Table 6 The benefit of slice types encoding...…………………………………………22
Table 7 This table illustrates the occurrence in s38584 circuit…………………………24
Table 8 Slice types encoding are defined after modification…………………………...25
Table 9 The benefit of slice encoding method………………………………………….26
Table 10 Percentage of occurrence of merged block counts…………………………...28
Table 11 The scheme is defined after altering run length encoding……………………28
Table 12 Compression result of proposed encoding scheme (method a)……………38
Table 13 Compression result of altering slice encoding (method b)……………..39
Table 14 Compression result of adding new slice type (method c)………………...40
Table 15 Compression result of altering run length (method d)……………………41
Table 16 Compression result with previous Works……………………………………43
List of Figures
Figure 1 A conceptual architecture...…………………………………………………….2
Figure 2 An example of LFSR architecture……………………………………………...3
Figure 3 Sequential linear decompression architecture...........…………………………..4
Figure 4 Two modes of Illinois scan architecture……………………………………….5
Figure 5 An example illustrates the merging process…………………………….……...9
Figure 6 An example illustrates the codeword of BM compression technique………10
Figure 7 The distribution of merged block run length………………………………….27
Figure 8 Decompression architecture....………………………………………………..30
Figure 9 Finite state machine.…………………………………………………………..37
Chapter 1. Introduction
1.1 Background
As the integration process scales down in the nanometer era, more and more
functions are crammed into devices. Consequently, tremendous test data volumes are
needed to detect faults among the large number of transistors in each chip. In the IC
testing process, these generated test data volumes are stored in the automatic test
equipment (ATE). However, the limited ATE memory causes two disadvantages: (1) new
subsequent test data volumes may displace the previously generated test data stored
in the ATE, and (2) limited memory increases the test time for a given test data
bandwidth [1]. To overcome these two drawbacks of limited ATE memory, test data
compression techniques have been proposed.
1.2 Test Data Compression Techniques
For the IC testing process, published test data compression techniques fall into
three categories:
1. Code-based schemes partition test data slices into several types of symbols, and then
use their own specific codewords to encode these symbols.
2. Linear-decompression-based schemes utilize a Linear Feedback Shift Register (LFSR)
to generate the test data sequence.
3. Broadcast-scan-based schemes are based on the idea of broadcasting the same value to
multiple scan chains.
1. Code-Based Schemes:
Code-based schemes take advantage of efficient codewords to represent the original
test data volume (TD). The approach is to map features of the test data to symbols,
and then encode the symbols into compressed codewords (TE). As shown in Figure 1, a
large number of compressed codewords are stored in ATE memory, and an on-chip
decoder is used to recover TD from TE during the decompression process [2][3][4].
Figure 1 illustrates this flow of test data from the ATE to the SOC.
Figure 1. A conceptual architecture for testing a system-on-a-chip: the encoded test
data volumes are stored in ATE memory and decoded by an on-chip decoder
Code-based schemes exploit correlations among specified bits, and the method
applies to any set of test cubes. Owing to these advantages, many related works
exploit this feature and design efficient codewords to represent the original test
data. Runs of continuous 0/1 values frequently appear in test data; hence
K. Chakrabarty proposed the FDR method to compress runs of continuous 0s [5]. While
the FDR encoding scheme compresses 0-runs efficiently, it incurs heavy encoding
overhead for runs of continuous 1s. To overcome this drawback, A. H. El-Maleh
proposed the EFDR method, which achieves higher compression by encoding both 0-runs
and 1-runs [6].
Another form of code-based scheme is statistical coding, which partitions the
original data into n-bit symbols and assigns variable-length codewords based on each
symbol's frequency of occurrence [1]. It gathers occurrence statistics for each
symbol and assigns shorter codewords to more frequent symbols, thereby reducing the
total encoded test data. These techniques include selective Huffman coding [7],
optimal SHC [8], and variable-length input Huffman coding (VIHC) [9].
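The statistical-coding idea can be sketched with a plain Huffman construction. This is a simplified stand-in for the selective Huffman variants cited above, and the 4-bit symbol stream below is invented for illustration:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build prefix codes: frequent symbols get shorter codewords."""
    freq = Counter(symbols)
    # Heap entries: (frequency, unique tie-breaker, {symbol: code_suffix})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # left branch
        merged.update({s: "1" + c for s, c in c2.items()})  # right branch
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# A made-up stream of 4-bit symbols; "0000" dominates, so it
# ends up with the shortest codeword.
stream = ["0000"] * 8 + ["1111"] * 3 + ["0101"] * 2 + ["1000"]
codes = huffman_codes(stream)
```

The resulting code table is prefix-free, so the decoder can split the concatenated bit stream unambiguously, which is what makes variable-length statistical coding usable on a chip.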
2. Linear-Decompression-Based Schemes:
A second kind of compression technique is based on the LFSR decompression
architecture, which consists only of wires, XOR gates, and flip-flops. The basic
idea of LFSR schemes is to generate deterministic test cubes by expanding seeds, as
shown in Figure 2. A seed is an initial state of the LFSR that is expanded by
running the LFSR. Given deterministic test cubes generated by ATPG tools, a
corresponding seed can be retrieved by solving a set of linear equations based on
the feedback polynomial of the LFSR [1][10].
Figure 2. An example of LFSR architecture
A generic example of a linear decompression architecture is shown in Figure 3. Many
seeds (compressed data) are stored in the tester. The LFSR runs its cycles to
produce the test vectors and fill the set of scan chains (if there are m bits in
each scan chain, the LFSR is run for m cycles to fill them). Different LFSR seeds
produce different test vectors. The set of seeds depends on the deterministic test
cubes, and those seeds can be computed by solving the linear equations based on the
LFSR architecture [11].
Figure 3. Sequential linear decompression architecture
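Seed expansion can be sketched with a minimal Fibonacci-style LFSR. The 4-bit register width, tap positions, and seed below are illustrative choices, not the architecture used in the thesis:

```python
def lfsr_expand(seed_bits, taps, n_cycles):
    """Expand an LFSR seed into a test-data bit sequence.

    seed_bits: initial register state (list of 0/1).
    taps: state positions XORed together to form the feedback bit.
    """
    state = list(seed_bits)
    out = []
    for _ in range(n_cycles):
        out.append(state[-1])          # shift out the last flip-flop
        fb = 0
        for t in taps:
            fb ^= state[t]             # feedback = XOR of tapped bits
        state = [fb] + state[:-1]      # shift right, insert feedback
    return out

# A 4-bit LFSR with taps chosen (x^4 + x^3 + 1 style) so that the
# register cycles through all 15 nonzero states before repeating.
seq = lfsr_expand([1, 0, 0, 0], taps=[3, 2], n_cycles=15)
```

A seed of n register bits thus expands into a much longer scan sequence, which is the source of the compression: only the seeds, not the expanded vectors, are stored in the tester.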
3. Broadcast-Scan-Based Schemes:
The main idea of broadcast scan schemes is that a single test channel broadcasts
the same test data to multiple scan chains, exploiting the many compatible
sub-patterns among scan chains. This technique reduces both test time and test
data volume. As Figure 4 shows, the Illinois scan architecture shifts the same
test data into each partition of the scan chains through a single scan input.
Consequently, to reuse the same data within a given test cube and reduce test time,
a compatibility analysis of the test data among scan chains is required [12][13].
Figure 4 Two modes of Illinois scan architecture
1.3 Motivation
Many academic papers on test data compression have been published over the past
years. These compression techniques employ their own specific coding schemes to
minimize test data volume. Test cubes generated by ATPG contain a large number of
don't-care bits. A. H. El-Maleh analyzed the ISCAS'89 benchmark circuits and found
that his EFDR compression technique, which encodes runs of both 1s and 0s, reaches
a compression ratio of nearly 54% [6]. However, EFDR still needs extra bits to
record the run lengths of both 1 and 0 values.
Another form of code-based technique is the pattern run-length method. It takes
several bits as a pattern, computes the run length of consecutive compatible
patterns, and then encodes that run length to represent the repeated pattern.
Experimental results show that pattern run-length techniques achieve higher
compression ratios than run-length techniques [14][15][16]. However, pattern
run-length methods need to record a long pattern, which may be more than just a few
bits and can increase the size of the codeword.
In this thesis, we propose a method that attempts to improve on the test data
compression methods of those earlier studies. We take advantage of the efficient
coding scheme of the pattern run-length technique, and we adopt a slice encoding
method to improve its compression ratio.
1.4 Organization of this Dissertation
In this dissertation, we propose a code-based compression method involving pattern
run-length and pattern identification encoding. In chapter 2, we introduce two
compression methods based on the code-based technique. The first is the "Efficient
Test Compression Technique Based on Block Merging", which encodes runs of
consecutive compatible blocks and has been observed to achieve a higher compression
ratio than run-length coding. The second is "An Internal Pattern Run-Length
Methodology for Slice Encoding", which classifies patterns into seven specific
pattern types; it gathers occurrence statistics for these pattern types and assigns
shorter codewords to the more frequent ones. In chapter 3, our proposed featured
pattern run-length coding is presented. In chapter 4, the design of the test
decompression circuitry for our proposed technique is described. In chapter 5,
experimental results demonstrate the effectiveness of our proposed method. Finally,
we conclude the study and list future works in chapter 6.
Chapter 2. Related Works Review
Dealing with a large volume of test data is a main challenge in testing
system-on-a-chip (SOC) designs. The numerous test data are used to detect faults of
transistors in SOC designs. The large amount of test data stored in ATE memory can
exceed the memory and I/O channel capacity of the ATE and increase testing time. To
resolve this problem, many code-based test data compression techniques have been
proposed. As described in section 1.2, run-length-based compression methods include
Golomb [4], FDR [5], EFDR [6], ALT-FDR, and PRL. The advantages of these methods are
low decompression hardware overhead and applicability to any kind of test cube. In
this chapter, we introduce two efficient techniques: block merging compression (BM)
in section 2.1 and internal pattern run-length (IPR) in section 2.2. These
techniques achieve up to a 64% test data compression ratio.
2.1 Efficient Test Compression Technique Based on Block Merging
The block merging (BM) compression technique is based on dividing a generated test
cube into equal-size blocks and then merging consecutive compatible blocks into
merged blocks. The technique counts all consecutive compatible blocks as the merged
block count and then encodes that count with a prefix type. In addition, the authors
offer a method to store the merged block: it checks whether the merged block is
filled with a single value (i.e., all 0s or all 1s). If so, the block is encoded as
a filled block with two bits (11 or 10). If the block contains both values, the
codeword carries the whole pattern of the merged block in its tail.
The main concept of the BM compression technique is illustrated in Figure 5. The
test data are partitioned into equal-size blocks of size 5. The first block "X0X1X"
attempts to merge with its consecutive compatible block; the second block "101XX"
is compatible with the first, so the two blocks are merged into the block "1011X".
The new merged block then checks whether its next block "XX111" is compatible.
Eventually, the merging process finds that the first four blocks are compatible,
and their merged block is "10111". Next, the following block "0X0X0" starts the
same merging process, and so on.
Figure 5 An example illustrating the process of merging consecutive compatible
blocks in the BM compression technique for a block size of 5
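The merging step above can be sketched as follows, with 'X' standing for a don't-care bit. The fourth block in the sample data is our own invention (the text only fixes the first three and the fifth):

```python
def merge_bits(a, b):
    """Merge two compatible bits; 'X' is a don't-care. None on conflict."""
    if a == 'X':
        return b
    if b == 'X' or a == b:
        return a
    return None

def merge_blocks(a, b):
    """Merge two equal-size blocks, or return None if any bit conflicts."""
    merged = []
    for x, y in zip(a, b):
        m = merge_bits(x, y)
        if m is None:
            return None
        merged.append(m)
    return ''.join(merged)

def merge_runs(test_data, block_size):
    """Partition test data into blocks and greedily merge consecutive
    compatible blocks, yielding (merged_block, run_length) pairs."""
    blocks = [test_data[i:i + block_size]
              for i in range(0, len(test_data), block_size)]
    runs = []
    cur, count = blocks[0], 1
    for blk in blocks[1:]:
        m = merge_blocks(cur, blk)
        if m is not None:
            cur, count = m, count + 1
        else:
            runs.append((cur, count))
            cur, count = blk, 1
    runs.append((cur, count))
    return runs

# The Figure 5 example: four compatible 5-bit blocks merge into "10111".
# The block "1X1XX" is a hypothetical filler compatible with the run.
data = "X0X1X" + "101XX" + "XX111" + "1X1XX" + "0X0X0"
runs = merge_runs(data, 5)   # → [("10111", 4), ("0X0X0", 1)]
```

Each (merged block, run length) pair is then what the BM scheme encodes: the count with a group code, and the merged pattern in the tail.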
To represent the run length of merged blocks effectively with a few bits, the
different run lengths of merged blocks are grouped into six groups. Table 1 shows
the BM encoding scheme with its six groups. The range of the merged block count in
each group is defined in Table 1, and each group also has its group code. If a
group's range spans more than one merged block count, group offset bits are added
after the group code to indicate the exact number. At the end of the BM codeword,
the scheme records the pattern of the merged block. It checks whether the pattern
consists entirely of one value (all 1s or all 0s): if so, the pattern is encoded as
the codeword "11" or "10". Otherwise, it adds an extra bit '0' and encodes the full
contents of the merged block; the extra bit '0' indicates that the pattern contains
both one and zero values.
For example, the first group, B = 1, represents a block that is not compatible with
its consecutive block, so its merged block count is exactly one. The merged block
is followed by its pattern at the end (b is the size of the merged block). Another
example is shown in Figure 5: the number of merged blocks is 4, so "110" is the
codeword of the merged block group, and the group offset is "01". Then the pattern
of the merged block is checked: the merged block "10111" is neither all ones nor
all zeros, so the pattern codeword is "010111". In the end, the first four blocks
of Figure 5 are encoded by the bit stream 110 01 010111.
Figure 6 An example illustrates the codeword of BM compression technique
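The pattern (tail) part of the BM codeword can be sketched as below; the flag bit '0' for a mixed pattern follows the worked bit stream 110 01 010111, and filling leftover don't-cares with 0 is our illustrative choice:

```python
def encode_pattern(block):
    """Encode a merged block's pattern as in the BM scheme:
    "11" if it holds only 1s/Xs, "10" if only 0s/Xs, otherwise a
    '0' flag followed by the full pattern. Don't-cares in the mixed
    case are filled with 0 here (an assumption: any value works)."""
    if all(b in "1X" for b in block):
        return "11"
    if all(b in "0X" for b in block):
        return "10"
    return "0" + block.replace("X", "0")
```

For a block size of 5, the two-bit codewords "11" and "10" save 3 bits over storing a filled pattern, which is where the scheme's gain on uniform blocks comes from.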
The advantage of the BM compression technique is that it makes full use of the
merged block count to decrease the number of encoding bits. First, BM groups the
encoded merged block counts into six groups, using fewer bits to represent the more
frequent counts based on an analysis of the occurrence of merged block counts. For
instance, B=1 accounts for the highest frequency of merged block counts, so the
single prefix bit '0' stands for that group. Second, the technique stores the
pattern of a merged block in two different ways: if a merged block is filled with a
single value, only two bits are needed to represent its full contents. For a block
size of 5, the merged block codewords "11" and "10" thus gain 3 bits.
Table 1. BM encoding scheme
Chapter 3. Proposed Approaches
3.1 The Main Concept of Featured Pattern Run-Length Coding
Reducing the large test data volume is a main challenge in testing modern
system-on-a-chip designs [18][19], and test data compression techniques have been
developed to solve this problem. Many related works design efficient coding schemes
to reduce the enormous test data. In the literature, we observed that pattern
run-length techniques show great compression effectiveness in experimental results
[15][16][17]. We take advantage of the fact that a pattern run-length coding scheme
can represent a long test data stream compactly. Furthermore, pattern run-length
coding compresses test data without any structural information about the circuit,
so it is applicable to test compression for IP cores in SOCs [15].
In this chapter, we present our proposed featured pattern run-length coding scheme
for test data compression. The basic idea is to partition the test data into
equal-size blocks and then encode runs of consecutive compatible blocks. To improve
the compression ratio, we adopt a slice encoding method to reduce the bits needed
to record merged blocks: it encodes a merged block in different ways depending on
which character type the merged block belongs to. Bringing character type coding
into the block merging compression technique addresses the problem that a merged
block would otherwise need to record a long pattern. Section 3.2 describes the
character types in detail, and section 3.3 introduces the whole coding scheme.
3.2 Slice Data Description
To achieve a higher compression ratio, we adopt the slice data encoding method in
our compression method. The idea of slice coding originates from Lung-Jen Lee et
al. [17], who observed common features in merged blocks and classified test slices
into several character types. However, their slice size is limited to 8, 16, 32, or
64 bits [17]. To adopt the slice encoding method in the general case, we have to
prune off some unsuitable character types.
The slice coding method first partitions test data into equal-size slices, where
the size is an even number, and then recognizes each slice as a character type.
Five specific character types are defined in Table 2. The format of each codeword
is composed of three parts: prefix, extend, and tail. The prefix indicates the
character types All 0 and All 1. The extend part is an extended section that
indicates the character types 1/2 copy, 1/2 inverse copy, and Original. The tail
records the merged block information. Each character type is stated as follows:
A. All 0: a test slice filled with zero values or don't-care bits, as the second
row of Table 2 shows. This kind of test slice is encoded into the codeword "00".
B. All 1: a test slice filled with one values or don't-care bits, as the third row
of Table 2 shows. This kind of test slice is encoded into the codeword "01".
C. 1/2 copy: the left half of the slice is compatible with the right half. This
slice type is encoded with the prefix "10", and the tail stores the merged
half-slice contents. As the fourth row of Table 2 shows, the test slice
"0XX101XX" is encoded to "101110".
14
D. 1/2 inverse copy: the left half of the slice is compatible with the inverse of
the right half. This slice type is encoded with the prefix "11" and extend "0".
The tail is the data obtained by merging the left half with the inverse of the
right half. For the example in the fifth row of Table 2, the left half slice is
"1X10" and the right half slice is "010X"; the left half "1X10" merges with the
inverse of the right half, "101X", and the resulting merged slice "1010" is
encoded in the tail. The test slice "1X10010X" is thus encoded to "11101010".
E. Original: if a slice does not match any of the above cases, it is defined as
the Original type. The prefix is "11" and the extend is "1", and the tail records
the full contents of the slice. For example, the test slice "1X100X10" is encoded
to "111111100110".
Table 2 Slice data encoding for block size = 8
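The five-way classification above can be sketched as follows, again treating 'X' as a don't-care when testing compatibility between the two halves:

```python
def compatible(a, b):
    """Two equal-length slices are compatible if every bit pair
    agrees or at least one side is a don't-care ('X')."""
    return all(x == y or 'X' in (x, y) for x, y in zip(a, b))

def invert(s):
    """Invert 0/1 bits; don't-cares stay don't-cares."""
    return s.translate(str.maketrans("01", "10"))

def classify_slice(s):
    """Classify an even-length slice into one of the five character
    types of Table 2, checked in priority order."""
    half = len(s) // 2
    left, right = s[:half], s[half:]
    if all(b in "0X" for b in s):
        return "All 0"
    if all(b in "1X" for b in s):
        return "All 1"
    if compatible(left, right):
        return "1/2 copy"
    if compatible(left, invert(right)):
        return "1/2 inverse copy"
    return "Original"
```

Running it on the worked slices from the text reproduces their types: "0XX101XX" is a 1/2 copy, "1X10010X" a 1/2 inverse copy, and "1X100X10" falls through to Original.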
This method is motivated by observing five pattern types that occur frequently in
most test sets. The individual codewords of the different character types represent
the initial test slice. For the character types All 0 and All 1, only a 2-bit
codeword is used to represent the whole test slice; as the slice size increases,
the benefit of this fixed codeword saves even more data bits. Moreover, these
character types are frequently used, and their small codewords reduce the data
volume. On the other hand, the Original type requires extra bits to represent the
full pattern; those extra bits are the overhead of the slice encoding scheme. If we
adopt the slice encoding scheme in our compression method, we have no choice but to
add extra bits to represent the Original type.
3.3 Encoding Scheme (Method A)
The technique we propose is based on partitioning the test set into equal-size
blocks and then merging consecutive compatible blocks. Our proposed encoding
scheme has two parts, explained below. First, the merged block counts are
described in Table 3(A): the count of merged blocks is represented by the prefix
type and extend type, while the pattern type records the pattern information.
Second, we adopt the slice encoding method to represent the pattern information.
The character types, described in Table 3(B), are composed of prefix, extend, and
tail. The prefix and extend are used to distinguish the pattern's type, and the
tail stores the full merged block information.
The test data are partitioned into equal-size blocks with an even number of bits,
because the different character types are adopted in the coding scheme. To reduce
the number of bits for merged block counts, the counts of consecutive merged
blocks are grouped into six groups. Each group has its own encoding, and the range
of merged block counts in each group is defined in Table 3(A). The first group,
B=1, represents the situation where a block is not compatible with its consecutive
block; it is encoded as "0". The second group, B=2, represents a block that merges
with only one consecutive block; it is encoded as "10". If the merged block count
is over 2, a group offset determines the exact number. For example, for B=3 the
prefix "110" shows that it is the third group, and the group offset "00" shows it
is the first count in the range 3 to 6.
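The run-length part of the codeword can be sketched for the groups whose prefixes the text spells out. The full Table 3(A) defines six groups, but only the first three prefixes are given here, so the sketch stops at B = 6:

```python
def encode_run_length(b):
    """Encode a merged-block count with the first three groups of
    Table 3(A). Groups beyond B = 6 exist in the full table but their
    prefixes are not given in the text, so they are not sketched."""
    if b == 1:
        return "0"                        # most frequent count
    if b == 2:
        return "10"
    if 3 <= b <= 6:
        return "110" + format(b - 3, "02b")  # 2-bit group offset
    raise ValueError("group prefixes for B > 6 are not given here")
```

For example B = 3 yields "110" + "00", matching the worked example in the text.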
Table 3. Encoding Scheme Format
The second part, defined in Table 3(B), represents the merged block's pattern
information. The merged block patterns are classified into the five character
types defined in section 3.2. After its character type is recognized, each merged
block is encoded into the specific codeword whose format is given in Table 3(B).
The codeword is composed of a prefix type, extend type, and tail type: the prefix
determines the character type, the extend type indicates character types that the
prefix cannot directly distinguish, and the tail records the pattern information.
Figure 6 illustrates our proposed codeword with an example. A test data stream is
partitioned into 6-bit blocks. The first block attempts to merge with its
consecutive blocks until the merged block is not compatible with its next block.
As the steel-blue shaded range shows, the 6-bit block run length is 3 and the
merged block is "111X11". In the encoding process, the first part encodes the
merged block count: the run length 3 falls in the group range 3 to 6, so the
prefix type is "110" and the extend type is "00". The second part is the slice
data encoding: the merged block is identified as the All 1 type and encoded
accordingly, so the data "01" represents the merged block "111X11". The resulting
codeword "1100001" is generated, as the blue shaded range in Figure 6 shows. Next,
the fourth block is encoded just as described above: after the merging process,
the merged block "110000" is generated with a merged block count of 2. The merged
count is encoded with the prefix "10", and in the slice data encoding the merged
block is identified as the Original type and encoded into "1111110000". In the
end, the resulting codeword is "101111110000".
Figure 6 An example illustrates the codeword of our compression technique
3.4 The Attempt to Enhance Compression
In this section, we provide some ideas to reduce the test data volume. We propose
three methods to improve the compression method, each making a few modifications to
the coding scheme. Our proposed test data compression method is still based on the
previous presentation (section 3.3).
3.4.1 Alter Slice Type Encoding (Method B)
The slice type encoding is defined in section 3.2. As introduced previously, the
slice encoding is composed of prefix, extend, and tail types. The drawback of that
encoding scheme is that the Original type adds four bits to record its merged
block, so the scheme is not efficient at compressing the given test data. Another
problem is that we observe the Original type occurs more frequently than the All 1
and All 0 types. To solve this, we let higher-occurrence slice types use fewer bits
and lower-occurrence slice types use more bits.
Table 4: This table illustrates the occurrence of different slice types in s38584 circuit
The first thing we need to do is gather occurrence statistics for the different
slice types. The occurrence of each slice type is shown in Table 4: the most
frequent slice types are the 1/2 copy, 1/2 inverse copy, and Original types. We
also analyzed the occurrences under different block sizes and found that the block
size has only a small influence on the occurrence of the different slice types.
Therefore, we encode the high-frequency slice types with fewer bits and then
observe the effectiveness of this modification.
The new slice scheme is defined in Table 5. We keep the same tail format to
represent the slice information. The difference is that the extend type of the 1/2
copy, 1/2 inverse copy, and Original types is pruned off: instead of using more
bits, we use only the prefix type to identify these three slice types. The prefix
of 1/2 copy is "00", the prefix of 1/2 inverse copy is "01", and the prefix of the
Original type is "10". The All 0 and All 1 types share the same prefix "11", and
they are distinguished by an extra bit in the extend type. After this modification,
the encoding scheme represents the initial test pattern more efficiently: the
Original type now adds only two bits to identify itself. This achieves the goal
that high-frequency slice types use fewer encoding bits in the encoding process.
Table 5: The new slice types encoding are defined after modification
To show the benefit of the new slice data encoding scheme, Table 6 illustrates the
benefit for different block sizes. A positive number in the table represents the
bit gain for that block pattern. For example, storing the pattern "11XX11" as
block information (B=6) requires 6 bits to record the full pattern; with slice
type encoding, we use only 3 bits ("110") to represent the initial pattern
"11XX11", so the encoding benefit for the All 1 type is a 3-bit gain. We can also
see that the Original type still carries the same encoding overhead for every
block size: to represent an Original-type pattern, the codeword must add 2 extra
bits per block.
Table 6: The benefit of slice types encoding
3.4.2 Add New Slice Types into Compression Scheme (Method C)
To achieve a higher compression ratio, we have incorporated the slice type encoding
method into our compression method. Since adopting slice type encoding appears to
increase the compression ratio, we try to add more slice types to improve the
method further, and we examine the experimental results to confirm whether adding
new slice types to the compression scheme is worthwhile.
After analyzing the distribution of slice data types in Table 4, we discover that
the 1/2 copy and 1/2 inverse copy types have higher occurrences. For this reason,
we conjecture that many rotated patterns exist in the test data. To examine this
idea, we gather new occurrence statistics for the slice types after adding two
slice types, "Left rotate" and "Right rotate", to our slice encoding scheme. They
are defined as follows:
A. Left rotate type: the left half slice is rotated left by one bit, so its
leftmost bit moves to the rightmost position, and the rotated left half is then
compatible with the right half. In addition, the slice must not belong to any of
the other slice types, including All 0, All 1, 1/2 copy, and 1/2 inverse copy.
For example, the test pattern "1010X1" is of the left rotate type: after the left
half "101" is rotated left, the new pattern "011" is generated, which is
compatible with the right half "0X1"; also, the pattern "1010X1" does not belong
to any other slice type.
B. Right rotate type: The left half slice is shifted right by one bit, with the
rightmost bit moved to the leftmost position, and the rotated left half slice must be
compatible with the right half slice. In addition, the slice must not belong to any of
the other slice types, namely "All 0", "All 1", "1/2 copy", and "1/2 inverse copy".
For example, the test pattern "101110" is of the right rotate type: after the left
slice "101" is rotated right, the new pattern "110" is generated, which is compatible
with the right slice "110". Also, the pattern "101110" does not belong to any other
slice type.
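The two rotate checks can be sketched as follows. This is a minimal illustration assuming the same don't-care compatibility rule as before; the additional requirement that the block not belong to any of the other slice types is omitted here for brevity.

```python
# Rotate-type checks for a merged block with don't-care ('X') bits.

def compatible(a, b):
    # Two patterns agree wherever neither has a don't-care bit.
    return len(a) == len(b) and all(
        x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def is_left_rotate(block):
    """Left half rotated left by one bit is compatible with the right half."""
    n = len(block)
    left, right = block[:n // 2], block[n // 2:]
    return compatible(left[1:] + left[0], right)    # leftmost bit -> rightmost slot

def is_right_rotate(block):
    """Left half rotated right by one bit is compatible with the right half."""
    n = len(block)
    left, right = block[:n // 2], block[n // 2:]
    return compatible(left[-1] + left[:-1], right)  # rightmost bit -> leftmost slot

print(is_left_rotate('1010X1'))    # True: "101" -> "011", compatible with "0X1"
print(is_right_rotate('101110'))   # True: "101" -> "110", compatible with "110"
```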
We analyze the occurrence of the different slice types for several block sizes, as
illustrated in Table 7. In this table, the three most frequent slice types are the
1/2 copy type, the 1/2 inverse copy type, and the Original type; for this reason,
these three slice types are encoded with shorter codewords. The remaining four slice
types, All 0, All 1, Left rotate, and Right rotate, have similar and comparatively
low occurrences, so their codewords are assigned more bits.
Table 7: Occurrence of slice types in the s38584 circuit
After adding the two slice types "Left rotate type" and "Right rotate type", the new
slice encoding scheme is defined in Table 8. We keep the same format to represent the
information of the different slice types: each codeword is composed of three parts, a
prefix, an extend section, and a tail. The prefix indicates the 1/2 copy type, the
1/2 inverse copy type, and the Original type. The extend part is an extended section
that indicates the All 0 type, the All 1 type, the Left rotate type, and the Right
rotate type. The tail records part of the slice contents in order to represent the
slice information; some slice types, namely the Left rotate, Right rotate, 1/2 copy,
and 1/2 inverse copy types, can use half of the pattern to represent the whole
pattern information.
Table 8: Slice type encoding defined after modification
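An encoder for this prefix/extend/tail format can be sketched as follows. The concrete bit assignments below are inferred from the worked example in this section and the FSM branching in Chapter 4 rather than copied from Table 8, so treat them as an assumption.

```python
# Sketch of slice codeword construction: 2-bit prefix for the three frequent
# types, escape prefix '11' plus a 2-bit extend field for the four rare ones.
# Tails holding don't-care ('X') bits would be filled with 0/1 by a real encoder.

PREFIX = {'1/2 copy': '00', '1/2 inverse copy': '01', 'Original': '10'}
EXTEND = {'All 0': '00', 'All 1': '01', 'Left rotate': '10', 'Right rotate': '11'}

def encode_slice(slice_type, block):
    n = len(block)
    if slice_type in PREFIX:
        # Copy types need only the left half; Original keeps the full pattern.
        tail = block if slice_type == 'Original' else block[:n // 2]
        return PREFIX[slice_type] + tail
    # All 0 / All 1 need no tail; rotate types carry the left half.
    tail = '' if slice_type in ('All 0', 'All 1') else block[:n // 2]
    return '11' + EXTEND[slice_type] + tail

print(encode_slice('Left rotate', '111101110X'))   # '111011110', the 9-bit example below
```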
To explain the benefit of adding the two slice types, Table 9 illustrates the
encoding gain for different block sizes. The positive numbers in this table represent
the bits gained for each block pattern. For example, a merged block (B=10) storing
the pattern "111101110X" needs 10 bits to record the full pattern. If we adopt slice
type encoding, the merged block is identified as the left rotate slice type and its
slice data encoding is "111011110", which uses 9 bits to represent the merged block
information; the encoding benefit of the left rotate slice type is therefore a gain
of 1 bit. For block size 10 every slice type except the Original type shows an
encoding benefit, and Table 9 shows that the benefit grows as the block size
increases.
Table 9: The Benefit of Slice Encoding Method
3.4.3 Alter Run Length Encoding (Method D)
The codewords for the merged block run length are defined in Table 3(A). To describe
the count of merged blocks, the coding scheme is still based on A. H. El-Maleh's
previous work [15], which is introduced in Section 2.1. In this section we analyze
the distribution of merged block run lengths in the ISCAS'89 benchmark circuits,
propose a way of altering the run length encoding, slightly adjust the encoding of
the merged block run length, and observe whether the compression result improves.
Figure 7 illustrates the distribution of merged block run lengths for block size 6.
The horizontal axis shows the merged block counts, and the vertical axis shows the
frequency of occurrence of each count. Notably, runs of a single merged block (PRL=1)
account for 31% to 55% of all runs, so PRL=1 dominates the distribution. For this
reason, we keep the first group containing only PRL=1, so that PRL=1 uses a single
bit to represent the count of merged blocks.
Figure 7: The distribution of merged block run length (block size is 6)
Next, runs of two merged blocks (PRL=2) account for 8% to 23%, and runs of three
merged blocks (PRL=3) for 0.25% to 13%. If we change the second group (PRL=2) into a
new group (PRL=2~3), we need to add 1 bit as a group offset: PRL=3 gains 2 bits,
while PRL=2 incurs 1 bit of overhead. Table 10 shows that in most cases the
occurrences of PRL=3 exceed half of the occurrences of PRL=2, so the change pays off.
For this reason, we propose a new codeword table for the merged block run length in
Table 11, in which PRL=3, PRL=7, PRL=15, and PRL=31 each gain 2 bits. This scheme
encodes the count of merged blocks more efficiently.
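One plausible realization of the modified run length code is sketched below: PRL=1 keeps its own 1-bit group and every later group doubles in size (2-3, 4-7, 8-15, 16-31), so the group maxima 3, 7, 15, and 31 match the values reported above as gaining 2 bits. The exact prefix bits are an assumption, not a reproduction of Table 11.

```python
# Grouped run-length code: unary group identifier followed by a binary offset.

def encode_prl(prl):
    """Encode a pattern run length (PRL >= 1)."""
    if prl == 1:
        return '0'                          # the dominant case costs one bit
    group = prl.bit_length() - 1            # group g covers 2**g .. 2**(g+1) - 1
    prefix = '1' * group + '0'              # unary group identifier
    offset = format(prl - 2 ** group, '0{}b'.format(group))
    return prefix + offset

for prl in (1, 2, 3, 7, 31):
    print(prl, encode_prl(prl))
# 1 -> '0', 2 -> '100', 3 -> '101', 7 -> '11011', 31 -> '111101111'
```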
Table 10: Percentage of occurrence of merged block counts (B=6)
Table 11: The scheme defined after altering the run length encoding
3.5 Summary
We have developed an efficient code-based compression method called featured pattern
run length coding. The main idea is based on the previous block merging compression
technique. To improve test data compression, we classify the merged blocks into
different slice types, each with its own encoding method; adopting slice type
encoding in the compression scheme is more efficient than the previous work.
Furthermore, we add three methods to improve the proposed compression scheme. First,
slice types with higher occurrences use fewer bits, while slice types with fewer
occurrences use more bits. Second, two slice types are added to the compression
scheme to achieve a higher compression ratio. Third, we alter the run length encoding
scheme. The resulting scheme compresses better than previous works.
Chapter 4. Decompression Architecture
4.1 The Datapath Design of Decompression Architecture
We describe the test data compression procedure, the decompression architecture, and
the design of the on-chip decoder. Since our proposed compression technique is based
on the BM decompression architecture, the two decoders are similar, with some
additional components and different behavior in the finite state machine (FSM). A
main difference in the FSM is that it must determine the many slice types in the
codeword; another difference is that in the compressed code the block size is always
even. Figure 8 shows the main components of the decompression architecture, and each
component is described below.
Figure 8. Decompressor Design
A. Counter 1: This 5-bit counter counts the number of 1-valued bits in the prefix
code, which determines the group that the number of merged blocks belongs to. When
the counter receives five consecutive 1s, it transmits the MAX signal to the FSM,
and the FSM stops sending data to the counter.
B. Offset Shift Register: This 5-bit shift register stores the offset code
information. After Counter 1 has counted the prefix code, this shift register starts
shifting in the offset code. Once the number in Counter 2 equals the number in
Counter 1, it sends the RST 2 signal to the FSM.
C. Run-Length Shift Register: This 6-bit shift register stores the merged block
count.
D. Counter 2: This 4-bit counter stores the block size coming from the block size
decoder. It decrements while the tail part of the merged block codeword is decoded,
and when it reaches zero it sends the RST 3 signal to the FSM.
E. 6/12 Shift Register: This register can be configured as a 6-bit or a 12-bit shift
register, controlled by three signals from the FSM and by the output of the 3-8
decoder. It holds the tail part of the merged block codeword during the decoding
process.
F. Shift Register: This 3-bit shift register sequentially loads the block size
codeword while the FSM is in its first three states.
G. Latch: The latch supplies the fill bit for All 1 or All 0 type patterns. When the
FSM identifies the pattern as All 1 type, the latch receives a 1-value from signal
SER1; otherwise it receives a 0-value.
H. Multiplexer 1: This multiplexer fills in the tail part of the pattern. During the
decoding process the FSM determines the slice type, and the multiplexer is controlled
by the signal SER2 from the FSM. Once the slice type is determined to be All 1 or
All 0, the multiplexer selects the signal from the latch; when the slice type is
determined to be the half copy type or the half inverse type, it selects the signal
from the 6-bit shift register.
I. Multiplexer 2: This multiplexer is inserted at the output stage in order to drive
the scan chain either directly from the serial input or from the output of the
6/12-bit shift register.
J. Block Size Decoder: This decoder decodes the 3-bit block size code into its 4-bit
value and sends out the real block size. For example, decoding the codeword "001"
generates the new codeword "0110", which represents a block size of 6.
K. 3-8 Decoder: This decoder configures the 6/12-bit shift register according to the
block size (4-18 bits). It decodes the content of the shift register such that code
000 represents a block size of 4 and code 001 represents a block size of 6.
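The block size decoding rule implied by the two examples (000 → 4, 001 → 6, even sizes up to 18) can be sketched as:

```python
# Block size decoder sketch: the 3-bit code selects an even block size,
# size = 4 + 2 * code, so '000' -> 4 and '001' -> 6 (the codeword "0110").

def decode_block_size(code3):
    return 4 + 2 * int(code3, 2)

print(decode_block_size('000'))   # 4
print(decode_block_size('001'))   # 6
print(decode_block_size('111'))   # 18, the largest supported block size
```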
4.2 Finite State Machine
The FSM is composed of 18 states, as shown in Figure 9. Its behavior differs from
that of the BM FSM: although the decompression architecture is based on the BM
decompression architecture, we add states to determine the different slice types in
the codeword and modify the state transitions to match the new run length encoding
defined in Table 11.
The behavior of each state is defined as follows. S0 sets the EN signal in order to
receive the next codeword. S1 to S3 read the first three bits, which represent the
block size in the compressed data. S4 checks the next bit to determine whether the
number of merged blocks is greater than one: if the bit is 1, the FSM goes to S5;
otherwise it goes to S8. S5 reads the prefix part of the codeword and stores the bits
in Counter 1; once S5 reads a 0-value in the prefix part, it leaves its own state and
goes to S6. S6 reads the offset part of the codeword and stores the bits in Counter
2; once S6 reads a 0-value in the offset part, it leaves its own state and goes to
S7.
S7 to S15 identify the slice type of the pattern. If S7 reads a 0-value, it goes to
S9; otherwise it goes to S8. S9 then reads the next bit to determine whether the
slice is the 1/2 copy type or the 1/2 inverse type: a 0-value means 1/2 copy, and a
1-value means 1/2 inverse. When the FSM goes from S7 to S8, S8 reads the next bit to
determine the slice type: on a 0-value, S8 goes to S11 and the slice is the Original
type; on a 1-value, S8 goes to S12.
S12 to S15 read the extend part of the slice encoding, which is defined in Table 8.
If S12 reads a 0-value, it goes to S14; there, a 0-value identifies the slice type
as All 0 and a 1-value as All 1. On the other hand, if S12 reads a 1-value, it goes
to S13; there, a 0-value identifies the slice type as Left rotate and a 1-value as
Right rotate.
S16 outputs the merged pattern whose slice type was identified in S7 to S15,
repeating the output a number of times equal to the value in the Run-Length shift
register. When the FSM returns to S4, the next codeword is decoded.
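The slice-type branch of the FSM (S7 to S15) amounts to a bit-by-bit prefix decode and can be sketched as follows; the state names are dropped, but each branch mirrors a transition described above.

```python
# Prefix decode of the slice type, following the S7-S15 branching:
# first bit 0 -> copy family, '10' -> Original, '11' -> extended types.

def identify_slice_type(bits):
    """Consume the slice-type prefix and return (slice type, remaining bits)."""
    if bits[0] == '0':                                        # S7 -> S9
        return ('1/2 copy' if bits[1] == '0' else '1/2 inverse', bits[2:])
    if bits[1] == '0':                                        # S8 -> S11
        return ('Original', bits[2:])
    if bits[2] == '0':                                        # S12 -> S14
        return ('All 0' if bits[3] == '0' else 'All 1', bits[4:])
    return ('Left rotate' if bits[3] == '0' else 'Right rotate', bits[4:])  # S13

print(identify_slice_type('00101'))   # ('1/2 copy', '101')
print(identify_slice_type('1110'))    # ('Left rotate', '')
```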
Figure 9. Finite State Machine
Chapter 5. Experimental Results
5.1 The Result of Proposed Compression Method (Method A)
To demonstrate the effectiveness of our proposed compression technique, we
implemented a C++ program and ran it on a number of the largest full-scanned versions
of the ISCAS'89 circuits. The Mintest [20] test sets, generated using the dynamic
compaction option, have been used.
Our compression technique is evaluated for different block sizes. The compression
ratio is defined as follows:

Comp. ratio = (Number of original bits − Number of compressed bits) / Number of original bits × 100%
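As code, the ratio is simply the fraction of test data eliminated by compression, expressed as a percentage:

```python
def compression_ratio(original_bits, compressed_bits):
    """Percentage of the original test data removed by compression."""
    return 100.0 * (original_bits - compressed_bits) / original_bits

# Hypothetical numbers: compressing 1000 bits down to 323 bits
print(compression_ratio(1000, 323))   # 67.7, close to the reported 67.69% average
```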
The compression results of the proposed encoding scheme (Method A) are shown in
Table 12. The columns correspond to block sizes, the rows to circuits, and the test
size gives the number of bits of test data. Unlike the block merging method, the
block sizes of our compression are even numbers, which is required to recognize each
slice as a character type. The table shows that each circuit has a different
compression ratio at each block size: for example, the highest compression for
circuit s5378 is achieved with a block size of 12, while the highest compression for
the larger circuit s38417 is achieved with a block size of 6. In general, each
circuit reaches its higher compression ratios with block sizes in the range 6 to 12.
5.2 The Result of Altering Slice Type Encoding (Method B)
To achieve a higher compression ratio, we alter the slice type encoding in our
compression method (Method B in Section 3.4.1). Since the occurrences of the slice
types affect test data compression, we let slice types with higher occurrence use
fewer bits and slice types with lower occurrence use more bits. Analysis of the
occurrences shows that the most frequent slice types are the 1/2 copy type, the 1/2
inverse type, and the Original type. The modified encoding scheme is defined in
Table 5.
The results of altering the slice encoding are given in Table 13. The highest
compression for circuit s5378 is achieved with a block size of 12, and the highest
compression for the larger circuit s38417 is achieved with a block size of 6.
Compared with Method A, altering the slice type encoding improves the effectiveness
of the encoding scheme for every circuit except s13207 and s35932. By our
computation, the average compression ratio of Method B is 0.58% greater than that of
Method A, so altering the slice encoding compresses the given test data more
efficiently.
5.3 The Result of Adding Slice Encoding Scheme (Method C)
Since adopting the slice encoding method increases the test data compression ratio,
we try to add further slice types to improve the compression scheme. We conjecture
that many rotated patterns exist in the test data; to verify this idea, we add the
two slice types "Left rotate type" and "Right rotate type" to our slice encoding
scheme. The resulting occurrence statistics of the slice types are shown in Table 7,
and the compression results are shown in Table 14.
Compared with Method B, this method improves the compression ratio for every circuit.
By our computation, the average compression ratio of Method C is 0.13% greater than
that of Method B, so adding the rotate slice types to the compression scheme
compresses the given test data more efficiently.
5.4 The Result of Altering Run Length Encoding (Method D)
We analyze the distribution of merged block run lengths in the ISCAS'89 benchmark
circuits and, based on that analysis, slightly adjust the encoding of the merged
block run length (Method D in Section 3.4.3).
The results of altering the run length encoding are shown in Table 15. The average
compression ratio of Method D is 0.19% greater than that of Method C. In some cases
this method clearly increases the compression ratio for small blocks: for example,
the highest compression for circuits s35932 and s38417 is achieved with block sizes
of 4 and 6, respectively.
Table 12: Compression Result of Proposed Encoding Scheme (Method A)
Table 13: Compression Result of Altering Slice Encoding (Method B)
Table 14: Compression Result of Adding New Slice Type into Compression Scheme (Method C)
Table 15: Compression Result of Altering Run Length (Method D)
5.5 Comparison with Previous Works
In this section we compare our proposed technique with previous works, including
Golomb [4], FDR [5], EFDR [6], ALT-FDR [21], SHC [7], VIHC [22], RL-HC [23], 9C [24],
and BM [15]. The test data are the ISCAS'89 benchmark circuits, and the test cubes
are generated by the Mintest ATPG program [20].
Table 16 shows that FDR, EFDR, and ALT-FDR have lower compression ratios: run length
schemes only compress consecutive runs of 1-values or 0-values, which limits their
effectiveness, so we look for a more efficient technique. BM performs better in most
cases, but it still needs to record the merged block information, which costs many
bits. Our proposed compression technique improves the block merging method by
adopting slice encoding, and the average compression ratio of Method A is 2.24%
greater than that of the BM technique. In Method B, the slice type codewords depend
on the occurrences of the slice types, and shorter codewords are assigned to the
slice types with higher occurrence. In Method C, we add two new slice types to the
compression scheme. In Method D, we alter the encoding of the block merging run
length. The average compression ratio of Method D is 3.52% greater than that of the
BM technique.
Table 16: Compression Result with Previous Works
Chapter 6. Conclusions
Many academic papers on test data compression have been published over the past
years. The different compression techniques each employ their own coding scheme to
minimize test data volume, yet we find that previous schemes often use inefficient
codewords to compress test data.
In this work, we develop an efficient code-based scheme for compressing test data.
Our technique not only exploits the compression effectiveness of the block merging
method, but also adopts the slice type encoding method. To achieve a higher
compression ratio, we add new slice types to our compression method and also alter
the run length encoding. The decompressor is circuit independent and can be used
with any test set. The results show that the average compression ratio of Method D
is 3.52% greater than that of the BM technique; in addition, our scheme achieves the
highest average compression ratio among all related works.
Chapter 7. References
[1] N. A. Touba, "Survey of test vector compression techniques", IEEE Design & Test
of Computers, vol. 23, no. 4, Apr. 2006, pp. 294-303.
[2] B. Ye, M. Luo, "A new test data compression method for system-on-a-chip", Proc.
3rd IEEE Int. Conf. on Computer Science and Information Technology, 2010,
pp. 129-133.
[3] L. Zhang, J. S. Kuang, "Test-data compression using hybrid prefix encoding for
testing embedded cores", Proc. 3rd IEEE Int. Conf. on Computer Science and
Information Technology, July 2010.
[4] A. Chandra, K. Chakrabarty, "System-on-a-chip test data compression and
decompression architectures based on Golomb codes", IEEE Trans. Computer-Aided
Design, vol. 20, no. 3, Mar. 2001, pp. 353-368.
[5] A. Chandra, K. Chakrabarty, "Test data compression and test resource
partitioning for system-on-a-chip using frequency-directed run-length (FDR) codes",
IEEE Trans. Computers, vol. 52, no. 8, Aug. 2003, pp. 1076-1088.
[6] A. H. El-Maleh, "Test data compression for system-on-a-chip using extended
frequency-directed run-length code", IET Comput. Digit. Tech., vol. 2, no. 3, May
2008, pp. 153-163.
[7] A. Jas et al., "An efficient test vector compression scheme using selective
Huffman coding", IEEE Trans. Computer-Aided Design of Integrated Circuits and
Systems, vol. 22, no. 6, June 2003, pp. 797-806.
[8] X. Kavousianos, E. Kalligeros, D. Nikolos, "Optimal selective Huffman coding for
test-data compression", IEEE Trans. Computers, vol. 56, no. 8, Aug. 2007,
pp. 1146-1152.
[9] P. Gonciari, B. Al-Hashimi, N. Nicolici, "Improving compression ratio, area
overhead, and test application time for system-on-a-chip test data
compression/decompression", Proc. Design, Automation and Test in Europe (DATE),
Paris, France, March 2002, pp. 604-611.
[10] J. Lee, N. A. Touba, "LFSR-reseeding scheme achieving low power dissipation
during test", IEEE Trans. Computer-Aided Design of Integr. Circuits and Syst.,
vol. 26, 2007, pp. 396-401.
[11] C. V. Krishna, A. Jas, N. A. Touba, "Test vector encoding using partial LFSR
reseeding", Proc. Int'l Test Conf. (ITC 01), IEEE CS Press, 2001, pp. 885-893.
[12] K. J. Lee, J. J. Chen, C. H. Huang, "Using a single input to support multiple
scan chains", Proc. Int'l Conf. Computer-Aided Design (ICCAD 98), IEEE CS Press,
1998, pp. 74-78.
[13] A. R. Pandey, J. H. Patel, "An incremental algorithm for test generation in
Illinois scan architecture based designs", Proc. Design, Automation and Test in
Europe Conference and Exhibition (DATE), 2002, pp. 368-375.
[14] X. Ruan, R. Katti, "An efficient data-independent technique for compressing
test vectors in systems-on-a-chip", Proc. IEEE Emerging VLSI Tech. Arch. Symp.,
2006, p. 153.
[15] A. H. El-Maleh, "Efficient test compression technique based on block merging",
IET Comput. Digit. Tech., vol. 2, no. 5, 2008, pp. 327-335.
[16] L.-J. Lee et al., "A multi-dimensional pattern run-length method for test data
compression", Proc. Asian Test Symp., 2009, pp. 111-116.
[17] L.-J. Lee, W.-D. Tseng, R.-B. Lin, "An internal pattern run-length methodology
for slice encoding", ETRI Journal, vol. 33, no. 3, June 2011, pp. 374-381.
[18] A. H. El-Maleh, A. Al-Suwaiyan, "An efficient test relaxation technique for
combinational and full-scan sequential circuits", Proc. VLSI Test Symp., Monterey,
CA, April 2002, pp. 53-59.
[19] K. Miyase, S. Kajihara, "Don't care identification of test patterns for
combinational circuits", IEEE Trans. Comput.-Aided Des., vol. 23, no. 2, 2004,
pp. 321-326.
[20] I. Hamzaoglu, J. H. Patel, "Test set compaction algorithms for combinational
circuits", Proc. Int. Conf. Computer-Aided Design, San Jose, CA, November 1998,
pp. 283-289.
[21] A. Chandra, K. Chakrabarty, "A unified approach to reduce SoC test data volume,
scan power, and test time", IEEE Trans. Computer-Aided Design of Integrated Circuits
and Systems, vol. 22, no. 2, Dec. 2003, pp. 353-363.
[22] P. Gonciari, B. Al-Hashimi, N. Nicolici, "Improving compression ratio, area
overhead, and test application time for system-on-a-chip test data
compression/decompression", Proc. Design, Automation and Test in Europe (DATE),
Paris, France, March 2002, pp. 604-611.
[23] M. Nourani, M. Tehranipour, "RL-Huffman encoding for test compression and power
reduction in scan applications", ACM Trans. Design Automation of Electronic Systems,
vol. 10, no. 1, 2005, pp. 91-115.
[24] M. Tehranipoor, M. Nourani, K. Chakrabarty, "Nine-coded compression technique
for testing embedded cores in SoCs", IEEE Trans. Very Large Scale Integr. (VLSI)
Syst., vol. 13, no. 6, 2005, pp. 1070-1083.
More Related Content

Similar to Featured Pattern Run Length Coding for Test Data Compression

Ali.Kamali-MSc.Thesis-SFU
Ali.Kamali-MSc.Thesis-SFUAli.Kamali-MSc.Thesis-SFU
Ali.Kamali-MSc.Thesis-SFUAli Kamali
 
Thesis_Sebastian_Ånerud_2015-06-16
Thesis_Sebastian_Ånerud_2015-06-16Thesis_Sebastian_Ånerud_2015-06-16
Thesis_Sebastian_Ånerud_2015-06-16Sebastian
 
Lossless Data Compression Using Rice Algorithm Based On Curve Fitting Technique
Lossless Data Compression Using Rice Algorithm Based On Curve Fitting TechniqueLossless Data Compression Using Rice Algorithm Based On Curve Fitting Technique
Lossless Data Compression Using Rice Algorithm Based On Curve Fitting TechniqueIRJET Journal
 
Machine_Learning_Blocks___Bryan_Thesis
Machine_Learning_Blocks___Bryan_ThesisMachine_Learning_Blocks___Bryan_Thesis
Machine_Learning_Blocks___Bryan_ThesisBryan Collazo Santiago
 
Distributed Traffic management framework
Distributed Traffic management frameworkDistributed Traffic management framework
Distributed Traffic management frameworkSaurabh Nambiar
 
Machine Learning Project - Neural Network
Machine Learning Project - Neural Network Machine Learning Project - Neural Network
Machine Learning Project - Neural Network HamdaAnees
 
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...Man_Ebook
 
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...Man_Ebook
 
Implementation of coarse-grain coherence tracking support in ring-based multi...
Implementation of coarse-grain coherence tracking support in ring-based multi...Implementation of coarse-grain coherence tracking support in ring-based multi...
Implementation of coarse-grain coherence tracking support in ring-based multi...ed271828
 
MSc Thesis - Jaguar Land Rover
MSc Thesis - Jaguar Land RoverMSc Thesis - Jaguar Land Rover
MSc Thesis - Jaguar Land RoverAkshat Srivastava
 
Arules_TM_Rpart_Markdown
Arules_TM_Rpart_MarkdownArules_TM_Rpart_Markdown
Arules_TM_Rpart_MarkdownAdrian Cuyugan
 
disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1Pavel Prochazka
 
Measuring Aspect-Oriented Software In Practice
Measuring Aspect-Oriented Software In PracticeMeasuring Aspect-Oriented Software In Practice
Measuring Aspect-Oriented Software In PracticeHakan Özler
 
Big Data and the Web: Algorithms for Data Intensive Scalable Computing
Big Data and the Web: Algorithms for Data Intensive Scalable ComputingBig Data and the Web: Algorithms for Data Intensive Scalable Computing
Big Data and the Web: Algorithms for Data Intensive Scalable ComputingGabriela Agustini
 
CSC 347 – Computer Hardware and Maintenance
CSC 347 – Computer Hardware and MaintenanceCSC 347 – Computer Hardware and Maintenance
CSC 347 – Computer Hardware and MaintenanceSumaiya Ismail
 
An_expected_improvement_criterion_for_the_global_optimization_of_a_noisy_comp...
An_expected_improvement_criterion_for_the_global_optimization_of_a_noisy_comp...An_expected_improvement_criterion_for_the_global_optimization_of_a_noisy_comp...
An_expected_improvement_criterion_for_the_global_optimization_of_a_noisy_comp...Kanika Anand
 
Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Reza Pourramezan
 

Similar to Featured Pattern Run Length Coding for Test Data Compression (20)

Ali.Kamali-MSc.Thesis-SFU
Ali.Kamali-MSc.Thesis-SFUAli.Kamali-MSc.Thesis-SFU
Ali.Kamali-MSc.Thesis-SFU
 
Thesis_Sebastian_Ånerud_2015-06-16
Thesis_Sebastian_Ånerud_2015-06-16Thesis_Sebastian_Ånerud_2015-06-16
Thesis_Sebastian_Ånerud_2015-06-16
 
Lossless Data Compression Using Rice Algorithm Based On Curve Fitting Technique
Lossless Data Compression Using Rice Algorithm Based On Curve Fitting TechniqueLossless Data Compression Using Rice Algorithm Based On Curve Fitting Technique
Lossless Data Compression Using Rice Algorithm Based On Curve Fitting Technique
 
Thesis_Tan_Le
Thesis_Tan_LeThesis_Tan_Le
Thesis_Tan_Le
 
Machine_Learning_Blocks___Bryan_Thesis
Machine_Learning_Blocks___Bryan_ThesisMachine_Learning_Blocks___Bryan_Thesis
Machine_Learning_Blocks___Bryan_Thesis
 
Distributed Traffic management framework
Distributed Traffic management frameworkDistributed Traffic management framework
Distributed Traffic management framework
 
Machine Learning Project - Neural Network
Machine Learning Project - Neural Network Machine Learning Project - Neural Network
Machine Learning Project - Neural Network
 
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
 
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
A study on improving speaker diarization system = Nghiên cứu phương pháp cải ...
 
Implementation of coarse-grain coherence tracking support in ring-based multi...
Implementation of coarse-grain coherence tracking support in ring-based multi...Implementation of coarse-grain coherence tracking support in ring-based multi...
Implementation of coarse-grain coherence tracking support in ring-based multi...
 
main
mainmain
main
 
MSc Thesis - Jaguar Land Rover
MSc Thesis - Jaguar Land RoverMSc Thesis - Jaguar Land Rover
MSc Thesis - Jaguar Land Rover
 
Arules_TM_Rpart_Markdown
Arules_TM_Rpart_MarkdownArules_TM_Rpart_Markdown
Arules_TM_Rpart_Markdown
 
disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1
 
Measuring Aspect-Oriented Software In Practice
Measuring Aspect-Oriented Software In PracticeMeasuring Aspect-Oriented Software In Practice
Measuring Aspect-Oriented Software In Practice
 
Big Data and the Web: Algorithms for Data Intensive Scalable Computing
Big Data and the Web: Algorithms for Data Intensive Scalable ComputingBig Data and the Web: Algorithms for Data Intensive Scalable Computing
Big Data and the Web: Algorithms for Data Intensive Scalable Computing
 
Big data-and-the-web
Big data-and-the-webBig data-and-the-web
Big data-and-the-web
 
CSC 347 – Computer Hardware and Maintenance
CSC 347 – Computer Hardware and MaintenanceCSC 347 – Computer Hardware and Maintenance
CSC 347 – Computer Hardware and Maintenance
 
An_expected_improvement_criterion_for_the_global_optimization_of_a_noisy_comp...
An_expected_improvement_criterion_for_the_global_optimization_of_a_noisy_comp...An_expected_improvement_criterion_for_the_global_optimization_of_a_noisy_comp...
An_expected_improvement_criterion_for_the_global_optimization_of_a_noisy_comp...
 
Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017
 

Featured Pattern Run Length Coding for Test Data Compression

  • 1. Featured Pattern Run-Length Coding for Test Data Compression
  • 2. i Featured Pattern Run-Length Coding for Test Data Compression Student : Chih-Ho Shen Advisor : Wang-Dauh Tseng A Thesis Submitted to the Department of Computer Science and Engineering Yuan Ze University in Partial Fulfillment of the Requirements for the Degree of Master Science in Computer Science and Engineering January 2013 Chungli, Taiwan, Republic of China
  • 5. iv Featured Pattern Run-Length Coding for Test Data Compression Student : Chih-Ho Shen Advisor : Dr. Wang-Dauh Tseng Submitted to the Department of Computer Science and Engineering, College of Informatics, Yuan Ze University Abstract Test data compression is necessary to reduce the volume of test data for system-on-a-chip designs. In this thesis, we propose a code-based compression technique called featured pattern run-length coding, which builds on the block merging (BM) compression technique. We utilize slice-type encoding to improve the BM method, and demonstrate that higher test data compression can be achieved on this basis. Furthermore, we provide three methods to improve our test data compression scheme. First, we alter the slice data encoding scheme: a slice type with higher occurrence is assigned a shorter codeword, while a slice type with fewer occurrences is assigned a longer codeword. Second, we add two slice types to the slice data encoding scheme to reduce the test data. Third, we alter the pattern run-length encoding table to achieve a higher compression ratio. Experimental results show that the average compression ratio is 67.69% on the ISCAS'89 benchmark circuits, 3.52% greater than that of the BM technique.
  • 7. vi Table of Contents
  Abstract
  List of Figures
  List of Tables
  Chapter 1. Introduction
    1.1 Background
    1.2 Test Data Compression Techniques
    1.3 Motivations
    1.4 Organization of this Dissertation
  Chapter 2. Related Works Review
    2.1 Efficient Test Compression Technique Based on Block Merging
  Chapter 3. Proposed Approaches
    3.1 The Main Concept of Featured Pattern Run-Length Coding
    3.2 Slice Data Description
    3.3 Encoding Scheme
    3.4 The Attempt to Enhance Compression
      3.4.1 Alter Slice Encoding
      3.4.2 Add New Slice Types into Compression Scheme
      3.4.3 Alter Run Length Encoding
    3.5 Summary
  Chapter 4. Decompression Architecture
    4.1 The Datapath Design of Decompression Architecture
    4.2 Finite State Machine Design
  • 8. vii Chapter 5. Experimental Results
    5.1 The Result of Proposed Compression Method
    5.2 The Result of Altering Slice Encoding
    5.3 The Result of Altering Run Length
    5.4 The Result of Adding Some New Type Slice into Compression Scheme
    5.5 Result Comparisons with Previous Works
  Chapter 6. Conclusions
  Chapter 7. References
  • 9. viii List of Tables
  Table 1 BM encoding scheme
  Table 2 Slice data encoding for block size
  Table 3 Encoding scheme format
  Table 4 Occurrence of different slice types
  Table 5 Slice type encodings defined after modification
  Table 6 The benefit of slice type encoding
  Table 7 Occurrence of slice types in the s38584 circuit
  Table 8 Slice type encodings defined after modification
  Table 9 The benefit of the slice encoding method
  Table 10 Percentage of occurrence of merged block counts
  Table 11 The scheme defined after altering run-length encoding
  Table 12 Compression results of the proposed encoding scheme (method a)
  Table 13 Compression results of altering slice encoding (method b)
  Table 14 Compression results of adding new slice types (method c)
  Table 15 Compression results of altering run length (method d)
  Table 16 Compression results compared with previous works
  • 10. ix List of Figures
  Figure 1 A conceptual architecture
  Figure 2 An example of LFSR architecture
  Figure 3 Sequential linear decompression architecture
  Figure 4 Two modes of Illinois scan architecture
  Figure 5 An example illustrating the merging process
  Figure 6 An example illustrating the codeword of the BM compression technique
  Figure 7 The distribution of merged block run length
  Figure 8 Decompression architecture
  Figure 9 Finite state machine
  • 11. 1 Chapter 1. Introduction
  1.1 Background
  As integrated-circuit processes scale down in the nanometer era, more and more functions are crammed into devices. Consequently, tremendous test data volumes are needed to detect faults among the huge number of transistors per chip. In the IC testing process, the generated test data are stored in the automatic test equipment (ATE). However, the limited ATE memory causes two drawbacks: (1) newly generated test data may overwrite the previously generated test data stored in the ATE; (2) for a given test data bandwidth, the limited memory increases the test time [1]. To overcome these two drawbacks of the limited ATE memory, test data compression techniques have been proposed.
  1.2 Test Data Compression Techniques
  In the published research, test data compression techniques fall into three categories:
  1. Code-based schemes partition test data slices into several types of symbols, and then encode these symbols with specific codewords.
  2. Linear-decompression-based schemes utilize a Linear Feedback Shift Register (LFSR) to generate the test data sequence.
  3. Broadcast-scan-based schemes are based on the idea of broadcasting the same value to multiple scan chains.
  • 12. 2 1. Code-based schemes:
  Code-based schemes use efficient codewords to represent the original test data volume (TD). The approach is to derive symbols from features of the test data, and then encode the symbols into compressed codewords (TE). As shown in Figure 1, the compressed codewords are stored in the ATE memory, and an on-chip decoder decompresses TE back into TD as the data moves from the ATE to the SOC during test application [2][3][4].
  Figure 1. A conceptual architecture used to test a system-on-a-chip by storing the encoded test data volumes in ATE memory and decoding them with an on-chip decoder
  Code-based schemes exploit the correlation among specified bits and apply to any set of test cubes. Owing to these advantages, many related works exploit this feature and design efficient codewords to represent the original test data. Runs of consecutive 0s and 1s frequently appear in the test data; hence K. Chakrabarty
  • 13. 3 proposed the FDR method to compress runs of consecutive 0s [5]. While the FDR encoding scheme compresses runs of 0s efficiently, it incurs heavy encoding overhead on runs of consecutive 1s. To remedy this drawback, A. H. El-Maleh proposed the EFDR method, which achieves higher compression by encoding runs of both 0s and 1s [6]. Another form of code-based scheme is statistical coding, which partitions the original data into n-bit symbols and assigns variable-length codewords based on each symbol's frequency of occurrence [1]. It gathers occurrence statistics for each symbol and assigns shorter codewords to the more frequent symbols, thereby reducing the total encoded test data. Such techniques include selective Huffman coding [7], optimal SHC [8], and variable-length input Huffman coding (VIHC) [9].
  2. Linear-decompression-based schemes:
  A second kind of compression technique is based on an LFSR decompression architecture, which consists only of wires, XOR gates, and flip-flops. The basic idea of LFSR schemes is to generate deterministic test cubes by expanding seeds, as shown in Figure 2. A seed is an initial state of the LFSR that is expanded by running the LFSR. For deterministic test cubes generated by ATPG tools, a corresponding seed can be retrieved by solving a set of linear equations based on the feedback polynomial of the LFSR [1][10].
  • 14. 4 Figure 2. An example of LFSR architecture
  A generic example of a linear decompression architecture is shown in Figure 3. Many seeds (the compressed data) are stored in the tester. The LFSR runs for a number of cycles to produce the test vectors and fill the set of scan chains (if there are m bits in each scan chain, the LFSR runs for m cycles to fill them). Different LFSR seeds produce different test vectors. The set of seeds depends on the deterministic test cubes, and those seeds can be computed by solving the linear equations based on the LFSR architecture [11].
  Figure 3. Sequential linear decompression architecture
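The seed-expansion idea can be sketched in a few lines. This is a minimal illustration only: the 8-bit register size, the tap positions, and the seed values below are assumptions for the example, not taken from the thesis or from Figure 2.

```python
def lfsr_expand(seed, taps, cycles):
    """Expand a seed through a Fibonacci-style LFSR: each cycle shifts the
    register and feeds back the XOR of the tapped stages. The register size
    and tap positions here are illustrative only."""
    state = list(seed)                      # index 0 is the leftmost stage
    out = []
    for _ in range(cycles):
        out.append(state[-1])               # bit shifted out toward the scan chain
        feedback = 0
        for t in taps:
            feedback ^= state[t]            # XOR of the tapped stages
        state = [feedback] + state[:-1]     # shift right; feedback enters on the left
    return out

# Two different seeds expand into two different test-data sequences.
a = lfsr_expand([1, 0, 0, 0, 0, 0, 0, 0], taps=[0, 3], cycles=16)
b = lfsr_expand([0, 1, 0, 0, 0, 0, 0, 0], taps=[0, 3], cycles=16)
print(a != b)
```

Because the expansion is linear over GF(2), a seed matching a given deterministic test cube can be found by solving linear equations, as the text describes.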
  • 15. 5 3. Broadcast-scan-based schemes:
  The main idea of broadcast scan schemes is that a single test channel broadcasts the same test data to multiple scan chains, exploiting the many compatible sub-patterns among scan chains. This technique reduces both the test time and the test data volume. As Figure 4 shows, the Illinois scan architecture shifts the same test data to each scan-chain segment through a single scan chain. Consequently, to deliver the same data for a given test cube and thereby reduce the test time, a compatibility analysis of the test data among scan chains is required [12][13].
  (a) (b) Figure 4 Two modes of Illinois scan architecture
  • 16. 6 1.3 Motivation
  Many academic papers on test data compression have been published over the past years. These compression techniques each employ their own specific coding scheme to minimize the test data volume. Test cubes generated by ATPG contain a large number of don't-care bits. A. H. El-Maleh analyzed the ISCAS'89 benchmark circuits and found that his EFDR compression technique, which encodes the runs of both 1s and 0s, reaches a compression ratio of nearly 54% [6]. However, EFDR still needs extra bits to record the run lengths of both 1 and 0 values. Another form of code-based technique is the pattern run-length method. It takes several bits as a pattern, computes the run length of consecutive compatible patterns, and then encodes the compatible pattern count. Experimental results show that pattern run-length techniques achieve a higher compression ratio than run-length techniques [14][15][16]. However, pattern run-length methods need to record the pattern itself, which may be more than just a few bits and can increase the size of the codeword. In this thesis, we attempt to build on the test data compression methods of those earlier studies. We take advantage of the efficient coding scheme of the pattern run-length technique, and we adopt a slice encoding method to improve its compression ratio.
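To make the run-length idea concrete, here is a small sketch that counts 0/1 runs in a test cube after a naive don't-care fill (each X takes the value of the current run). The fill rule is a deliberate simplification for illustration; it is not the exact fill used by FDR or EFDR.

```python
def runs_after_x_fill(cube):
    """Fill don't-care 'X' bits with the value of the preceding run
    (a simple heuristic), then return the list of (bit, length) runs."""
    filled = []
    prev = '0'                       # assumed starting value for a leading X
    for c in cube:
        bit = prev if c == 'X' else c
        filled.append(bit)
        prev = bit
    runs = []
    for bit in filled:
        if runs and runs[-1][0] == bit:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([bit, 1])    # start a new run
    return [(b, n) for b, n in runs]

print(runs_after_x_fill("00XX0111X0"))  # X bits extend the surrounding runs
```

Fewer, longer runs mean fewer codewords, which is why don't-care bits are so valuable to run-length-style encoders.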
  • 17. 7 1.4 Organization of this Dissertation
  In this dissertation, we propose a code-based compression method involving pattern run-length and pattern identification encoding. In chapter 2, we introduce two compression methods based on the code-based technique. The first is the "Efficient Test Compression Technique Based on Block Merging", which encodes runs of consecutive compatible blocks and is observed to achieve a higher compression ratio than run-length coding. The second is "An Internal Pattern Run Length Methodology for Slice Encoding", which classifies patterns into seven specific pattern types, gathers occurrence statistics for these types, and assigns shorter codewords to the pattern types with higher occurrence. In chapter 3, we present our proposed approaches. In chapter 4, the design of the test decompression circuitry for our proposed method is described. In chapter 5, experimental results demonstrate the effectiveness of our proposed method. Finally, we conclude the study and list future works in chapter 6.
  • 18. 8 Chapter 2. Related Works Review
  Dealing with a large volume of test data is a main challenge in testing system-on-a-chip (SOC) designs. The numerous test data are used to detect faults of transistors in SOC designs. A large amount of test data stored in ATE memory will exceed the memory and I/O channel capacity of the ATE and increase the testing time. To resolve this problem, many test data compression techniques based on code-based schemes have been proposed. As described in section 1.2, run-length-based compression methods include GOLOMB [4], FDR [5], EFDR [6], ALT-FDR, and PRL. The advantages of these methods are low decompression hardware overhead and applicability to any kind of test cube. In this chapter, we introduce two efficient techniques: block merging compression (BM) in section 2.1 and internal pattern run length (IPR) in section 2.2. These techniques achieve up to a 64% test data compression ratio.
  2.1 Efficient Test Compression Technique Based on Block Merging
  The block merging (BM) compression technique is based on dividing a generated test cube into blocks of the same bit size and then merging consecutive compatible blocks into merged blocks. The technique counts all consecutive compatible blocks as the merged-block count and then encodes this count with a prefix type. In addition, the author offers a method to store the merged block: it identifies whether the merged block is filled with a single value (i.e., 0 or 1) or not. If the block is filled with a single value, it is encoded as a filled block with two bits (11 or 10). If the block is filled with both values (0 and 1), the codeword will contain the
  • 19. 9 whole pattern of the merged block in the tail part of the codeword.
  The main concept of the BM compression technique is illustrated in Figure 5: the test data are partitioned into blocks of the same size, here a block size of 5. The first block "X0X1X" attempts to merge with its consecutive compatible block; the second block "101XX" is compatible with the first, so the two blocks are merged into the merged block "1011X". The new merged block then checks whether its consecutive block "XX111" is compatible. Eventually, the merging process finds that the first four blocks are compatible, and their merged block is "10111". Next, the following block "0X0X0" undergoes the same merging process, and so on.
  Figure 5 An example illustrating the merging of consecutive compatible blocks in the BM compression technique for a block size of 5
  In order to represent the run length of merged blocks effectively with few bits, the different run lengths of merged blocks are grouped into six groups. Table 1 shows the BM encoding scheme with these six groups. The range of merged-block counts in each group is defined in Table 1, and each group has its own group code. If a group covers more than one merged-block count, group offset bits are appended after the group code to point out the exact number of merged blocks. At the end of the BM codeword, the scheme records the pattern of the merged block. The technique indicates whether the pattern of the merged block is filled entirely with a one or zero
  • 20. 10 value or not. If the pattern is filled with a single value, the pattern of the merged block is encoded as the codeword "11" (all ones) or "10" (all zeros). Otherwise, if the pattern contains both one and zero values, an extra bit '0' is prepended and the full contents of the merged block are encoded; the extra bit '0' indicates that the pattern is filled with both one and zero values. For example, the first group, B = 1, represents a block that is not compatible with its consecutive block, so its merged-block count is exactly one; the merged block is appended with its pattern at the end (b is the size of the merged block). Another example is shown in Figure 5. The merged-block count is 4, so "110" is the codeword of the merged-block group, and the third group's offset is "01". Then the pattern of the merged block is examined: the merged block "10111" is neither all-one nor all-zero, so the pattern codeword is "010111". In the end, the first four blocks of Figure 5 are encoded by the bit stream 110 01 010111.
  Figure 6 An example illustrating the codeword of the BM compression technique
  The advantage of the BM compression technique is that it makes full use of the merged-block count to decrease the encoding bits. First, the BM technique groups the encoded merged blocks into six groups. It uses fewer bits to represent the more frequent numbers of merged blocks, based on an analysis of the occurrence of
  • 21. 11 merged-block counts. For instance, B=1 accounts for the highest frequency of merged-block counts, so the single prefix bit "0" encodes that group. Second, the technique stores the pattern of the merged block in two different ways. If a merged block is filled with a single one or zero value, only two bits are encoded to represent the full contents; with a block size of 5, the merged-block codewords "11" and "10" thus gain 3 bits each.
  Table 1. BM encoding scheme
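The merging step of section 2.1 can be sketched as follows. The helper names `compatible`, `merge_two`, and `merge_runs` are ours, not from the BM paper; the block values reproduce the Figure 5 example with a block size of 5.

```python
def compatible(a, b):
    """Blocks are compatible when no bit position holds conflicting 0/1 values."""
    return all(x == y or x == 'X' or y == 'X' for x, y in zip(a, b))

def merge_two(a, b):
    """Merge two compatible blocks: a specified bit wins over a don't-care."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def merge_runs(blocks):
    """Greedily merge each run of consecutive compatible blocks, returning
    (merged_pattern, run_length) pairs as in the BM technique."""
    runs = []
    for blk in blocks:
        if runs and compatible(runs[-1][0], blk):
            pat, n = runs[-1]
            runs[-1] = (merge_two(pat, blk), n + 1)   # extend the current run
        else:
            runs.append((blk, 1))                     # start a new run
    return runs

# The Figure 5 example: the first four 5-bit blocks merge into "10111".
print(merge_runs(["X0X1X", "101XX", "XX111", "10XXX", "0X0X0"]))
```

The first entry of the result carries both pieces that BM encodes: the run length (for the group code and offset) and the merged pattern (for the tail).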
  • 22. 12 Chapter 3. Proposed Approaches
  3.1 The main concept of featured pattern run-length coding
  Reducing the large test data volume is the main challenge in testing modern system-on-a-chip designs [18][19]. Therefore, test data compression techniques have been developed to solve this problem. Many related works exploit the features of test data and design efficient coding schemes to reduce the enormous test data. In the literature, we observed that pattern run-length techniques show great compression effectiveness in experimental results [15][16][17]. We take advantage of the fact that a pattern run-length coding scheme can represent a long test data stream compactly. Furthermore, a pattern run-length coding scheme compresses test data without any structural information about the circuit, which makes it applicable to the test compression of IP cores in an SOC [15]. In this chapter, we present our proposed featured pattern run-length coding scheme for test data compression. The basic idea is to partition the test data into blocks of the same size and then encode the runs of consecutive compatible blocks. To improve the compression ratio, we adopt a slice encoding method to reduce the bits needed to record the merged blocks: it encodes each merged block in a different way depending on which character type the merged block belongs to. Since pattern run-length coding schemes show a higher test data compression ratio in experimental results, bringing character-type coding into the block merging compression technique solves the problem that a merged block must record a long pattern. Section 3.2 describes the character types in detail, and section 3.3 introduces the whole coding scheme.
  • 23. 13 3.2 Slice Data Description
  To achieve a higher compression ratio, we adopt a slice data encoding method in our compression method. The idea of slice coding originates from Lung-Jen Lee et al. [17]. They observed some common features in the merged blocks and classified the test slices into several character types. However, their slice data size is limited to 8, 16, 32, and 64 [17]. To adopt the slice encoding method in the general case, we have to prune off some unsuitable character types. The slice coding method first partitions the test data into same-size slices, where the size is an even number, and then recognizes each slice as a character type. Five specific character types are defined in Table 2. The format of each codeword is composed of three parts: prefix, extend, and tail. The prefix indicates the character types All 0 and All 1. The extend part is an extended section that indicates the character types 1/2 copy, 1/2 inverse copy, and Original. The tail records the merged block information. The character types are stated as follows:
  A. All 0: a test slice filled with zero values or don't-care bits, as the second row of Table 2 shows. This kind of test slice is encoded into the codeword "00".
  B. All 1: a test slice filled with one values or don't-care bits, as the third row of Table 2 shows. This kind of test slice is encoded into the codeword "01".
  C. 1/2 copy: the left half of the slice is compatible with the right half. This slice type is encoded with the prefix "10", and the tail stores the merged half-slice contents. As the fourth row of Table 2 shows, the test slice "0XX101XX" is encoded to "101110".
  • 24. 14 D. 1/2 inverse copy: the left half of the slice is compatible with the inverse of the right half. This slice type is encoded with the prefix "11" and extend "0". The tail stores the data obtained by merging the left half-slice with the inverted right half-slice. As an example in the fifth row of Table 2, the left half-slice is "1X10" and the right half-slice is "010X"; the left half-slice "1X10" is merged with the inverted right half-slice "101X", and the resulting merged slice "1010" is encoded in the tail. Thus the test slice "1X10010X" is encoded to "11101010".
  E. Original: if a slice does not match any of the above cases, it is defined as the Original type. The prefix is "11", the extend is "1", and the tail records the full contents of the slice. For example, the test slice "1X100X10" is encoded into the data "111111100110".
  Table 2 Slice data encoding for block size = 8
  This method is motivated by observing that these five pattern types occur frequently in most test sets. The individual codewords of the different character types represent the initial test slices. For the character types All 0 and All 1, only a 2-bit codeword is used to represent the whole test slice. When the size of the slice data
  • 25. 15 increases, this fixed 2-bit codeword saves even more data bits. Moreover, these character types occur frequently, and their smaller codewords help to reduce the data bits. On the other hand, the Original type requires extra bits to represent the full pattern; these extra bits are the overhead of the slice encoding scheme. If we adopt the slice encoding scheme in our compression method, we have no choice but to add extra bits to represent the Original type.
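The five character types can be recognized with a small classifier. This is a sketch of the rules in Table 2 using the worked slices from the text; the priority order of the checks is an assumption on our part.

```python
def classify(slice_):
    """Classify an even-length test slice into one of the five character
    types of Table 2, checking in an assumed priority order:
    All 0, All 1, 1/2 copy, 1/2 inverse copy, Original."""
    def compatible(a, b):
        # Two patterns are compatible if no position holds conflicting 0/1.
        return all(x == y or x == 'X' or y == 'X' for x, y in zip(a, b))

    inv = {'0': '1', '1': '0', 'X': 'X'}   # bitwise inverse, X stays X
    half = len(slice_) // 2
    left, right = slice_[:half], slice_[half:]
    if all(c in '0X' for c in slice_):
        return 'All 0'
    if all(c in '1X' for c in slice_):
        return 'All 1'
    if compatible(left, right):
        return '1/2 copy'
    if compatible(left, ''.join(inv[c] for c in right)):
        return '1/2 inverse copy'
    return 'Original'

# The worked slices from the text fall into the expected types.
print(classify("0XX101XX"), classify("1X10010X"), classify("1X100X10"))
```

Only the classification is shown; emitting the actual prefix/extend/tail bits would follow directly from Table 2.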
  • 26. 16 3.3 Encoding Scheme (Method A)
  The proposed technique is based on partitioning the test set into same-size blocks and then merging the consecutive compatible blocks. Our encoding scheme consists of two parts, explained below. First, the counts of merged blocks are described in Table 3(A). The merged-block count represents the merged blocks and is encoded by the prefix and extend types, while the pattern part records the pattern information. Second, we adopt the slice encoding method to represent the pattern information. The character types are described in Table 3(B); each is composed of prefix, extend, and tail. The prefix and extend distinguish which type the pattern is, and the tail stores the full merged-block information. The test data are partitioned into same-size blocks with even sizes, because the different character types are adopted in the coding scheme. In order to reduce the number of bits for the merged-block counts, the counts of consecutive merged blocks are grouped into six groups. Each group has its own encoding, and the range of merged-block counts in each group is defined in Table 3(A). The first group, B=1, represents the situation where a block is not compatible with its consecutive block; it is encoded by "0". The second group, B=2, represents a block that merges only one consecutive block; it is encoded by "10". If the merged-block count is over 2, a group offset determines the exact number. For example, for B=3 the prefix "110" shows that it is the third group, and the group offset "00" shows that it is the first count in the range 3 to 6.
  • 28. 18 The second part is defined in Table 3(B) and represents the merged-block pattern information. The merged-block patterns are classified into the five character types defined in section 3.2. After its character type is recognized, each merged block is encoded into the specific codeword whose format is given in Table 3(B). The character types are composed of a prefix type, an extend type, and a tail type. The prefix determines the character type; the extend type indicates those character types that the prefix cannot directly distinguish; the tail records the pattern information. An example illustrating our proposed codeword is given in Figure 6. A test data stream is partitioned into 6-bit blocks. The first partitioned block attempts to merge with its consecutive blocks until the merged block is not compatible with the next block. As the steel-blue covered range shows, the 6-bit block run length is 3 and the merged block is "111X11". In the encoding process, the first part encodes the merged-block count: the run length falls in the block group range 3 to 6, so the prefix type is "110" and the extend type is "00". After encoding the merged-block count, the second part is the slice data encoding. The merged block is identified as the All 1 type and is encoded accordingly: the data "01" represents the merged block "111X11". The resulting encoded data "1100001" is generated, as the blue covered range of Figure 6 shows. Next, the fourth block is encoded by the same process. After the merging process, the merged block "110000" is generated and the merged-block count is 2. For the merged count, the prefix type is "10", and the second part encodes the slice data: the merged block is identified as the Original type and is encoded into the data "1111110000". In the end, the encoded data is "101111110000".
  • 29. 19 Figure 6 An example illustrating the codeword of our compression technique
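The merged-block-count part of Method A can be sketched for the groups actually described in the text (B=1, B=2, and the 3-to-6 group). The remaining groups of Table 3(A) are not reproduced in this excerpt, so this sketch deliberately rejects larger counts rather than guess their codes.

```python
def encode_run_length(b):
    """Encode a merged-block count with the groups described in the text:
    B=1 -> "0", B=2 -> "10", 3 <= B <= 6 -> "110" plus a 2-bit offset.
    Groups beyond 6 (the rest of Table 3(A)) are not shown in this excerpt."""
    if b == 1:
        return '0'
    if b == 2:
        return '10'
    if 3 <= b <= 6:
        return '110' + format(b - 3, '02b')   # offset within the 3..6 range
    raise ValueError('group encoding for B > 6 is not given in this excerpt')

# The worked example: a run length of 3 yields prefix "110" + offset "00".
print(encode_run_length(3))
```

The full codeword of the scheme is this count code followed by the slice-type code for the merged pattern, exactly as in the Figure 6 walkthrough.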
  • 30. 20 3.4 The Attempt to Enhance Compression
  In this section, we provide some ideas to reduce the test data volume. We propose three methods to improve the compression method, each making a few modifications to the coding scheme. Our proposed test data compression method remains based on the previous presentation (section 3.3).
  3.4.1 Alter slice type encoding (Method B)
  The slice type encoding is defined in section 3.2. As introduced previously, a slice encoding is composed of a prefix type, an extend type, and a tail type. The drawback of this scheme is that the Original type must add four bits to record its merged block, so the scheme is not efficient for compressing the given test data. Another problem is that we observe the Original type to be more frequent than the All 1 and All 0 types. To solve this, we let slice types with higher occurrence use fewer bits and slice types with lower occurrence use more bits.
  Table 4: Occurrence of different slice types in the s38584 circuit
  The first thing we need to do is to gather occurrence statistics of the different slice
  • 31. 21 types. The occurrence of each slice type is shown in Table 4. The most frequent slice types are 1/2 copy, 1/2 inverse copy, and Original. We also analyzed the occurrences for different block sizes, and found that the block size has only a small influence on the occurrence of the different slice types. Therefore, we encode the high-frequency slice types with fewer bits, and then observe the effectiveness of this modification. The new slice scheme is defined in Table 5. We keep the same tail format to represent the slice information. The difference is that the extend types of 1/2 copy, 1/2 inverse copy, and Original are pruned off; instead of using more bits, we use only the prefix type to represent these three slice types. The prefix of 1/2 copy is "00", that of 1/2 inverse copy is "01", and that of the Original type is "10". The All 0 and All 1 types share the identifier "11" in the prefix type and are distinguished by an extra bit in the extend type. After this modification, the encoding scheme represents the initial test patterns more efficiently: the Original type now adds only two bits to identify itself, achieving the goal that high-frequency slice types use fewer encoding bits.
  Table 5: The new slice type encodings defined after modification
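The reassignment rule of Method B (more frequent types get shorter identifiers) can be sketched as a frequency-ranked mapping onto the Table 5 codeword ladder. The occurrence counts below are made-up illustrative numbers shaped like the Table 4 ranking, not the real s38584 statistics.

```python
def assign_codewords(counts):
    """Map slice types onto the Table-5-style codeword ladder
    ("00", "01", "10", "110", "111") in decreasing order of occurrence,
    so the most frequent types get the 2-bit identifiers."""
    ladder = ['00', '01', '10', '110', '111']
    ranked = sorted(counts, key=counts.get, reverse=True)
    return dict(zip(ranked, ladder))

# Hypothetical counts with the same ranking as the Table 4 analysis.
counts = {'1/2 copy': 5200, '1/2 inverse copy': 4100, 'Original': 3900,
          'All 0': 700, 'All 1': 650}
codes = assign_codewords(counts)
print(codes['1/2 copy'], codes['Original'], codes['All 1'])
```

Because the ladder is prefix-free, a decoder can still split the identifier from the tail bits unambiguously after the reassignment.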
  • 32. 22 To show the encoding benefit of the slice data encoding scheme, Table 6 illustrates the benefit for different block sizes. The positive numbers in this table represent the bit gain for each block pattern. For example, storing the pattern "11XX11" as plain block information (B=6) requires 6 bits to record the full pattern; with slice type encoding, only 3 bits ("110") represent the initial pattern, an encoding benefit of 3 bits. We can also see that the same encoding overhead remains for the Original type at every block size: to represent an Original-type pattern, the codeword must add 2 extra bits per block.
  Table 6: The benefit of slice type encoding
  • 33. 23 3.4.2 Add New Slice Types into the Compression Scheme (Method C)
  To achieve a higher compression ratio, we have utilized the slice type encoding method in our compression method, and adopting it appears to increase the test data compression ratio. We therefore try to add some slice types to improve the compression method further, and we observe the experimental results to confirm whether adding new slice types to the compression scheme is worthwhile. After analyzing the distribution of slice data types in Table 4, we find that the 1/2 copy and 1/2 inverse copy types have higher occurrences. For this reason, we conjecture that many rotated patterns exist in the test data. To examine this idea, we gather new occurrence statistics of the slice types. We add two slice types, "Left rotate" and "Right rotate", to our slice encoding scheme, defined as follows:
  A. Left rotate type: the left half-slice is shifted left, with its leftmost bit moved to the rightmost position; the rotated left half-slice is then compatible with the right half-slice. A further condition is that the slice does not belong to any of the other slice types, including "All 0", "All 1", "1/2 copy", and "1/2 inverse copy". For example, the test pattern "1010X1" is of left rotate type: after the left slice "101" is rotated left, the new pattern "011" is generated, which is compatible with the right slice "0X1"; also, the pattern "1010X1" does not belong to any other slice type.
  B. Right rotate type: the left half-slice is shifted right, with its rightmost bit moved to the leftmost position; the rotated left half-slice is then compatible with the right half-slice. The other conditions
  • 34. 24 are that this slice type is not belong to other slice types which are including “All 0”, “All 1”, “1/2 copy” and “1/2 inverse copy”. For example, a test pattern “101110” is right rotate type. After the left slice “101” is right rotated, the new pattern “110” is generated. The new pattern “110” is compatible with its right slice “110”. Also, the pattern “101110” is not belonged to another slice type. We use different block size to analyze the occurrence of different slice types, which are illustrated in Table 7. In this table, we could know the first third occurrences of slice types are 1/2 copy type, 1/2 inverse copy, and Original type. In this reason, we allow these three slice types to be encoded fewer codeword. Also, we know there are similar occurrences of the other four slice types, which are All 0 type, All 1 type, Left rotate type, and Right rotate type. Due to fewer occurrences of these slice types, their codeword are assigned more bits. Table 7: This table illustrates the occurrence in s38584 circuit
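The slice type definitions above can be sketched as a small classifier over blocks with don't-care bits ('X'). The function names and the priority order of the checks are our own illustration, not the thesis's implementation:

```python
X = 'X'  # don't-care bit

def bits_compatible(a, b):
    """Two bits are compatible if either is a don't-care or they match."""
    return a == X or b == X or a == b

def slices_compatible(s, t):
    return all(bits_compatible(a, b) for a, b in zip(s, t))

def invert(s):
    """Bitwise inverse; don't-cares stay don't-cares."""
    return ''.join(X if c == X else ('1' if c == '0' else '0') for c in s)

def classify(block):
    """Classify a merged block (even length) into a slice type.
    Rotate types are checked last, matching the thesis's rule that a
    rotate slice must not belong to any of the other slice types."""
    n = len(block)
    left, right = block[:n // 2], block[n // 2:]
    if slices_compatible(block, '0' * n):
        return 'All 0'
    if slices_compatible(block, '1' * n):
        return 'All 1'
    if slices_compatible(left, right):
        return '1/2 copy'
    if slices_compatible(left, invert(right)):
        return '1/2 inverse copy'
    if slices_compatible(left[1:] + left[0], right):    # rotate left by one
        return 'Left rotate'
    if slices_compatible(left[-1] + left[:-1], right):  # rotate right by one
        return 'Right rotate'
    return 'Original'
```

Running this on the two examples from the text reproduces the stated types: “1010X1” classifies as Left rotate and “101110” as Right rotate.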
  • 35. 25 After adding the two slice types “Left rotate type” and “Right rotate type”, the new slice encoding scheme is defined in Table 8. We keep the same format to represent the information of the different slice types. Each codeword is composed of three parts: prefix, extend, and tail. The prefix indicates the types 1/2 copy, 1/2 inverse copy, and Original. The extend part is an extended section indicating the types All 0, All 1, Left rotate, and Right rotate. The tail records part of the slice contents in order to represent the slice information. Some slice types can use the half pattern to represent the whole pattern, namely the Left rotate, Right rotate, 1/2 copy, and 1/2 inverse copy types. Table 8: Slice type encoding defined after modification To explain the benefit of adding the two slice types, Table 9 illustrates the encoding benefit for different block sizes. The positive numbers in this table represent the bit gain for each block pattern. For example, suppose a merged block (B=10) stores the pattern “111101110X”; then
  • 36. 26 we need 10 bits to record the full pattern. If we adopt slice type encoding, the merged block is identified as the Left rotate slice type and the slice data encoding is “111011110”, which uses 9 bits to represent the merged block information. The encoding benefit for the Left rotate slice type is thus a gain of 1 bit. For block size 10, every slice type yields an encoding benefit except the Original slice type. As Table 9 shows, the encoding benefit grows as the block size increases. Table 9: The Benefit of the Slice Encoding Method
  • 37. 27 3.4.3 Alter Run Length Encoding (Method D) The codewords for the merged block run length are defined in Table 3(A). To describe the count of merged blocks, the coding scheme is still based on A. H. El-Maleh's previous work [15], introduced in Section 2.1. In this section, we analyze the distribution of merged block run lengths in the ISCAS'89 benchmark circuits and then propose an altered run length encoding. We slightly adjust the encoding of the merged block run length and observe whether the compression result improves. Figure 7 illustrates the distribution of merged block run lengths for block size 6. The horizontal axis shows the counts of merged blocks, and the vertical axis shows the frequency of occurrence of each merged block count. Notably, a run of only one merged block (PRL=1) accounts for 31% to 55% of occurrences, so PRL=1 dominates the distribution of merged block counts. For this reason, we keep the first group (B=1) containing only PRL=1, so that PRL=1 uses only one bit to represent the count of merged blocks. Figure 7: The distribution of merged block run length (block size is 6)
  • 38. 28 Next, two merged blocks (PRL=2) account for 8% to 23% of occurrences, and three merged blocks (PRL=3) for 0.25% to 13%. If we change the second group (B=2) into a new group (B=2~3), we need to add 1 bit as a group offset. The benefit is that B=3 gains 2 bits, at the cost of 1 extra bit for B=2. Table 10 shows that in most cases the occurrence of PRL=3 is higher than half the occurrence of PRL=2. For this reason, we propose a new codeword table for the merged block run length in Table 11. With this scheme, B=3, B=7, B=15, and B=31 each gain 2 bits, and the count of merged blocks is encoded more efficiently. Table 10: Percentage of occurrence of merged block counts (B=6) Table 11: The scheme defined after altering run length encoding
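One plausible reading of the modified grouping (group 1 holds PRL=1 alone; group k >= 2 holds run lengths 2^(k-1) through 2^k - 1, with a prefix of k-1 ones and a (k-1)-bit offset) can be sketched as follows. This is a hedged reconstruction, since the exact codewords live in Table 11:

```python
def encode_prl(prl):
    """Encode a pattern run-length (PRL >= 1) under the modified grouping.
    Group 1 holds PRL=1 alone ('0'); group k >= 2 holds 2**(k-1)..2**k - 1,
    with prefix '1'*(k-1) + '0' and a (k-1)-bit offset. This grouping is
    our reconstruction of Table 11, not a verbatim copy."""
    if prl == 1:
        return '0'
    k = prl.bit_length()            # group index: 2 for 2-3, 3 for 4-7, ...
    prefix = '1' * (k - 1) + '0'
    offset = format(prl - 2 ** (k - 1), '0{}b'.format(k - 1))
    return prefix + offset
```

Under this reading, PRL=1 costs one bit, and PRL=2 and PRL=3 share the same 3-bit codeword length, which matches the motivation given above.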
  • 39. 29 3.5 Summary We have developed an efficient code-based compression method called featured pattern run-length coding. The main idea is based on the previous block merging (BM) compression technique. To improve test data compression, we classify the merged blocks into different slice types, each with its own encoding method. Adopting slice type encoding in the compression scheme is more efficient than the previous work. Furthermore, we add three methods to improve our proposed compression scheme. First, slice types with higher occurrences use fewer bits, and slice types with fewer occurrences use more bits. Second, we add two slice types to the compression scheme to achieve a higher compression ratio. Third, we alter the run length encoding scheme. The result of altering the slice encoding is better than previous works.
  • 40. 30 Chapter 4. Decompression Architecture 4.1 The Datapath Design of the Decompression Architecture We describe the test data decompression procedure, the decompression architecture, and the design of the on-chip decoder. Since our proposed compression technique is based on the BM decompression architecture, our decoder is similar to the BM decoder, with some additional components and different behavior in the finite state machine (FSM). One main difference is that the FSM must determine the various slice types in the codeword. Another difference is that in the compression code, block sizes are restricted to even numbers. Figure 8 shows the main components of the decompression architecture, and each component is described below. Figure 8. Decompressor Design
  • 41. 31 A. Counter 1: This is a 5-bit counter used to count the number of 1-values in the prefix code, which identifies the group that the number of merged blocks belongs to. When this counter receives five consecutive 1s, it transmits the MAX signal to the FSM, which then stops sending data to the counter. B. Offset Shift Register: This is a 5-bit shift register used to store the offset code. After Counter 1 finishes counting the prefix code, this register starts shifting in the offset code. Once the number in Counter 2 equals the number in Counter 1, it sends the RST 2 signal to the FSM. C. Run-Length Shift Register: This is a 6-bit shift register used to store the merged block count. D. Counter 2: This is a 4-bit counter that stores the block size from the block size decoder. It decrements while the tail of the merged block codeword is being decoded; when it reaches zero, it sends the RST 3 signal to the FSM. E. 6/12 shift register: This register is configured as either a 6-bit or a 12-bit shift register, controlled by three signals from the FSM and by the output of the 3-8 decoder. It is used to fill the tail of the merged block codeword during decoding. F. Shift Register: This 3-bit shift register sequentially loads the block size codeword while the FSM is in its first three states. G. Latch: The latch fills in the pattern bit for the All 1 or All 0 type. When the FSM identifies the pattern as All 1 type, the latch receives a 1-value from signal SER1; otherwise, it receives a 0-value. H. Multiplexer 1: This multiplexer fills in the tail of the pattern. During the decoding process, the FSM determines the slice type of the pattern. The
  • 42. 32 multiplexer is controlled by the SER2 signal from the FSM. Once the slice type is determined to be All 1 or All 0, the multiplexer selects the signal from the latch; when the slice type is determined to be 1/2 copy or 1/2 inverse copy, it selects the signal from the 6-bit shift register. I. Multiplexer 2: This multiplexer is inserted at the output stage to drive the scan chain either directly from the serial input or from the output of the 6/12-bit shift register. J. Block size decoder: This decoder expands the 3-bit block size code to 4 bits, producing the actual block size. For example, it decodes the codeword “001” into the new codeword “0110”, which represents a block size of 6. K. 3-8 decoder: This decoder configures the 6/12-bit shift register according to the block size (4-18 bits). It decodes the content of the shift register such that code 000 represents a block size of 4 and code 001 represents a block size of 6.
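The block size decoder's mapping (“000” -> 4, “001” -> 6, and so on) can be modeled in software. The thesis states only the first two codes; the sketch assumes the remaining codes continue the even progression up to 18:

```python
def decode_block_size(code3):
    """Software model of the 3-bit block size decoder: the code indexes
    the even block sizes 4, 6, ..., 18. Codes beyond '001' are assumed
    to continue this progression (only '000'->4 and '001'->6 are stated)."""
    return 4 + 2 * int(code3, 2)
```

Note that the decoded value of “001” is 6, i.e. the 4-bit pattern “0110” mentioned above.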
  • 43. 33 4.2 Finite State Machine The FSM is composed of 18 states, as shown in Figure 9. Its behavior differs from that of the BM FSM: although the decompression architecture is based on the BM decompression architecture, we design additional states to determine the different slice types in the codeword and modify the state transitions to follow the new run length encoding defined in Table 11. The behavior of each state is as follows. S0 sets the EN signal in order to receive the next codeword. S1 to S3 read the first three bits, which represent the block size in the compressed data. S4 checks the next bit to determine whether the number of merged blocks is greater than one; if the bit is 1, the FSM goes to S5, otherwise to S8. S5 reads the prefix part of the codeword and stores the bits in Counter 1; once S5 sees a 0-value in the prefix, it leaves its state and goes to S6. S6 reads the offset part of the codeword and stores the bits in Counter 2; once it sees a 0-value in the offset, it goes to S7. S7 to S15 identify the slice type of the pattern. If S7 reads a 0-value as the next bit, it goes to S9; otherwise, it goes to S8. S9 then reads the next bit, which determines whether the slice type is 1/2 copy or 1/2 inverse copy: a 0-value means 1/2 copy, otherwise 1/2 inverse copy. When S7 goes to S8, S8 reads the next bit to determine the slice type. If the next bit is 0, S8 goes to S11 and the slice is Original type. If the next bit is 1, S8 goes to S12. S12 to S15 read the extend part of the slice encoding, defined in Table 8. If S12 reads a 0-value as the next bit, it goes to S14; there, a 0-value identifies the slice type as All 0 type, otherwise it
  • 44. 34 is identified as All 1 type. On the other hand, if S12 reads a 1-value, it goes to S13; there, a 0-value identifies the slice type as Left Rotate type, otherwise Right Rotate type. S16 outputs the merged pattern whose slice type was identified in S7 to S15, repeating it a number of times equal to the value in the Run-Length shift register. When the FSM returns to S4, the next codeword is decoded. Figure 9. Finite State Machine
  • 45. 35 Chapter 5. Experimental Results 5.1 The Result of the Proposed Compression Method (Method A) To demonstrate the effectiveness of our proposed compression technique, we implemented a C++ program and ran it on a number of the largest full-scanned versions of the ISCAS'89 circuits. The Mintest [20] test sets generated using the dynamic compaction option were used. Our compression technique computes the effectiveness for different block sizes. The compression ratio is defined as: Comp. ratio = (Number of original bits − Number of compressed bits) / (Number of original bits) × 100%. The compression results of the proposed encoding scheme (Method A) are shown in Table 12. The columns give the block size, the rows give the circuits, and the test size is the number of bits of test data. Compared with the block merging method, the block sizes of our compression are even numbers, which allows each slice to be recognized as a character type. In this table, each circuit has a different compression ratio for each block size. For example, the highest compression for circuit s5378 is achieved with a block size of 12, while the highest compression for the larger circuit s38417 is achieved with a block size of 6. In general, each circuit achieves its higher compression ratios with block sizes in the range 6 to 12.
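The compression ratio definition translates directly into code; this trivial helper is shown only to make the formula concrete:

```python
def compression_ratio(original_bits, compressed_bits):
    """Compression ratio in percent, as defined above:
    (original - compressed) / original * 100."""
    return (original_bits - compressed_bits) / original_bits * 100.0
```

For example, a test set of 100 bits compressed to 40 bits gives a ratio of 60%.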
  • 46. 36 5.2 The Result of Altering Slice Type Encoding (Method B) To achieve a higher compression ratio, we alter the slice type encoding in our compression method (Method B in Section 3.4.1). Since the occurrences of the slice types affect test data compression, we let higher-occurrence slice types use fewer bits and lower-occurrence slice types use more bits. After analyzing the occurrences of slice types, the most frequent slice types are the 1/2 copy type, the 1/2 inverse copy type, and the Original type. The modified encoding scheme is defined in Table 5. The results of altering the slice encoding are shown in Table 13. The highest compression for circuit s5378 is achieved with a block size of 12, and the highest compression for the larger circuit s38417 is achieved with a block size of 6. Compared with Method A, altering the slice type encoding improves the effectiveness of the encoding scheme for each circuit, except for the s13207 and s35932 circuits. According to our computation, the average compression ratio of Method B is 0.58% greater than that of Method A, so altering the slice encoding is an efficient way to compress the given test data. 5.3 The Result of Adding Slice Encoding Scheme (Method C) We add new slice types to improve the compression scheme, since adopting the slice encoding method increases the test data compression ratio. We suspect that many rotated patterns exist in the test data. To verify this idea, we add two slice types, “Left rotate type” and “Right rotate type”, to our slice encoding scheme. We then compute the occurrence statistics of the slice types, shown in Table 7; the compression results are shown in Table 14. Compared with Method B, this method improves the compression ratio for each circuit. According to our computation, the average compression ratio of Method C is 0.13%
  • 47. 37 greater than that of Method B, so adding the rotate slice types to the compression scheme is an efficient way to compress the given test data. 5.4 The Result of Altering Run Length Encoding (Method D) We propose an altered run length encoding after analyzing the distribution of merged block run lengths in the ISCAS'89 benchmark circuits, slightly adjusting the encoding of the merged block run length. To demonstrate the effectiveness of modifying the codewords of the merged block run length, we tabulate the compression results of Method D in Table 15. The average compression ratio of Method D is 0.19% greater than that of Method C. In some cases, this method noticeably increases the compression ratio for small blocks. For example, the highest compression for circuits s35932 and s38417 is achieved with block sizes of 4 and 6, respectively.
  • 48. 38 Table 12: Compression Result of Proposed Encoding Scheme (Method A)
  • 49. 39 Table 13: Compression Result of Altering Slice Encoding (Method B)
  • 50. 40 Table 14: Compression Result of Adding New Slice Type into Compression Scheme (Method C)
  • 51. 41 Table 15: Compression Result of Altering Run Length (Method D)
  • 52. 42 5.5 Comparison with Previous Works In this section, we compare our proposed technique with previous works, including Golomb [4], FDR [5], EFDR [6], ALT-FDR [21], SHC [7], VIHC [22], RL-HC [23], 9C [24], and BM [15]. The test data are the ISCAS'89 benchmark circuits, and the test cubes were generated by the Mintest ATPG program [20]. In this table, FDR, EFDR, and ALT-FDR have lower compression ratios. Run-length schemes encode only runs of consecutive 1-values or consecutive 0-values, which cannot achieve great compression effectiveness. For this reason, we sought a more efficient technique. We find that BM performs better in most cases, but it still needs to record the merged block information, which requires many bits. Our proposed compression technique improves the block merging method by adopting slice encoding. The average compression ratio of Method A is 2.24% greater than that of the BM technique. In Method B, the slice type codewords depend on the occurrences of the slice types: we assign shorter codewords to the slice types with higher occurrences. In Method C, we add two new slice types to the compression scheme. In Method D, we alter the encoding of the block merging run length. The average compression ratio of Method D is 3.52% greater than that of the BM technique.
  • 53. 43 Table 16: Comparison of Compression Results with Previous Works
  • 54. 44 Chapter 6. Conclusions Many academic papers on test data compression have been published over the past years. The different compression techniques employ their own coding schemes to minimize test data volume. We find that previous schemes use inefficient codewords to compress test data. In this work, we develop an efficient code-based scheme for compressing test data. Our technique not only utilizes the compression effectiveness of the block merging method, but also adopts a slice type encoding method. To achieve a higher compression ratio, we add new slice types to our compression method. In addition, we also propose an altered run length encoding. The decompressor is circuit independent and can be used with any test set. Results show that the average compression ratio of Method D is 3.52% greater than that of the BM technique. In addition, our study has the highest average compression ratio among all related works.
  • 55. 45 Chapter 7. References [1] N. A. Touba, “Survey of test vector compression techniques”, IEEE Design & Test of Computers, vol. 23, no. 4, Apr. 2006, pp. 294-303. [2] B. Ye, M. Luo, “A new test data compression method for system-on-a-chip”, Proc. 3rd IEEE Int. Conf. on Computer Science and Information Technology, 2010, pp. 129-133. [3] L. Zhang, J. S. Kuang, “Test-data compression using hybrid prefix encoding for testing embedded cores”, Proc. 3rd IEEE Int. Conf. on Computer Science and Information Technology, July 2010. [4] A. Chandra, K. Chakrabarty, “System-on-a-Chip Test Data Compression and Decompression Architectures Based on Golomb Codes”, IEEE Trans. Computer-Aided Design, vol. 20, no. 3, Mar. 2001, pp. 353-368. [5] A. Chandra, K. Chakrabarty, “Test Data Compression and Test Resource Partitioning for System-on-a-Chip Using Frequency-Directed Run-Length (FDR) Codes”, IEEE Trans. Computers, vol. 52, no. 8, Aug. 2003, pp. 1076-1088. [6] A. H. El-Maleh, “Test data compression for system-on-a-chip using extended frequency-directed run-length code”, IET Computers & Digital Techniques, vol. 2, no. 3, May 2008, pp. 153-163. [7] A. Jas et al., “An Efficient Test Vector Compression Scheme Using Selective Huffman Coding”, IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, vol. 22, no. 6, June 2003, pp. 797-806. [8] X. Kavousianos, E. Kalligeros, and D. Nikolos, “Optimal Selective Huffman Coding for Test-Data Compression”, IEEE Trans. Computers, vol. 56, no. 8, Aug. 2007, pp. 1146-1152. [9] P. Gonciari, B. Al-Hashimi, N. Nicolici, “Improving compression ratio, area overhead, and test application time for system-on-a-chip test data compression/decompression”, Proc. Design, Automation and Test in Europe (DATE), Paris, France, March 2002, pp. 604-611. [10] J. Lee and N. A. Touba, “LFSR-reseeding scheme achieving low power dissipation during test”, IEEE Trans. Computer-Aided Design of Integr. Circuits and Syst.,
  • 56. 46 vol. 26, 2007, pp. 396-401. [11] C. V. Krishna, A. Jas, and N. A. Touba, “Test Vector Encoding Using Partial LFSR Reseeding”, Proc. Int'l Test Conf. (ITC 01), IEEE CS Press, 2001, pp. 885-893. [12] K. J. Lee, J. J. Chen, and C. H. Huang, “Using a Single Input to Support Multiple Scan Chains”, Proc. Int'l Conf. Computer-Aided Design (ICCAD 98), IEEE CS Press, 1998, pp. 74-78. [13] A. R. Pandey, J. H. Patel, “An Incremental Algorithm for Test Generation in Illinois Scan Architecture Based Designs”, Proc. Design, Automation, and Test in Europe Conference and Exhibition (DATE), 2002, pp. 368-375. [14] X. Ruan and R. Katti, “An Efficient Data-Independent Technique for Compressing Test Vectors in Systems-on-a-Chip”, Proc. IEEE Emerging VLSI Technologies and Architectures Symp., 2006, p. 153. [15] A. H. El-Maleh, “Efficient Test Compression Technique Based on Block Merging”, IET Comput. Digit. Tech., vol. 2, no. 5, 2008, pp. 327-335. [16] L.-J. Lee et al., “A Multi-Dimensional Pattern Run-Length Method for Test Data Compression”, Proc. Asian Test Symp., 2009, pp. 111-116. [17] Lung-Jen Lee, Wang-Dauh Tseng, and Rung-Bin Lin, “An Internal Pattern Run-Length Methodology for Slice Encoding”, ETRI Journal, vol. 33, no. 3, June 2011, pp. 374-381. [18] A. H. El-Maleh, A. Al-Suwaiyan, “An efficient test relaxation technique for combinational and full-scan sequential circuits”, Proc. VLSI Test Symp., Monterey, CA, April 2002, pp. 53-59. [19] K. Miyase, S. Kajihara, “Don't care identification of test patterns for combinational circuits”, IEEE Trans. Comput.-Aided Des., vol. 23, no. 2, 2004, pp. 321-326. [20] I. Hamzaoglu, J. H. Patel, “Test set compaction algorithms for combinational circuits”, Proc. Int. Conf. Computer-Aided Design, San Jose, CA, November 1998, pp. 283-289. [21] A. Chandra, K. Chakrabarty, “A unified approach to reduce SoC test data volume, scan power, and test time”, IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, vol. 22, no. 2, Dec. 2003, pp. 353-363.
  • 57. 47 [22] P. Gonciari, B. Al-Hashimi, N. Nicolici, “Improving compression ratio, area overhead, and test application time for system-on-a-chip test data compression/decompression”, Proc. Design, Automation and Test in Europe (DATE), Paris, France, March 2002, pp. 604-611. [23] M. Nourani, M. Tehranipour, “RL-Huffman encoding for test compression and power reduction in scan applications”, ACM Trans. Design Automation of Electronic Systems, vol. 10, no. 1, 2005, pp. 91-115. [24] M. Tehranipoor, M. Nourani, K. Chakrabarty, “Nine-coded compression technique for testing embedded cores in SoCs”, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 13, no. 6, 2005, pp. 1070-1083.