In the current scenario of IP-core-based SoCs, testing requires a communication link between the ATPG/tester and the circuit under test (CUT) over which the test data travels before it is applied to the actual DUT. If there is a problem with this link, a bit of the test data may flip. Compared to the original test data, a single bit flip can change a codeword, so the decompressed data may deviate from the original in a large number of bits. This deviation can severely degrade the test quality and the overall fault coverage, which in turn affects yield. Error resilience is the capability of the test data to withstand such bit flips. In this paper, earlier error-resilience methods are compared and a Hamming-code-based technique is proposed to improve the error-resilience capacity of compressed test data. The method is applied to Huffman-coded compressed test data of the widely used ISCAS benchmark circuits. Fault-coverage measurement results show the effectiveness of the proposed method. The basic goal here is to survey the effect of bit flips on fault coverage and to prepare a platform for further development in this avenue.
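The single-bit correction capability that a Hamming-code-based technique relies on can be sketched in a few lines. This is a minimal Hamming(7,4) illustration; the paper's actual encoder parameters and its integration with the compressed test stream are not reproduced here.

```python
# Minimal Hamming(7,4) sketch: any single bit flip in a 7-bit codeword
# is located by the syndrome and corrected (illustrative only).

def hamming74_encode(d):
    """4 data bits -> 7-bit codeword laid out as [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """7-bit received word -> 4 data bits, correcting any single bit flip."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flip, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
for i in range(7):                    # flip each codeword bit in turn
    received = codeword[:]
    received[i] ^= 1
    assert hamming74_decode(received) == data   # every single flip is corrected
```

Because the syndrome directly encodes the 1-based position of the flipped bit, correction is a single XOR, which keeps the decoder hardware-friendly.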
2. 83 International Journal for Modern Trends in Science and Technology
Radharapu Ravivarma and M.Sanjay : Improvement in Error Resilience in BIST using Hamming Code
Extended FDR (EFDR) [5], VIHC coding [8], etc. A
comparison of these techniques is presented in [9].
Among the various test data compression methods,
such as broadcast-scan-based methods, linear-decompression-based
methods and code-based methods, the code-based
compression methods are found to be the most suitable
for IP cores, where the internal architecture of the
core is hidden from the system integrator [9].
When the compressed data is transferred from the
ATE to the SoC, any distortion or noise in the
communication link may cause a test data bit to flip.
Such a bit flip may not be serious for uncompressed
data, but it has serious consequences for compressed
data, because it may change one or many codewords in
the compressed data, and the regenerated
(decompressed) data will correspondingly differ from
the original data. When such deviated data is applied
to the DUT for testing, it will certainly not give the
intended test quality. The fault coverage may be
affected, and hence the yield will also be affected.
The capability of test data to resist the effects of
such bit flips is called error resilience. Fault
endurance, reliability and security are the major
concerns for the resilience of test data. Error
resilience can therefore be defined as the ability of
a design under test (DUT) to execute its proper
function in the presence of bit flip errors, bit
deletions, burst errors and missing fragments, and to
recover from any service degradation. These
properties are essential in testing, where the
validity of the data depends on the reliable, secure
and correct functioning of the device under test.
Considered systems include, but are not limited to,
infrastructures, computer networks (including ad hoc
and mesh networks), web-based systems,
service-oriented architectures, embedded systems,
manufacturing systems, control systems and more.
Much work has already been done on the evaluation,
analysis and enhancement of the error resilience of
test data, but a lot remains to be done. The
combination of performance and dependability,
performability, has been widely studied, but aspects
such as the improvement of error resilience are still
ongoing work. As systems and their characteristics
change day by day, there is a strong need to expand
the work in this area. This paper addresses methods
and tools to describe, measure, evaluate, benchmark
and improve the resilience of test data. Here, we
have calculated the effect of bit flips on the fault
coverage of widely cited ISCAS benchmark circuits for
both cases: 1. when normal (uncompressed) data is
used through a serial link, and 2. when compressed
data is used; the comparisons are then analyzed.
Further, we have proposed a Hamming code based
approach to improve the error resilience, and its
effectiveness is analyzed by calculating the bit
overhead and the improvement in fault coverage. The
basic goal here is to prepare a platform for further
development to improve the error resilience
capability when compressed test data is used for IP
core based SoCs.
II. THE SIGNIFICANCE OF BIT FLIP IN TEST DATA
A. Why Bit Flip Occurs?
Bit flips can occur in many ways during the
transmission of test data from the ATE to the DUT.
Noise can affect the interfacing line between the ATE
and the design under test, which can raise errors in
the form of bit flips. Moreover, bit flips can also
be generated when test inputs are generated as a pin
waveform. Bit flips can therefore be reduced if the
ATE test program and the DUT are fully debugged.
However, as testing is done at very high speed,
errors in a few bits (bit flips) must be considered
unavoidable in ATE environments [10].
B. Effect of Bit Flip on Fault Coverage
During the development of the test program, bit
flips can produce diverse effects over the entire
testing environment. Throughout the manufacturing
test, a bit flip affects the test data in such a way
that the coverage of the resulting inaccurate test
set is reduced. As a result, the cost of debugging
and development increases, while productivity
decreases.
C. Experimental Set-up to Measure the Effect of Bit Flip on
Fault Coverage
Here, the full scan versions of the ISCAS'89 circuits
are used as a platform to evaluate the different bit
flip effects and associated fundamentals. The
experiments were performed on test data generated
by the MINTEST tool. In the given test cubes, all
unspecified bits are filled with the MT-fill technique and
then re-ordered using the Artificial Intelligence (AI)
based algorithm [11][12]. Generally, the occurrence of
bit flips is considered in two ways: i) a bit flip
probability, and ii) a bit flip count. In this paper, we
focus mainly on the bit flip count. The bit flip
locations are uniformly spread over the entire test
vector, so the bit flips are uniformly distributed over
the entire bit stream and can be evaluated under all
conditions. Here, for simplicity, bit flip counts of 1, 2, 5
and 10 are used for the experimental framework.
Each bit flip count is repeated 25 times and the
average fault coverage is calculated. The Huffman
compression technique is used for data
compression with a symbol size of 4 bits [13].
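The injection scheme just described (uniformly distributed flip locations, each flip count repeated 25 times) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual tooling, and `test_set` is a placeholder bit stream, not MINTEST data:

```python
import random

def inject_bit_flips(bitstream, flip_count, rng):
    """Return a copy of the bit stream with `flip_count` bits flipped
    at uniformly distributed, distinct positions."""
    flipped = list(bitstream)
    for pos in rng.sample(range(len(flipped)), flip_count):
        flipped[pos] = '1' if flipped[pos] == '0' else '0'
    return ''.join(flipped)

# Repeat each flip count 25 times, as in the experimental set-up above.
rng = random.Random(0)  # seeded for reproducibility
test_set = '0110100111010010' * 8  # placeholder test data
for flip_count in (1, 2, 5, 10):
    corrupted = [inject_bit_flips(test_set, flip_count, rng) for _ in range(25)]
    # Each corrupted stream would then be fault-simulated to get coverage.
```

The fault simulation step itself is not shown; only the corruption model is.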
D. Effect of Bit Flip on Uncompressed Data
As we know, bits in compressed test data carry
more information than the bits of the original
stream, so bit flips are especially important for test
data compression circuits. To understand the minimum
effect of a bit flip, note that the fault coverage
loss with the original test data due to bit flips
represents a lower bound on the error resilience. For
a proper understanding, bit flips are uniformly
injected into the test data and the corresponding
fault coverage is measured. Table I shows the fault
coverage due to bit flips in uncompressed test data.
The average coverage loss is 0.1 to 0.32% for 1-bit,
2-bit, 5-bit and 10-bit flips.
III. EFFECT OF BIT FLIP ON COMPRESSED
TEST DATA
Assume that there are V test vectors which are
the source of the combinational and sequential test
data. In the compression technique discussed
above, the test vectors are partitioned into symbols
of a given size (si, 1 ≤ i ≤ S), where S is the total
number of distinct symbols. During compression,
each symbol si is mapped to its associated
codeword. The compressed test data can then be written
as an ordering π of the codewords (Cπ(1),
Cπ(2), ..., Cπ(N)), where N is the total number of
codewords in the test set. The different
effects of bit flips are described below.
A. Propagation and Shift Effect
As discussed, a bit flip can change the
compressed sequence in a complicated manner,
which depends on the number of symbols, the
compression ratio, the length of the symbols or codewords,
and the ordering technique used. In general, a bit flip can
affect a codeword in two ways:
1. The affected codeword is partitioned into (K ≥ 1)
valid codewords.
2. The affected codeword is partitioned into (K ≥ 0)
valid codewords and a so-called dangling suffix,
which is not a valid codeword.
Example: Assume that the codewords for the Huffman
compression are given as 0, 10, 110, 111.
Now, if there is a bit flip in the 1st codeword as given
in the example, it becomes '1', which is not a valid
codeword. If there is a bit flip in the 1st bit of the 2nd
codeword, it becomes "00", which gives two
different codewords '0'-'0'. But if there is a bit flip in the
2nd bit of the 2nd codeword, it becomes "11",
which is not a valid codeword (only a dangling suffix).
As shown in case I, if there is a bit flip in the
codeword Cπ(i), then codewords Cπ(i+1) to Cπ(N) will
remain unaffected. The bit flip will generate K
additional codewords between Cπ(i-1) and Cπ(i+1). As a result,
the correct sequence of codewords resumes from Cπ(i+1),
shifted by K additional codewords, as shown in Fig. 3.
Fig. 3. Conceptual view of the shift effect due to a bit flip
Now, as shown in case II, if there is a bit flip in the
codeword Cπ(i) and the flip creates a dangling suffix,
then that dangling suffix will form a valid codeword
with bits of the remaining codewords starting from
Cπ(i+1). This phenomenon is called the propagation of the
bit flip to codeword Cπ(i+1). If codeword Cπ(i+1)
is also left with a dangling suffix, then that dangling suffix
will make a codeword with bits of codeword Cπ(i+2). If
Cπ(i+j) does not have a dangling suffix, then codewords
Cπ(i+j+1) to Cπ(N) will be unaffected, as shown
in Fig. 4.
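The two cases above can be reproduced directly with the example codeword set {0, 10, 110, 111}. The following minimal sketch (this greedy decoder is illustrative, not the authors' decompressor) flips the first bit of a three-codeword stream:

```python
def huffman_decode(bits, codebook=('0', '10', '110', '111')):
    """Greedily split a bit string into codewords from a prefix-free
    codebook; returns the codeword list plus any trailing dangling suffix."""
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in codebook:
            out.append(buf)
            buf = ''
    return out, buf  # buf is the dangling suffix ('' if none)

stream = '0' + '10' + '110'      # three codewords: 0, 10, 110
flipped = '1' + stream[1:]       # flip the 1st bit: '110110'
print(huffman_decode(stream))    # (['0', '10', '110'], '')
print(huffman_decode(flipped))   # (['110', '110'], '')
```

A single flipped bit turned three codewords into two different ones: the original codewords are lost and the decoder resynchronizes only later, exactly the shift/propagation behavior described above.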
Table fragment (benchmark fault coverage, %):
S820  99.14  98.92  98.92  98.92
S832  97.17  97.17  97.17  97.17
S838  99.67  99.67  99.67  99.67
The above two cases signify the relationship
between the shift and propagation effects due to a bit flip.
In the 1st case, codeword Cπ(i) is lost and forms
other valid codewords, which shifts the remaining
codewords by at least one codeword.
In the 2nd case, due to the bit flip, the codewords up to
Cπ(i+j) are lost, and each dangling suffix makes a codeword
using bits of the next codeword, so the remaining
codewords again shift by at least one codeword.
Propagation and shift effects can thus be
differentiated in terms of the number of lost
codewords. As can be seen in the diagram, shift is
the special case of propagation where only one
codeword is lost; propagation always comes with
shift, and the number of lost codewords depends on the
location of the bit flip.
B. Synchronization Loss
After decompression, the sequence of bits should
appear in the same order as it would in the case of
no bit flipping. In the case of bit flipping, the
decompressed sequence is disturbed by the corrupted
bits. If these corrupted bit sequences differ from the
original ones, then there is a loss of synchronization,
and hence decoding of the received code becomes
difficult.
C. Bit Flip in the Case of Difference Vectors
In the compression technique described above, the
data is first re-ordered and then differentiation is
performed to maximize the compression. Assume that
D1, D2, D3, ..., Dn is the difference vector
sequence, which can be defined as
D1 = V1
D2 = V2 XOR V1
Dn = Vn XOR Vn-1
At the decompression side, the original vectors are
retrieved in the following way:
V1 = D1
V2 = D2 XOR V1
Vn = Dn XOR Dn-1 XOR ... XOR D2 XOR D1
Now, in this process, if a bit flip occurs in difference
vector Di, then it will affect all subsequent vectors
during decompression, and all vectors from i to n
will be affected.
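The difference-vector construction and the way a single flip corrupts every subsequent vector can be sketched as follows (the 4-bit vectors are made-up illustrations, not benchmark data):

```python
def xor_bits(a, b):
    """Bitwise XOR of two equal-length bit strings."""
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))

def to_difference_vectors(vectors):
    """D1 = V1, Dn = Vn XOR Vn-1."""
    return [vectors[0]] + [xor_bits(v, p) for p, v in zip(vectors, vectors[1:])]

def from_difference_vectors(diffs):
    """V1 = D1, Vn = Dn XOR Vn-1 (i.e. the cumulative XOR of D1..Dn)."""
    vectors = [diffs[0]]
    for d in diffs[1:]:
        vectors.append(xor_bits(d, vectors[-1]))
    return vectors

V = ['1010', '1110', '0110']            # hypothetical test vectors
D = to_difference_vectors(V)
assert from_difference_vectors(D) == V  # lossless round trip

# Flip one bit in D2: V2 and every later vector come back corrupted,
# because each recovered vector feeds into the next XOR.
D_bad = [D[0], '1' + D[1][1:], D[2]]
```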
D. Effect of Bit Flip on Fault Coverage of
Compressed Data
Assume that the test data is compressed with a
compression ratio of 30%; that means there is a 30%
reduction in data volume, so each 70 bits of the
compressed test set expand into 100 bits of the
initial uncompressed test set. The effects of flipped
bits are therefore scattered over the entire test
data, which can affect multiple test vectors and
degrade the fault coverage. Table II shows the
average fault coverage with bit flips in the
Huffman-compressed ISCAS circuit test data.
IV. PROPOSED METHOD TO IMPROVE ERROR
RESILIENCE IN CASE OF BIT FLIP
In the previous section, the importance of bit flips
and their negative impact on fault coverage was
described. As a solution, a new technique is
developed to improve the fault coverage, and hence
the error resilience, of compressed test data. A
parity bit based error resilience method is
described in [14]. Here, in this paper, a more advanced
method for error resilience is proposed, based on the
Hamming code.
A. Hamming Code based Technique
In communication, the Hamming code is a linear
error-correcting code which detects and corrects
errors. At most, it can detect two bit errors and
correct one bit error. If the Hamming distance
between the received and transmitted bit patterns is
not more than one, the transmission can still be
decoded reliably. The Hamming code is a binary
linear code: for each integer m ≥ 2, there are m
parity bits and 2^m - m - 1 message bits. The Hamming
code is the classic example of a perfect code, which
means that it matches the theoretical upper bound on
the number of distinct codewords for the given code
length and error-correcting strength. If
error-correcting bits are added along with the
message, and these bits are arranged so that
different error positions produce different incorrect
parity results, then the bad bit can be identified.
For example, in the (7, 4) Hamming code, an error may
occur in any of the 7 single-bit positions, so the 3
parity bits not only signify an error in the data but
also find the position of the error.
B. General Algorithm
1. Number the bits in binary form, i.e., for a
3-bit position code, bit 1 = 001, bit 2 = 010, and so on.
2. Bit positions which are powers of 2 are
considered parity bits, e.g., 2^0 = 1st bit as a
parity bit.
3. Consider all remaining bits as data bits.
4. Parity bits are calculated from different
combinations of data bits, as per the binary
structure of the bit position:
Parity bit 1 covers the bit positions which
have a '1' as their least significant bit,
i.e., bits 1, 3, 5, 7, ...
Parity bit 2 covers the bit positions which
have a '1' as their second least significant bit,
i.e., bits 2, 3, 6, 7, ...
5. Calculate the parity from the message bits. For
simplicity, the (7, 4) Hamming code is taken as
an example, where, out of 7 bits, 4 bits are
message bits (D1, D2, D3, D4) and 3 bits are
parity bits (C1, C2, C3). Assume that the
message sequence given here is "0111".
C1 = D1 xor D2 xor D4 = 0 xor 1 xor 1 = 0
C2 = D1 xor D3 xor D4 = 0 xor 1 xor 1 = 0
C3 = D2 xor D3 xor D4 = 1 xor 1 xor 1 = 1
6. After calculating the parity, again form the
full Hamming code sequence. In our
example it becomes "0001111".
7. Now suppose that at the receiver side, instead of
"0001111", "0011111" is received. That
indicates bit flipping at some location.
8. At the receiver side, decode the code in the
following manner:
A1 = C1 xor D1 xor D2 xor D4 = 0 xor 1 xor 1 xor 1 = 1
A2 = C2 xor D1 xor D3 xor D4 = 0 xor 1 xor 1 xor 1 = 1
A3 = C3 xor D2 xor D3 xor D4 = 1 xor 1 xor 1 xor 1 = 0
where A1, A2 and A3 are the answer (syndrome) bits.
9. If A1, A2 and A3 are all zero, then there is no error in
the message. But if any bit is '1', then there is an
error.
10. Arrange the A3, A2 and A1 bits in reverse order:
A3A2A1 = 011. That indicates there is an error at
the 3rd position in the final Hamming sequence. So
now flip that bit to correct the error.
Now apply this algorithm to the compressed test data:
divide the whole data set into blocks of 4 bits; for
each 4 message bits, calculate the parity bits, and
detect and correct any error.
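Steps 1-10, applied block-wise to 4-bit chunks of the compressed stream, can be sketched as follows. This is an illustrative implementation of the textbook (7, 4) code using the C1 C2 D1 C3 D2 D3 D4 bit ordering from the worked example, not the authors' exact hardware:

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into 7 bits ordered
    [C1, C2, D1, C3, D2, D3, D4], as in steps 5-6 above."""
    d1, d2, d3, d4 = d
    c1 = d1 ^ d2 ^ d4
    c2 = d1 ^ d3 ^ d4
    c3 = d2 ^ d3 ^ d4
    return [c1, c2, d1, c3, d2, d3, d4]

def hamming74_decode(r):
    """Correct up to one flipped bit and return the 4 data bits."""
    r = list(r)
    a1 = r[0] ^ r[2] ^ r[4] ^ r[6]   # C1 ^ D1 ^ D2 ^ D4
    a2 = r[1] ^ r[2] ^ r[5] ^ r[6]   # C2 ^ D1 ^ D3 ^ D4
    a3 = r[3] ^ r[4] ^ r[5] ^ r[6]   # C3 ^ D2 ^ D3 ^ D4
    syndrome = (a3 << 2) | (a2 << 1) | a1  # A3 A2 A1, read in reverse order
    if syndrome:                      # non-zero => error at that 1-based position
        r[syndrome - 1] ^= 1
    return [r[2], r[4], r[5], r[6]]

codeword = hamming74_encode([0, 1, 1, 1])   # -> [0, 0, 0, 1, 1, 1, 1], i.e. "0001111"
received = list(codeword)
received[2] ^= 1                             # flip the 3rd bit, as in step 7
assert hamming74_decode(received) == [0, 1, 1, 1]
```

The syndrome value 3 (binary 011) pinpoints the flipped 3rd bit, matching the hand calculation in step 10.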
V. EXPERIMENTAL RESULTS
So the main advantage of Hamming code is that
it removes bit lip error, so no loss of fault coverage
as shown in table III and hence it improves error
resilience. But the biggest disadvantage of this
method is bits overhead as shown in table IV
Table IV. Bit Overhead with the Hamming Technique

Benchmark   Bits (Huffman   Total Bits with   Bit
Circuit     Compressed)     Hamming Code      Overhead
S27          18               35               17
S298        406              714              308
S344        390              680              290
S349        420              735              315
S382        714             1253              539
S386        300              525              225
S400        798             1400              602
S420        928             1624              696
S444        735             1288              553
S510        402              707              305
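Since each 4-bit block expands to 7 bits, the totals in Table IV are largely consistent with a ceil(bits/4) x 7 expansion. A small sketch of this bookkeeping (illustrative, not the authors' script):

```python
import math

def hamming_overhead(data_bits):
    """Each 4-bit block grows to 7 bits under (7, 4) Hamming coding,
    so the overhead is 3 bits per (possibly padded) block."""
    blocks = math.ceil(data_bits / 4)
    total = 7 * blocks
    return total, total - data_bits

print(hamming_overhead(18))   # S27:  (35, 17), matching Table IV
print(hamming_overhead(406))  # S298: (714, 308)
```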
VI. CONCLUSION
Through extensive simulation on the different
benchmark circuit, it is shown that the bit lip in
compressed test data impacts a lot on fault
coverage as compared to uncompressed test data.
But with the proposed Hamming code based
technique, impact of bit flipping on compressed
test data can be significantly nullify. So the
proposed technique can improve the error
resilience but at the cost of more bits overhead.
REFERENCES
[1] Y. Zorian, "Test requirements for embedded core-based systems and IEEE P1500," in Proc. IEEE Int. Test Conf., 1997, pp. 191-199.
[2] N. A. Touba, "Survey of Test Vector Compression Techniques," IEEE Design and Test of Computers, vol. 23, no. 4, pp. 294-303, 2006.
[3] D. Ha, M. Ishida, and T. Yamaguchi, "Compact: A hybrid method for compressing test data," in Proc. IEEE VLSI Test Symposium, 1998, pp. 62-69.
[4] U. Mehta, N. Devashrayee, and K. S. Dasgupta, "Combining Unspecified Test Data Bit Filling Methods and Run Length Based Codes to Estimate Compression, Power and Area Overhead," in Proc. IEEE Annual Symposium on VLSI, 2010, pp. 448-449.
[5] U. Mehta, N. Devashrayee, and K. S. Dasgupta, "Run-Length-Based Test Data Compression Techniques: How Far from Entropy and Power Bounds? - A Survey," VLSI Design, Hindawi Publishing Corporation, 2010.
[6] A. Chandra and K. Chakrabarty, "Frequency-Directed Run-Length (FDR) codes with application to system-on-a-chip test data compression," in Proc. IEEE VLSI Test Symp., 2001, pp. 42-47.
[7] A. Chandra and K. Chakrabarty, "System-on-a-chip test-data compression and decompression architectures based on Golomb codes," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 20, no. 3, pp. 355-368, Mar. 2001.
[8] P. Gonciari, B. Al-Hashimi, and N. Nicolici, "Variable-length input Huffman coding for system-on-a-chip test," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 22, no. 6, pp. 783-796, Jun. 2003.
[9] U. Mehta, N. Devashrayee, and K. S. Dasgupta, "Weighted Transition Based Reordering, Columnwise Bit Filling, and Difference Vector: A Power-Aware Test Data Compression Method," VLSI Design, Hindawi Publishing Corporation, 2011.
[10] B. West, private communication, Credence Corp.
[11] H. Parmar and U. Mehta, "A statistical test data compression technique with adaptive bit filling and AI based reordering: optimization for compression and scan power," International Journal of VLSI and Signal Processing Applications, vol. 1, issue 2, May 2011, pp. 15-24.
[12] H. Parmar, U. Mehta, K. Dasgupta, and N. Devashrayee, "Test Data Compression Technique for IP Core based SoC using Artificial Intelligence," ASDAT, Jan. 2011.
[13] M. Sharma, "Compression using Huffman Coding," IJCSNS International Journal of Computer Science and Network Security, vol. 10, no. 5, May 2010.
[14] U. Mehta, R. Trivedi, and N. Thakkar, "Error Resilience in Case of Test Data Compression Techniques for IP Core Based SoC," International Journal of Advanced Research in Engineering and Technology, vol. 5, issue 1, Jan. 2014, pp. 24-35.