This document discusses VLSI testing and analysis. It defines key terms like defect, fault, and error and describes typical types of defects. It also discusses logical fault models and the role of testing in quality control. Different types of tests like production testing and burn-in testing are described. The testing process, fault simulation, design for testability techniques, and built-in self-test are summarized.
Level Sensitive Scan Design (LSSD) and Boundary Scan (BS), by Praveen Kumar
This presentation covers:
Introduction, design for testability, scan chains, operation, scan structure, test vectors, boundary scan, test logic, operation, the BS cell, states of the TAP controller, and boundary scan instructions.
Scan design is currently the most popular structured DFT approach. It is implemented by connecting selected storage elements in the design into multiple shift registers, called scan chains.
Scannability Rules
The tool performs two basic checks:
1) It ensures that when all defined clocks (including set/reset) are at their off-states, the sequential elements remain stable and inactive. (S1)
2) It ensures that each defined clock can capture data when all other defined clocks are off. (S2)
Spyglass DFT is a comprehensive process for resolving RTL design issues, thereby ensuring high-quality RTL with fewer design bugs.
It improves test quality by diagnosing DFT issues early, at RTL or netlist.
It shortens test implementation time and cost by ensuring the RTL or netlist is scan-compliant.
Introduction to Testing and Verification of VLSI Design, by Usha Mehta
These introductory slides for the course Testing and Verification of VLSI Design cover the basics of the why, where, when, and how of VLSI design testing.
01 Transition Fault Detection Methods, by Swetha (swethamg18)
Fault Models
Stuck-at fault tests cover:
Shorts and opens
Resistive shorts – not covered
Delay fault tests cover:
Resistive opens and coupling faults
Resistive power supply lines
Process variations
Delay Fault Testing
The propagation delay of every path in a circuit must be less than the clock period for correct operation.
Functional tests applied at the operational speed of the circuit are often used for delay faults.
Scan-based stuck-at tests are also often applied at speed.
However, functional and stuck-at testing, even when done at speed, do not specifically target delay faults.
In considering the techniques that may be used for digital circuit testing, two distinct philosophies may be found. The first is functional testing, which applies a series of functional tests and checks for the correct (fault-free) 0 or 1 output response. It does not consider how the circuit is designed, only that it gives the correct output during test. The second is fault modelling, which considers the possible faults that may occur within the circuit and then applies a series of tests specifically formulated to check whether each of these faults is present. One enumerates the faults likely to occur on the wafer during manufacture of the ICs and computes the result on the circuit output(s) with and without each fault present; each of the final series of tests is then designed to show whether a particular fault is present or not.
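The fault-modelling idea above can be sketched in a few lines: pick a gate, assume a fault, and find the input pattern whose fault-free and faulty responses differ. The NAND gate and the stuck-at-1 fault here are purely illustrative, not from the slides.

```python
# Minimal sketch of fault modelling on a hypothetical NAND gate: compare
# the fault-free response with the response when the output is stuck-at-1.

def nand(a, b, output_stuck_at=None):
    y = 1 - (a & b)                     # fault-free NAND behaviour
    return output_stuck_at if output_stuck_at is not None else y

good = nand(1, 1)                       # fault-free response: 0
bad = nand(1, 1, output_stuck_at=1)     # stuck-at-1 response: 1
assert good != bad                      # so (1, 1) is a test for this fault
# every other input pattern produces 1 in both cases and detects nothing
assert all(nand(a, b) == nand(a, b, output_stuck_at=1)
           for a, b in [(0, 0), (0, 1), (1, 0)])
print("fault detected by pattern (1, 1)")
```

This is exactly the second philosophy: the test pattern is chosen because it distinguishes the faulty circuit from the fault-free one, not because it exercises a user-visible function.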
Microcontroller Based Testing of Digital IP-Core, by VLSICS Design
Testing a core-based System on Chip [1] is a challenge for test engineers. Rather than testing the complete SoC at one time with maximum fault coverage, test engineers prefer to test each IP core separately. At-speed testing using external testers is expensive because of gigahertz processors. The purpose of this paper is to develop a cost-efficient and flexible test methodology for testing digital IP cores [2]. The prominent feature of the approach is to use a microcontroller to test the IP core. The novel feature is that there is no need for a test pattern generator or output response analyzer, as the microcontroller performs the function of both. This approach has various advantages, such as at-speed testing, low cost, low area overhead, and greater flexibility, since most of the testing process is based on software.
Qualifying a High-Performance Memory Subsystem for Functional Safety, by Pankaj Singh
Addressing the challenges of safety verification for LPDDR4:
✓ Avoid the traditional approach of starting functional safety after functional verification, which leads to an iterative and expensive development phase:
1. Functional safety needs to be architected in, not added later.
2. Safety analysis must start prior to implementation ("design for safety/verification").
3. Reuse and synergize nominal and functional safety verification.
✓ Fault optimization with formal and other techniques is necessary to overcome the challenges of scaling simulation and analysis.
✓ An integrated push-button fault simulation flow is the need of the hour and saves verification engineers' time.
✓ Analog defect modelling and coverage can be performed based on IEEE P2427.
UVM Based Reusable Verification IP for Wishbone-Compliant SPI Master Core, by VLSICS Design
The System on Chip design industry relies heavily on functional verification to ensure that designs are bug-free. As design engineers come up with increasingly dense chips with much functionality, the functional verification field has advanced to provide modern verification techniques. In this paper, we present verification of a wishbone-compliant Serial Peripheral Interface (SPI) Master core using a SystemVerilog-based standard verification methodology, the Universal Verification Methodology (UVM). The reason for using the UVM factory pattern with parameterized classes is to develop a robust and reusable verification IP. SPI is a full-duplex communication protocol commonly used to interface components in embedded systems. We have verified an SPI Master IP core design that is wishbone-compliant and compatible with the SPI protocol and bus, and furnished the results of our verification. We have used QuestaSim for simulation and analysis of waveforms, and Integrated Metrics Center (Cadence) for coverage analysis. We also propose interesting future directions for this work in developing reliable systems.
6TL-Engineering has developed a truly flexible, modular test system platform concept to help engineering groups develop a reliable, flexible, and efficient test system. This presentation shows the concept.
Advanced Verification Methodology for Complex System on Chip Verification, by VLSICS Design
Verification remains the most significant challenge in getting advanced SoC devices to market. An important challenge to be solved in the semiconductor industry is the growing complexity of SoCs. Industry experts consider that the verification effort is almost 70% to 75% of the overall design effort. A verification language alone cannot increase verification productivity; it must be accompanied by a methodology that facilitates reuse to the maximum extent under different design IP configurations. This advanced reusable testbench development will decrease the time to market for a chip. It helps in code reuse, so that the same code used at the sub-block level can be used at the block level and top level as well, which saves cost for a chip tape-out. This testbench development technique helps achieve faster time to market and reduces the cost of the chip to a large extent.
3. Why Model Faults?
I/O function tests are inadequate for manufacturing (functionality testing versus component and interconnection testing).
Real defects (often mechanical) are too numerous and often not analyzable.
A fault model identifies targets for testing.
A fault model makes analysis possible.
Effectiveness is measurable by experiments.
4. Defect, Fault, and Error
Defect: the unintended difference between the implemented hardware and its intended design. Defects occur either during manufacture or during the use of devices.
Fault: a representation of a defect at the abstracted function level.
Error: a wrong output signal produced by a defective system. An error is caused by a fault or a design error.
Typical Types of Defects
Extra and missing material: primarily caused by dust particles on the mask or wafer surface, or in the processing chemicals.
Oxide breakdown: primarily caused by insufficient oxygen at the interface of silicon (Si) and silicon dioxide (SiO2), chemical contamination, and crystal defects.
Electromigration: primarily caused by the transport of metal atoms when a current flows through the wire.
5. Logical Fault Models
Systematic defects might be caused by process variations, signal integrity, and design integrity issues. Both random and systematic defects can occur on a single die.
Logical faults represent the effects of physical defects on the behavior of the system.
6. Role of Testing
If you design a product, fabricate it, and test it, and it fails the test, then there must be a cause for the failure:
The test was wrong,
the fabrication process was faulty,
the design was incorrect, or
the specification had a problem.
The role of testing is to detect whether something went wrong; the role of diagnosis is to determine exactly what went wrong.
Correctness and effectiveness of testing are most important for quality products.
7. Verification & Test
Verification
Verifies the correctness of the design.
Performed by simulation, hardware emulation, or formal methods.
Performed once, before manufacturing.
Responsible for the quality of the design.
Test
Verifies the correctness of the manufactured hardware.
A two-part process:
Test generation: a software process executed once during design.
Test application: electrical tests applied to the hardware.
Test application is performed on every manufactured device.
Responsible for the quality of the device.
8. Types of Test
Production testing
Every fabricated chip is subjected to production tests.
The test patterns may not cover all possible functions and data patterns, but must have a high fault coverage of the modeled faults.
The main driver is cost, since every device must be tested; test time must be absolutely minimized.
Only a go/no-go decision is made: whether the device-under-test parameters meet the device specifications under normal operating conditions.
Burn-in testing
Ensures the reliability of tested devices by further testing.
Detects the devices with potential failures.
9. Test Process
The testing problem: given a set of faults in the circuit under test (or device under test), how do we obtain a certain (small) number of test patterns which guarantees a certain (high) fault coverage?
1. What is the test process?
2. What faults do we test? (fault modeling)
3. How are test patterns obtained? (test pattern generation)
4. How is test quality (fault coverage) measured? (fault simulation)
5. How are test vectors applied and results evaluated?
10. Testing & Diagnosis
Testing is a process which includes test pattern generation, test pattern application, and output evaluation.
Fault detection tells whether a circuit is fault-free or not.
Fault location provides the location of the detected fault.
Fault diagnosis provides both the location and the type of the detected fault.
11. Fault Simulation
In general, simulating a circuit in the presence of faults is known as fault simulation.
The main goals of fault simulation:
1. Measuring the effectiveness of the test patterns.
2. Guiding the test pattern generator program.
3. Generating fault dictionaries.
Outputs of fault simulation:
Fault coverage: the fraction (or percentage) of modeled faults detected by the test vectors.
The set of undetected faults.
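Both outputs can be produced by a brute-force fault simulator on a toy netlist. The circuit (y = (a AND b) OR c), fault list, and test set below are hypothetical, chosen only to make the coverage computation concrete.

```python
# Fault-simulation sketch: simulate every single stuck-at fault against
# every test vector; a fault is detected when the faulty output differs
# from the fault-free output.

LINES = ["a", "b", "c", "n1", "y"]           # n1 = a AND b, y = n1 OR c

def evaluate(a, b, c, fault=None):
    def line(name, value):                   # apply the stuck-at override
        return fault[1] if fault and fault[0] == name else value
    a, b, c = line("a", a), line("b", b), line("c", c)
    n1 = line("n1", a & b)
    return line("y", n1 | c)

faults = [(l, v) for l in LINES for v in (0, 1)]   # 10 single stuck-at faults
vectors = [(1, 1, 0), (0, 0, 0), (0, 0, 1)]        # candidate test set

detected = set()
for vec in vectors:
    good = evaluate(*vec)                    # fault-free response
    for f in faults:
        if evaluate(*vec, fault=f) != good:
            detected.add(f)                  # faulty response differs

coverage = 100.0 * len(detected) / len(faults)
print(f"fault coverage: {coverage:.0f}% ({len(detected)}/{len(faults)} faults)")
```

The undetected set here is the two input stuck-at-1 faults, which none of the three vectors activates and propagates; adding a vector such as (0, 1, 0) would raise the coverage.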
12. Design for Testability
A fault is testable if there exists a well-specified procedure to expose it, which is implementable at a reasonable cost using current technologies. A circuit is testable with respect to a fault set when each and every fault in this set is testable.
Definition: design for testability (DFT) refers to those design techniques that make test generation and test application cost-effective.
Electronic systems contain three types of components: (a) digital logic, (b) memory blocks, and (c) analog or mixed-signal circuits.
13. WHAT IS DFT?
Design for testability (DFT) refers to those design techniques that make test generation and test application cost-effective.
DFT consists of IC design techniques that add testability features to a hardware product design.
The purpose of manufacturing tests is to validate that the product hardware contains no manufacturing defects that could adversely affect its operation.
14. WHY DESIGN FOR TESTABILITY?
Testability is a design characteristic that influences various costs associated with testing. It allows:
device status to be determined,
isolation of faults,
reduced test time and cost.
CONTROLLABILITY
The ability to establish a specific signal value at each node by setting the circuit's inputs.
Circuits that are typically difficult to control: decoders, circuits with feedback, oscillators, clock generators, ...
15. OBSERVABILITY
The ability to determine the signal value at any node in a circuit by controlling the circuit's inputs and observing its outputs.
GOAL OF DESIGN FOR TESTABILITY (DFT)
Improve:
1. Controllability
2. Observability
3. Predictability
17. AD-HOC DFT METHODS
Good design practices learned through experience are used as guidelines:
Avoid asynchronous (unclocked) feedback.
Make flip-flops initializable.
Avoid redundant gates.
Avoid large fan-in gates.
Provide test control for difficult-to-control signals.
Avoid gated clocks.
Design reviews are conducted by experts or design auditing tools.
Disadvantages of ad-hoc DFT methods:
Experts and tools are not always available.
Test generation is often manual, with no guarantee of high fault coverage.
18. SCAN DESIGN
The circuit is designed using pre-specified design rules.
Test structure (hardware) is added to the verified design:
Add a test control (TC) primary input.
Replace flip-flops by scan flip-flops (SFFs) and connect them to form one or more shift registers in the test mode.
Make the input/output of each scan shift register controllable/observable from PI/PO.
Use only clocked D-type flip-flops for all state variables.
At least one PI pin must be available for test; more pins, if available, can be used.
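The shift/capture/shift behaviour of a scan chain can be sketched abstractly. This toy model (hypothetical, with an invert-everything stand-in for the combinational logic) shows the three phases: shift a pattern in with scan enable asserted, apply one functional capture clock, and shift the response out.

```python
class ScanChain:
    """Toy model of flip-flops connected as a shift register in test mode."""
    def __init__(self, length):
        self.ff = [0] * length                   # scan flip-flop contents

    def shift_in(self, bits):
        # scan_enable = 1: one bit enters per clock; first bit ends deepest
        for b in bits:
            self.ff = [b] + self.ff[:-1]

    def capture(self, comb_logic):
        # scan_enable = 0: one clock captures the combinational response
        self.ff = comb_logic(self.ff)

    def shift_out(self):
        out, self.ff = self.ff[:], [0] * len(self.ff)
        return out

chain = ScanChain(4)
chain.shift_in([1, 0, 1, 1])                     # load the test pattern
chain.capture(lambda s: [1 - b for b in s])      # toy logic: invert each bit
response = chain.shift_out()                     # unload for comparison
print(response)
```

In real scan designs the shift-out of one response overlaps with the shift-in of the next pattern, so test time is dominated by the chain length.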
19. BUILT-IN SELF-TEST
Advances in microelectronics technology have introduced a new paradigm in IC design: System-on-Chip (SoC).
Many systems are nowadays designed by embedding predesigned and pre-verified complex functional blocks (cores) into one single die. Such a design style allows designers to reuse previous designs and leads to shorter time-to-market and reduced cost.
[Figure: example SoC with embedded DRAM, interface control, complex core, UDL, legacy core, DSP core, self-test control, and an 1149.1 interface]
SoC structure breakdown:
10% UDL
75% memory
50% in-house cores
20. BIST TECHNIQUES
BIST techniques are classified as:
On-line BIST (includes concurrent and non-concurrent techniques): testing occurs during normal functional operation.
Concurrent on-line BIST: testing occurs simultaneously with the normal operation mode; usually coding techniques or duplication-and-comparison are used.
Non-concurrent on-line BIST: testing is carried out while the system is in an idle state, often by executing diagnostic software or firmware routines.
Off-line BIST (includes functional and structural approaches): the system is not in its normal working mode; usually on-chip test generators and output response analysers, or micro-diagnostic routines, are used.
Functional off-line BIST is based on a functional description of the component under test (CUT) and uses functional high-level fault models.
Structural off-line BIST is based on the structure of the CUT and uses structural fault models (e.g. stuck-at faults).
21. GENERAL ARCHITECTURE OF BIST
BIST components:
Test pattern generator (TPG)
Test response analyzer (TRA)
The TPG and TRA are usually implemented as linear feedback shift registers (LFSRs).
Two widespread schemes: test-per-scan and test-per-clock.
22. BIST BENEFITS
Reduced testing and maintenance cost
Lower test generation cost
Reduced storage / maintenance of test patterns
Simpler and less expensive ATE
Can test many units in parallel
Shorter test application times
Can test at functional system speed
23. Introduction to Built-In Self-Test
Built-in self-test (BIST) is the capability of a circuit (chip/board/system) to test itself.
Advantages of BIST:
Test patterns are generated on-chip: increased controllability.
Tests can be on-line (concurrent) or off-line.
Tests can run at circuit speed: more realistic, shorter test time, easier delay testing.
External test equipment is greatly simplified, or even totally eliminated.
Easily adapts to engineering changes.
24. Benefits of Testing
Quality and economy are the two major benefits of testing. The two attributes are strongly interdependent and cannot be defined without each other. Quality means satisfying the user's needs at a minimum cost, and the purpose of testing is to weed out all bad products before they reach the user. The number of bad products heavily affects the price of good products. A profound understanding of the principles of manufacturing and test is therefore essential for an engineer to design a quality product.
25. DRAWBACKS OF BIST
Additional pins and silicon area needed
Decreased reliability due to increased silicon area
Performance impact due to additional circuitry
Additional design time and cost
26. JTAG and BOUNDARY SCAN
An outline of a typical test procedure using boundary scan is as follows:
– A boundary-scan test instruction is shifted into the IR through the TDI.
– The instruction is decoded by the decoder associated with the IR to generate the required control signals so as to properly configure the test logic.
– A test pattern is shifted into the selected data register through the TDI and then applied to the logic to be tested.
– The test response is captured into some data register.
– The captured response is shifted out through the TDO for observation and, at the same time, a new test pattern can be scanned in through the TDI.
28. How does it work?
The top-level schematic of the test logic defined by IEEE Std 1149.1 includes three key blocks:
The TAP Controller: responds to the control sequences supplied through the test access port (TAP) and generates the clock and control signals required for correct operation of the other circuit blocks.
The Instruction Register: this shift-register-based circuit is serially loaded with the instruction that selects an operation to be performed.
The Data Registers: a bank of shift-register-based circuits. The stimuli required by an operation are serially loaded into the data registers selected by the current instruction. Following execution of the operation, results can be shifted out for examination.
29. The function of each TAP pin is as follows:
TCK: the JTAG test clock. It sequences the TAP controller as well as all of the JTAG registers.
TMS: the mode input signal to the TAP controller. The state of TMS at the rising edge of TCK determines the sequence of states for the TAP controller.
TDI: the serial data input to all JTAG instruction and data registers. TDI is sampled into the JTAG registers on the rising edge of TCK.
TDO: the serial data output for all JTAG instruction and data registers. TDO changes state on the falling edge of TCK and is only active during the shifting of data through the device. This pin is three-stated at all other times.
30. Test Access Port Controller
The JTAG Test Access Port (TAP) contains four pins that drive the circuit blocks and control the operations specified. The TAP facilitates the serial loading and unloading of instructions and data.
The four pins of the TAP are:
TMS – Test Mode Select
TCK – Test Clock
TDI – Test Data Input
TDO – Test Data Output
31. The JTAG TAP Controller
1. The TAP controller is a 16-state finite state machine that controls the scanning of data into the various registers of the JTAG architecture.
2. The state of the TMS pin at the rising edge of TCK determines the sequence of state transitions.
3. There are two state-transition paths for scanning the signal at TDI into the device: one for shifting an instruction into the instruction register, and one for shifting data into the active data register as determined by the current instruction.
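The 16-state machine is small enough to write out as a transition table, following the state diagram in IEEE Std 1149.1 (each entry maps the current state to its next state for TMS = 0 and TMS = 1):

```python
# IEEE 1149.1 TAP controller as a lookup table: NEXT[state][tms].

NEXT = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",    "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",      "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",      "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",      "Update-DR"),
    "Pause-DR":         ("Pause-DR",      "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",      "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",    "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",      "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",      "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",      "Update-IR"),
    "Pause-IR":         ("Pause-IR",      "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",      "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def walk(tms_bits, state="Test-Logic-Reset"):
    """Apply a TMS sequence (sampled on rising TCK) and return the state."""
    for tms in tms_bits:
        state = NEXT[state][tms]
    return state

# Five TCK cycles with TMS = 1 reset the controller from any state
assert all(walk([1] * 5, s) == "Test-Logic-Reset" for s in NEXT)
# TMS = 0,1,0,0 moves from reset through Run-Test/Idle into Shift-DR
print(walk([0, 1, 0, 0]))
```

The table makes the two parallel scan paths visible: the DR column (Select-DR-Scan through Update-DR) and its IR mirror, selected by the extra TMS = 1 at Select-DR-Scan.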
33. Data Registers
The Device ID register (IDR) reads out an identification number which is hardwired into the chip.
The Bypass register (BR) is a 1-cell pass-through register which connects the TDI to the TDO with a 1-clock delay, giving test equipment easy access to another device in the test chain on the same board.
The Boundary Scan register (BSR) intercepts all the signals between the core logic and the pins.
34. TEST PROCESS
The standard test process for verifying a device or circuit board using boundary-scan technology is as follows:
The tester applies test or diagnostic data on the input pins of the device.
The boundary-scan cells capture the data in the boundary-scan registers monitoring the input pins.
Data is scanned out of the device via the TDO pin for verification.
Data can then be scanned into the device via the TDI pin.
The tester can then verify data on the output pins of the device.
35. AUTOMATIC TEST PATTERN GENERATION
Automatic test equipment (ATE) is computer-controlled equipment used in the production testing of ICs (both at the wafer level and in packaged devices) and PCBs. Test patterns are applied to the CUT and the output responses are compared to stored responses for the fault-free circuit.
Generating effective test patterns efficiently for a digital circuit is thus the goal of any automatic test pattern generation (ATPG) system. The effectiveness of ATPG is measured by the number of modeled defects, or fault models, detectable, and by the number of generated patterns.
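For a small combinational circuit, ATPG can be sketched as exhaustive search: enumerate input vectors until one makes the faulty response differ from the fault-free response. The circuit here (y = (a XOR b) AND c) is hypothetical; real ATPG engines use structural algorithms such as the D-algorithm or PODEM instead of enumeration.

```python
# Brute-force ATPG sketch for a hypothetical circuit y = (a XOR b) AND c.
from itertools import product

def evaluate(a, b, c, fault=None):
    """Evaluate the circuit, optionally forcing one line to a stuck value."""
    def line(name, value):
        return fault[1] if fault and fault[0] == name else value
    a, b, c = line("a", a), line("b", b), line("c", c)
    n1 = line("n1", a ^ b)
    return line("y", n1 & c)

def atpg(fault):
    """Return the first input vector that detects 'fault', or None."""
    for vec in product((0, 1), repeat=3):
        if evaluate(*vec, fault=fault) != evaluate(*vec):
            return vec
    return None  # no detecting vector exists: the fault is redundant

vec = atpg(("n1", 0))   # find a test for internal line n1 stuck-at-0
print(vec)
```

The returned vector both activates the fault (sets n1 to 1 in the fault-free circuit) and propagates the difference to the output (sets c to 1), which is exactly what structural ATPG algorithms do without enumerating the whole input space.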
36. Sequential ATPG
Sequential-circuit ATPG searches through the space of all possible test vector sequences for a sequence of vectors that detects a particular fault. Even a simple stuck-at fault may require a sequence of vectors for detection in a sequential circuit. Due to the presence of memory elements, the controllability and observability of internal signals in a sequential circuit are in general much more difficult to achieve than in a combinational logic circuit.