The document provides an overview of the ASIC design and verification process. It discusses the key stages of ASIC design, including specification, high-level design, micro design, RTL coding, simulation, synthesis, place and route, and post-silicon validation. It then describes the importance of verification, including why around 70% of design time and cost is spent on verification. The verification process uses testbenches, directed and constrained-random testing, and functional coverage to verify that the design matches its specification. Verification of more complex designs such as FPGAs and SOCs is also discussed.
3. SPECIFICATION:
• The specification, the first step of an IC design, is a set of requirements which
must be met and hold true across all possible operating conditions of
process, voltage and temperature, as well as across all mismatches for a
particular circuit.
• For digital circuit design, a specification is usually one document that can be
used by the circuit designer to implement the circuit in a chip.
• The process of specification development is a step-by-step refinement in which
the technical requirements of the chip design are clarified.
• How much detail a spec contains depends on the particular situation, but it at
least covers all the information needed for the design in an
unambiguous manner.
4. HIGH LEVEL DESIGN:
• It means splitting the design into blocks based on their function.
• It also tells how the different blocks communicate within the design.
• The ports required to communicate with the external world are also defined at
this stage.
5. MICRO DESIGN/LOW LEVEL DESIGN:
• In this stage the implementation details of each block are decided.
• It specifies the state machines, counters and internal registers needed
to design the block, as in the sketch below.
(Block diagram: example sub-blocks such as an FSM, a CPU and a MUX)
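As a hedged illustration of the level of detail fixed at this stage (the module, signal and state names are hypothetical), a micro design for a simple block might pin down a two-state FSM and a 4-bit internal counter like this:

module blink_ctrl (
  input  logic clk, rst_n,
  output logic led
);
  typedef enum logic {IDLE, RUN} state_t;  // states chosen at micro-design time
  state_t     state;
  logic [3:0] count;                       // internal register decided at this stage

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      state <= IDLE;
      count <= '0;
      led   <= 1'b0;
    end else begin
      case (state)
        IDLE: if (count == 4'd15) state <= RUN;      // wait 16 cycles, then start
              else                count <= count + 1;
        RUN:  led <= ~led;                           // toggle the output every cycle
      endcase
    end
  end
endmodule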
6. RTL CODING:
• RTL is an acronym for register transfer level; here the micro design is converted
into Verilog/VHDL code using the synthesizable constructs of the language.
• RTL coding is done in terms of the flow of digital signals (data)
between hardware registers, and the logical operations performed on those
signals. Behavioral modelling is widely used.
RTL code example for an AND gate:
module and1 (a, b, c);
  input  a;          // input declaration
  input  b;          // input declaration
  output c;          // output declaration
  assign c = a & b;  // dataflow assignment
endmodule
7. SIMULATION:
• Simulation is the process of verifying the functional characteristics of models at
any level of abstraction.
• To check the functional characteristics of the model, a testbench is written which
generates all the required test vectors.
• The RTL code and testbench are simulated using HDL simulators to check the
functionality of the design.
• Waveforms are generated by the simulator so that the functional characteristics can
be inspected.
• For complex designs a self-checking testbench is written, in which the expected
value is compared with the actual value, as in the sketch below.
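As a minimal sketch (assuming the and1 module from the RTL coding step above is compiled alongside), a self-checking testbench generates all the test vectors, applies them to the DUT, and compares the actual output with the expected value:

module tb_and1;
  logic a, b;          // stimulus driven onto the DUT inputs
  wire  c;             // response captured from the DUT output

  and1 dut (a, b, c);  // instantiate the design under test

  initial begin
    // generate and apply all four input combinations
    for (int i = 0; i < 4; i++) begin
      {a, b} = i[1:0];
      #10;             // wait for the output to settle
      // self-check: compare the actual value with the expected value
      if (c !== (a & b))
        $display("FAIL: a=%b b=%b c=%b expected=%b", a, b, c, a & b);
      else
        $display("PASS: a=%b b=%b c=%b", a, b, c);
    end
    $finish;
  end
endmodule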
8. SYNTHESIS:
• This process is conducted on the RTL code, whereby the RTL code is converted into logic
gates.
• The gate-level netlist produced is the functional equivalent of the RTL code as intended in the
design. The synthesis process requires two input files: first, the “standard
cell technology files” and second, the “constraints file”.
• A synthesized database of the design is created in the system; a sketch of a
synthesized netlist follows below.
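As a hedged illustration of what synthesis produces (AND2_X1 is a hypothetical standard cell name; real cell names come from the technology files), the gate-level netlist for the and1 module above might look like:

module and1 (a, b, c);
  input  a, b;
  output c;
  // the RTL assign statement has been mapped to a library cell instance
  AND2_X1 U1 (.A(a), .B(b), .ZN(c));
endmodule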
9. PLACE & ROUTE:
• Place and route is the process whereby the layout is produced. In this process, the
synthesized database, together with timing information from synthesis, is used to place
the logic gates. Most designs have critical paths whose timing requires them to be
routed first. The process of placement and routing normally has some degree of
flexibility.
10. POST SILICON VALIDATION:
• It is a process in which the manufactured design (chip) is tested for
functional correctness in a lab setup. This is done using the real chip
assembled on a test board or a reference board along with all the other
components of the system for which the chip was designed.
• The goal is to validate all use cases of the chip that a customer might
eventually have in a true deployment and to qualify the design for all these
usage models.
• Since the simulation speed (number of clock cycles per second) with RTL is very
low, not every scenario can be exercised before tape-out, so there is always the
possibility of finding a bug in post-silicon validation.
11. INTRODUCTION TO VERIFICATION
• Verification is the process of checking the functionality of the design against the given
specification before tape-out.
• With increasing design complexity, the scope of verification is also evolving to include
much more than functionality. This includes verification of performance and power
targets, security and safety aspects of the design, and the complexities of multiple
asynchronous clock domains.
• The verification process is considered a critical part of the design life cycle, as any
serious bug in the design not discovered before tape-out can lead to the need for a new
stepping, increasing the overall cost of the design process.
12. WHY VERIFICATION ?
• Verification is the process of ensuring that the given hardware works as expected.
• Around 70% of the overall design time and cost is spent on verification and validation.
• It becomes very important to verify the correctness of circuits consisting of millions of
transistors.
• Verification of such a complex system in a short span of time becomes a dominating
factor before the design goes to silicon.
• Finding functional defects at an early stage of the design process helps save cost.
13. Some Features of Verification are:
• Constrained-random stimulus generation.
• Functional coverage.
• Higher-level structures, especially Object Oriented Programming.
• Multi-threading and interprocess communication.
• Support for HDL types such as Verilog’s 4-state values.
• Tight integration with event-simulator for control of the design.
• Protocol checking with assertions, as in the sketch below.
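As a minimal sketch of protocol checking with assertions (the req/ack handshake and its 1-to-3-cycle rule are hypothetical), a SystemVerilog assertion can watch an interface continuously during simulation:

module handshake_checker (input logic clk, req, ack);
  // protocol rule: every req must be followed by ack within 1 to 3 clock cycles
  property p_req_ack;
    @(posedge clk) req |-> ##[1:3] ack;
  endproperty

  assert_req_ack: assert property (p_req_ack)
    else $error("protocol violation: ack did not follow req within 3 cycles");
endmodule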
14. THE VERIFICATION PROCESS
• The process of verification parallels the design creation process.
• The verification engineer reads the hardware specification, creates the verification
plan, and then follows it to build tests showing that the RTL code correctly implements
the features.
• The tests then exercise the RTL to show that it matches the desired interpretation.
• Once the DUT performs its designated functions correctly, how the DUT
operates under error conditions should also be checked.
15. BASIC TESTBENCH FUNCTIONALITY
• The purpose of a testbench is to determine the correctness of the design under test
(DUT). This is accomplished by the following steps:
Generate stimulus.
Apply stimulus to the DUT.
Capture the response.
Check for correctness.
Measure progress against the overall verification goals.
16. DIRECTED TESTING
• In directed verification, the verification environment has a mechanism to send stimulus
to the DUT, collect the responses and check them. The stimulus is generated in
the test case.
• Each test case verifies a specific feature of the design. This becomes tedious as the design
complexity increases.
17. TESTBENCH COMPONENTS
• In simulation, the testbench wraps around the DUT.
• The testbench needs to work over a wide range of levels of abstraction, creating
transactions and sequences which are eventually transformed into bit vectors.
18. MAXIMUM CODE REUSE
• To verify a complex device with hundreds of features, we would have to write hundreds of
directed tests.
• If we use constrained-random stimulus, we write far fewer tests.
• Instead, the real work is put into constructing the testbench, which contains all the lower
testbench layers: scenario, functional and command.
• This testbench code is used by all the tests, so it should remain generic.
21. Simple Class with Random variables:
class Packet;
  // The random variables (OOP concept)
  rand  bit [31:0] src, dst, data[8];
  randc bit [7:0]  kind;

  // Constraint: limit the values for src
  constraint c { src > 10;
                 src < 15; }
endclass

Packet p;
initial begin
  p = new();               // create a packet
  assert (p.randomize());  // randomization, checked with an assertion
  transmit(p);             // send the packet to the DUT
end
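As a usage note on the example above: a rand variable may repeat values on successive randomize() calls, whereas a randc variable cycles through all values in its range in random order before repeating any value, which is why it suits the kind field here.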
22. RANDOMIZATION
Why should we randomize?
• As designs grow larger, it becomes more difficult to create a complete set of
stimuli needed to check their functionality.
• Directed test cases become ever more complex as the number of features doubles.
• The solution is to create test cases automatically using constrained-random
testing (CRT).
• The advantage of CRT is that a directed test finds the bugs you think are there, while
CRT finds bugs you never thought about, by using random stimulus, as in the
sketch below.
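As a minimal sketch (reusing the hypothetical Packet class and transmit() task from the earlier example), a constrained-random test can still be steered toward a scenario of interest with an inline constraint:

Packet p;
initial begin
  p = new();
  repeat (50) begin
    // randomize all fields, but restrict dst to a low address range
    // for this particular test (the range is purely illustrative)
    assert (p.randomize() with { dst < 32'h100; });
    transmit(p);  // drive the randomized packet into the DUT
  end
end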
23. FUNCTIONAL COVERAGE
• Functional coverage is a measure of which design features have been exercised by
the tests (see the covergroup sketch below).
• Start with the design specification and create a verification plan with a detailed list
of what to test and how.
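As a minimal sketch (assuming the Packet class from the earlier example), a covergroup can record which values of kind and src the tests have actually produced:

module cov_example;
  // sample the interesting fields of each randomized packet
  covergroup cg_packet with function sample(bit [7:0] kind, bit [31:0] src);
    coverpoint kind;                               // has every packet kind been seen?
    coverpoint src { bins legal[] = {[11:14]}; }   // values permitted by constraint c
  endgroup

  cg_packet cg = new();

  initial begin
    Packet p = new();
    repeat (100) begin
      assert (p.randomize());
      cg.sample(p.kind, p.src);  // record functional coverage for this packet
    end
    $display("functional coverage = %0.2f%%", cg.get_coverage());
  end
endmodule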
24. Why Functional Coverage?
• With CRT, we are freed from hand-crafting every line of input stimulus, but
we now need to write code that tracks the effectiveness of the tests with respect
to the verification plan.
• Reaching for 100% functional coverage forces us to think more about what
we want to observe and how we can direct the design into those states.
25. Gathering coverage data:
• We can run the same random testbench over and over, simply by changing the
random seed, to generate new stimulus.
• Each individual simulation generates a database of functional coverage
information, the trail of footprints from the random walk.
• We can then merge all this information together to measure our overall progress
using functional coverage.
• By analyzing the coverage data we can decide how to modify our tests.
26. • When your functional coverage values near 100%, check the bug rate. If bugs
are still being found, you may not be measuring true coverage for some areas of
your design.
27. FPGA VERIFICATION
• The early FPGA design flow consisted of entering a gate-level schematic design,
downloading it onto a device on a test board, and then validating the overall
system with real test data.
• Even with just a few thousand gates, it became clear that some form of simulation
of the design prior to download provided an easier and faster method to resolve
issues through early detection.
28. FPGA ARCHITECTURE
• An FPGA contains a two-dimensional array of logic blocks and
interconnections between the logic blocks.
• Both the logic blocks and the interconnects are programmable.
30. Design Entry
There are different techniques for design entry: schematic-based entry,
hardware description languages (HDLs), or a combination of both.
HDLs represent a level of abstraction that can isolate designers from the
details of the hardware implementation, while schematic-based entry gives designers
much more visibility into the hardware.
31. Synthesis
• The process which translates VHDL or Verilog code into a device netlist format.
• The synthesis process checks the code syntax and analyzes the hierarchy of the design,
ensuring that the design is optimized for the device architecture the
designer has selected.
33. Design Programming
• The design must be loaded onto the FPGA.
• To do so, the design must be converted to a format (a configuration bitstream) that
the FPGA can accept.
34. Design Verification
Verification can be done at different stages of the process:
• Behavioral Simulation
Behavioral simulation can be performed on either VHDL or Verilog
designs. In this process, signals and variables are observed, procedures and
functions are traced, and breakpoints are set.
• Functional Simulation
Gives information about the logic operation of the circuit.
• Static Timing Analysis
Verifies that the design meets its timing requirements without the need for
input vectors.
35. SOC VERIFICATION
• A verification plan must cover the verification of the individual cores as well as
that of the overall SOC.
• A good understanding of the overall application of the SOC is essential.
• The more extensive and detailed the knowledge of the external interfaces and their
interactions with the SOC, the more complete the SOC verification will be.
• SOC verification becomes more complex because of the many different kinds of
IPs on the chip.
37. SOC Architecture
• An SOC architecture consists of one or more embedded processors, some on-chip
memory, additional functional units, and interfaces to standard buses and perhaps
off-chip memory as well.
• Some sort of on-chip bus or network-on-chip connects all the units together.
38. SOC functionality
• Data-flow models
A data-flow model determines the bandwidth capacity of an SOC interconnect and the
requirements of its various components by considering the amount of data that
must be processed under real-time conditions (see the worked example below).
• Control-flow models
Control-flow analysis for an SOC takes into account the nature and rate of
external interface processing. The control of data and events from outside may
be in various time domains, or it may be totally asynchronous in nature.
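As a hypothetical worked example of data-flow analysis: an SOC that must handle a 1080p video stream at 30 frames per second with 24 bits per pixel needs roughly 1920 x 1080 x 30 x 3 bytes, about 187 MB/s, of sustained interconnect bandwidth for that one stream; summing such figures across all real-time streams gives the required interconnect capacity.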
40. SOC level/Top level view (Feature Extractions)
• At this stage, a thorough understanding of SoC functionality and its
architecture is required because misunderstanding of the specification can
become the leading cause of bugs.
42. EXTERNAL INTERFACE EMULATION
• When verifying complex SOCs, full-chip emulation should be considered in
addition to logic simulation techniques.
• The primary external interfaces of each IP, as well as the SOC data
interfaces, should be examined to evaluate the need for any SOC simulation.
43. HARDWARE/SOFTWARE INTEGRATION PLANNING
• Hardware/software integration must be planned for SOCs with processor-type
cores.
• Developing bus-functional models should be part of the plan.
VERIFICATION RESOURCE PLANNING
• The size of a verification task can predict the simulation hardware resources
and the personnel needed.
• The number and complexity of IPs in an SOC will determine the amount of
estimated regression time, hardware computing resources, and simulation
license requirements.
44. REGRESSION PLANNING
• Regression testing is the process of re-verifying designs to guarantee that earlier
debugging has not affected the overall functionality.
• Regression testing is a reassurance that the design remains backward compatible
with the original design that was tested.
• Regression testing can be automated by using batch files and scripts, providing
more reliability for complex SOCs.
45. TIMING SIMULATION
• Verifies the functionality as well as the timing requirements of a design.
• Used for asynchronous designs as well as synchronous designs.
• Static timing analysis should be used to verify the delays within the design.
47. • The verification team should pay special attention to the power-up and power-
down sequencing of the different cores in the chip, both during simulation and
during device bring-up.
• The registers inside each core should be carefully verified.
• Individual cores should be tested. Regression, debugging, and test coverage
should be performed on all individual cores.
• 100 percent code coverage is desirable. Low code-coverage numbers should
alert the verification team that additional testing is required.
• Software reuse can speed up the verification process during device bring-up.