System-on-Chip and ASIC Design Conference
A Unified Functional Verification
Approach for Mixed Analog-Digital
Functional verification of a mixed-signal ASIC is often a challenging task because different design
methodologies are required for its digital and analog elements. Traditional strategies involve partitioning
the ASIC by the Analog-Digital boundary and creating separate simulation environments. However,
these strategies typically leave coverage holes around the signals that cross the analog-digital boundary.
This paper describes a chip-level, mixed-signal verification environment that combines the analog and
digital elements. It uses discrete-time models in VHDL to represent the analog transistor-level blocks
and Verilog Register-Transfer-Level (RTL) for the digital parts. In addition, this environment includes a
Specman random-generation engine to control the digital and analog parameters, thereby achieving
fairly comprehensive functional test coverage. The environment also includes a cycle-accurate
MATLAB Simulink model to verify the embedded DSP core. These four languages (i.e., Verilog,
VHDL, Specman and MATLAB) are unified seamlessly in a single platform to obtain the required
capacity, simulation performance and flexibility.
Bill Luo is a verification lead engineer at Legerity, Inc., based in Austin, Texas. Previously, he served as
a senior verification and design engineer for Motorola’s Image & Entertainment Solutions group. Prior
to Motorola, he worked two years at AMD in the late 1990s as a senior verification engineer in AMD’s
Embedded Processor Division. He began his engineering career at Texas Instruments where he worked
as an ASIC design engineer for TI’s Notebook Architecture Lab. Bill holds an MSEE from California
State University, Northridge.
Jim Lear is a member of technical staff (MTS) and a design and verification engineer at Legerity, Inc.,
in Austin, Texas. He is responsible for developing mixed-signal verification and modeling methodology,
as well as memory design. Previously, he served as a senior design engineer with Digital Equipment
Corporation's StrongARM Group. He also spent five years as a design engineer with Motorola's
PowerPC microprocessor group in Austin. Jim Lear holds a BSEE and an MSEE from the University of
Texas at Austin.
Technical and business reasons often require different CAD tools and languages for different portions of
the chip. One common example is to insert a VHDL Intellectual Property (IP) block into an existing
Verilog database, or vice versa. This dilemma is compounded further in mixed-signal ASIC
functional verification by the addition of analog circuitry.
One solution for slow analog transistor-level simulation is to replace analog transistors with discrete-
time models. For this purpose, VHDL offers several advantages over Verilog or C for analog modeling:
• VHDL has a rich set of data types, such as arrays, records, pointers and resolved types
• VHDL offers user-defined type resolution
• VHDL netlisters are available for automatic module interconnection.
These models are orders of magnitude faster than transistor-level models and can be run within
Hardware Description Language (HDL) simulators simultaneously with the digital Verilog RTL. Before the
VHDL models are imported, they are verified separately against the analog transistor blocks to ensure
their accuracy.
Digital-RTL functional verification efforts rely increasingly on randomized stimulus-generation to
increase the quality of test coverage and reduce time-to-market. Randomized stimulus-generation offers
two major advantages over traditional, manually created schemes. First, users can produce many
meaningful tests significantly faster than with traditional schemes, and second, it can generate corner
cases and multiple simultaneous events that most designers would not think of explicitly. Numerous
semiconductor companies, as well as several universities, have used these randomized techniques to
verify complicated ASIC designs. In addition to randomized stimulus-generation, a superior functional-
coverage analysis that estimates how much of the design has been verified is also an important part of
the verification flow. It is desirable to use the functional-coverage results to intelligently guide the
random tests in new directions and avoid coverage overlap. Consequently, this intelligent random
suite can greatly improve testing efficiency and measure the verification completeness more accurately.
In our verification environment, we chose Verisity’s Specman Elite to implement both the
randomization engine and the functional coverage modules. Specman is designed to provide the
necessary abstraction level to develop reliable test environments for all aspects of verification: automatic
generation of functional tests, data and temporal checking, functional coverage analysis, and HDL simulation control.
Historically, we used to have two separate simulation environments to verify a mixed-signal ASIC. One
environment is composed of Specman and Verilog modules that only verify digital blocks. The other is a
VHDL-oriented environment that mainly concentrates on the analog characteristics. Over time, we
realized that the unification of these two testbenches could bring us several benefits:
• Reduce the element overlap between the two environments and increase the simulation accuracy
• Improve the test coverage on Analog-Digital interaction signals
• Combine the two separate verification plans and issue-tracking databases
• Allocate the digital and analog verification resources more flexibly.
Legerity’s QLSLAC™ voice codec IC is used to demonstrate this unified simulation environment. The
QLSLAC device integrates the key functions of analog linecards into a high-performance, highly
programmable, four-channel codec/filter device. Its advanced DSP-based architecture implements four
independent voice channels and provides a Plain Old Telephone Service (POTS) interface for
programmable linecards. This analog-digital mixed-signal IC combines the digital filter blocks, as well
as the sigma-delta A/D converters.
This paper presents the unified simulation environment in a step-by-step fashion. Section 3 illustrates
our verification strategies on the digital portion, which includes EDA tool selections and testbench
generation. Section 4 outlines VHDL’s advantages over other languages on analog discrete-time
modeling, along with our strategies and precautions in the analog circuitry verification. Finally, section
5 describes the unification of the Specman/Verilog and the VHDL testbenches, how Specman randomly
controls the digital and analog parameters, as well as its simulation performance benchmark. Note that
this paper primarily focuses on the RTL functional verification issues. Therefore, formal verification and
static timing analysis are not discussed here.
Digital Functional Verification Strategy
DSP Reference Model in MATLAB’s Simulink
The QLSLAC codec has an embedded DSP core to accomplish the telephony-voice signal processing.
Therefore, one of the verification challenges is to create a reference model to verify its functionality.
During the simulation, the same set of stimulus patterns is applied to both the DSP-under-test and the
DSP reference model. The results are then compared to ensure their functional equivalence. Ideally, a
reference model needs to be fast in simulation, functionally correct, and representative of the detailed
implementation of the Design-Under-Test (DUT). We selected MATLAB’s Simulink to implement this
DSP model because it can fulfill all these three requirements more reasonably than other languages, such
as HDL, Specman or C.
Simulink is MATLAB’s module-based software package for modeling, simulating and analyzing
dynamical systems. It is based on an interpretive programming language and therefore, runs slower than
C, which is a compiled programming language. However, it still runs faster than the HDLs such as
Verilog or VHDL. Our benchmark shows that the Simulink simulation is about 10 to 100 times faster
than the detailed RTL model.
Validating a complex reference model also poses a major challenge in functional verification. In most
DSP-based communication-system design projects, the system application group typically maintains a
high-abstraction-level system model in MATLAB/Simulink to validate the overall product concept. This
system MATLAB environment is normally unrelated to our design verification environment, which is
more design-implementation oriented. However, if our verification’s DSP reference model is
implemented in Simulink, it can function as a bridge over these two seemingly unrelated environments.
We will be able to plug the Simulink model into the system MATLAB-based platform to validate its
correctness and then use it as a golden model to verify our RTL design.
The reference model must represent the detailed implementation of the design-under-test to a certain
extent, especially for complex design blocks such as a DSP. Thus, it can provide good visibility and
make the debug process easier. Simulink models are hierarchical, so we can build models by using the
same top-down approaches used by design. As a matter of fact, there can be a one-to-one
correspondence between each major design block in RTL and each major module in the Simulink
model. By using this approach, we can pinpoint the source of error much faster once a mismatch is
found in the simulation.
In addition to these three advantages mentioned above, Simulink also provides a Graphical User
Interface (GUI) for building models as block diagrams, using click-and-drag mouse operations. This is a
significant improvement over MATLAB’s other simulation packages, which require us to formulate
differential equations in a language or program.
Automated MATLAB Comparison
Once this Simulink model is created and fully validated separately, integrating it into the HDL
simulation environment poses another challenge. We intended the model to be bit-true and
cycle-accurate; this means that each bit of the output has to be accurate at any data-strobe. One ideal
solution is to establish some Programming-Language-Interface (PLI) calls between Simulink and the
HDL simulator, as commonly done with C programming. The advantage of PLI calls is to enable the
comparison between the model and the DUT in real time. Once there is a mismatch, an error message is
generated, and the simulation stops instantly. In addition, this type of dynamic checking allows us to do
recursive testing, where the following test result overwrites the previous one. A third-party tool called
“VMlink” from the MathWorks web site was intended to bridge the gap between Simulink and Verilog.
According to Mathworks’ description, VMlink enables the user to connect any model developed with
Simulink to the DUT written in Verilog-HDL and co-simulate them.
However, when the project started, we did not know how to create a PLI link between
Simulink and Verilog. Therefore, we went for an alternative solution that avoids any Simulink-to-HDL
PLI invocation. While the HDL simulation is running, it takes snapshots of the DSP-Under-Test
input/output data and puts them into files. After the HDL simulation is completed, a Perl script is
automatically triggered to feed those data files to the DSP model, run the Simulink simulation, and make
the comparison in a batch mode. The detail of the flow can be briefly described as:
1. A Verilog module latches the input and output data of the DSP-Under-Test at each data strobe.
At the end of the RTL simulation, this Verilog module creates files called dsp_rtl_in.dat and
dsp_rtl_out.dat respectively for the data, and dsp_setup.cfg for the DSP configuration used in the simulation.
2. A Perl script reads the file dsp_setup.cfg and generates a MATLAB executable script called DSP.m.
3. DSP.m reads the input data file dsp_rtl_in.dat and triggers the Simulink DSP reference model. In
return, the model creates the golden data output file dsp_mdl_out.dat
4. The Perl script then compares the actual dsp_rtl_out.dat with the expected dsp_mdl_out.dat. It
flags an error for a mismatch of any bit, to reinforce the bit-true nature of the verification
5. The same process is repeated for all four channels of the voice codec.
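The comparison in step 4 can be sketched as follows, written in Python for illustration (the actual flow uses a Perl script; the function name and the one-sample-per-line file format are assumptions):

```python
# Hypothetical sketch of the batch bit-true comparison (step 4 above).
# The real flow is a Perl script; compare_dsp_outputs() and the file
# format (one sample per line) are illustrative assumptions.

def compare_dsp_outputs(rtl_path, model_path):
    """Compare actual RTL samples against golden model samples bit-for-bit.

    Any single-bit difference is an error, reinforcing the bit-true
    nature of the verification.
    """
    with open(rtl_path) as rtl, open(model_path) as mdl:
        rtl_samples = [line.strip() for line in rtl if line.strip()]
        mdl_samples = [line.strip() for line in mdl if line.strip()]

    if len(rtl_samples) != len(mdl_samples):
        return (False, "sample count mismatch: %d vs %d"
                % (len(rtl_samples), len(mdl_samples)))

    for strobe, (actual, expected) in enumerate(zip(rtl_samples, mdl_samples)):
        if actual != expected:  # exact string match == bit-true match
            return (False, "mismatch at data strobe %d: %s != %s"
                    % (strobe, actual, expected))
    return (True, "all samples match")
```

Because the comparison runs in batch mode after simulation, a mismatch is reported per data strobe rather than halting the simulator in real time.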
In this project, one system engineer spent approximately five months developing and validating this
Simulink model. Another verification engineer managed the automatic comparison flow and debugged
the mismatches for the HDL simulations. There was a fairly explicit boundary between these two
engineers throughout the project. The effort finally paid off, as the DSP design block was fully
functional in the first revision of the silicon.
Bus Functional Model Implementation
Bus Functional Models (BFMs) are used extensively in most functional verification environments.
These models allow designers to simulate the interaction between DUTs and external devices without
having real external devices in place. A majority of the BFMs have the following characteristics: (1)
describe the functionality of the external devices and provide cycle-accurate interface to the DUTs, (2)
do not have to be synthesizable and only contain enough detail to drive and observe the bus
appropriately, and (3) provide the interface for users to program various bus cycles in order to test the DUT.
In the codec project, we originally decided to use only Specman to implement those BFMs, instead of
pure Verilog. Since the majority of our testbench elements are implemented in Specman, having the
BFMs also in Specman simply provides natural interaction between the BFMs and their higher-level (i.e.
application level) controlling units. Another reason is that Specman provides a rich set of temporal
expressions to define the occurrence of a series of sequential events. In Specman, temporal expressions
are a combination of events and a specialized set of operators that describe behavior in time. This will
make our BFM’s code much more concise than the traditional Verilog version.
However, we then realized some disadvantages of implementing the BFMs in the Specman-only
fashion. First, the waveform support for Specman is considerably less friendly than Verilog’s. For
example, the variables within a Specman method cannot be shown in the waveform viewer directly.
This is quite a severe problem in our experience because we rely heavily on waveforms to identify how a
BFM manipulates its internal variables. And it is common that those variables are located within the
BFM’s Specman read/write transaction methods. Therefore, this waveform shortcoming of Specman
slows down our debug process. Second, this Specman-only BFM runs noticeably slower than its Verilog
counterpart. Specman interacts with HDL simulator through some proprietary PLI calls, which is more
time-consuming than Verilog task calls. If the interaction between Specman and Verilog happens as
frequently as the physical bus clock toggles, then there might be too many PLI calls in the simulation,
which deteriorates the performance.
Figure 1: BFM Structure (Verilog portion: hardware level; Specman portion: transaction and application levels)
Eventually, we went for the third solution, which is to partition the BFM into two parts, Verilog and
Specman according to the structure diagram depicted in Figure 1. The Verilog portion is a Verilog-task
driven module that resides in what we called hardware level. It contains a set of basic transaction tasks,
such as READ, WRITE, Direct Memory Access (DMA), RESET, etc. In the definition of those tasks, it
synchronizes with the clock and drives/observes the physical bus based on the predefined bus protocol. In
addition to the transaction tasks, the Verilog module also includes a CONFIG task that can set this BFM
into certain modes. Because they are in Verilog, all of the signals and task variables can be dragged and
dropped into our waveform viewer.
On the other hand, the Specman portion of the BFM is raised to a higher abstraction level called
“transaction level.” It has no accessibility to the physical bus signals, but directly invokes those Verilog
transaction and configuration tasks through PLI calls. Because the PLI is invoked once per
transaction, which occurs far more sparsely than the clock toggles, simulation can run much faster. Since the
transaction level is in Specman, we can add certain pseudo-random capabilities into the BFM, such as
the transaction order and the data for the WRITE task.
As indicated above, this Verilog/Specman partition of the BFM can help us avoid the Specman-only
shortcomings and also take advantage of Specman’s superior random capabilities. Moreover, this
partition can improve the code portability. Since the transaction level is not attached to any specific bus
protocol, the majority of its Specman code can be shared by different BFMs.
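The partition can be sketched as follows, using Python classes to stand in for the Verilog hardware level and the Specman transaction level (the class names and the call-counting are illustrative; in the real environment each hardware-level call is a Verilog task reached through one PLI invocation per transaction):

```python
# Illustrative sketch of the two-level BFM partition, in Python rather
# than Specman/Verilog. HardwareLevelBFM stands in for the Verilog task
# module; TransactionLevelBFM for the Specman layer, which crosses the
# language boundary once per transaction instead of once per clock edge.

import random

class HardwareLevelBFM:
    """Stand-in for the Verilog-task module: knows the bus protocol."""
    def __init__(self):
        self.memory = {}
        self.cross_language_calls = 0  # models the PLI-call overhead

    def write(self, addr, data):  # one boundary crossing per transaction
        self.cross_language_calls += 1
        self.memory[addr] = data

    def read(self, addr):
        self.cross_language_calls += 1
        return self.memory.get(addr, 0)

class TransactionLevelBFM:
    """Stand-in for the Specman layer: randomizes order and data."""
    def __init__(self, hw, seed=0):
        self.hw = hw
        self.rng = random.Random(seed)

    def random_write_burst(self, addrs):
        self.rng.shuffle(addrs)  # pseudo-random transaction order
        for addr in addrs:
            self.hw.write(addr, self.rng.randrange(1 << 16))  # random data
        return addrs
```

The point of the partition is visible in the counter: a burst of N transactions costs N boundary crossings, not one per clock cycle of the bus protocol.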
Protocol/Data Checker Schemes
The pass or failure of traditional, manually created direct testcases is typically determined by the
testcases themselves. However, with our pseudo-random generation scheme, such a self-checking
mechanism cannot work efficiently due to the unpredictable variations. We need a reference model,
normally called checker, to ensure the functional correctness of the DUT. As shown in Figure 2, when
the random generation engine creates a legitimate set of configurations, it passes the same information to
both the checker and the DUT through the control BFM. Thus the checker and DUT should behave
exactly the same. Most of the checkers check their design counterparts dynamically during the simulation,
and halt the simulation immediately after they find a mismatch. The only exception is the DSP reference
model, which is implemented in Simulink, for the reason mentioned above.
Figure 2: Checker Environment (the random generation engine and control unit drives checkers 1-3 and design blocks BLK 1-3, between Data BFM (IN) and Data BFM (OUT))
Figure 2 also illustrates the fact that we have one checker on each major design block in the DUT. This
is what we call distributed checking, versus the black-box checking, which only looks at the inputs and
outputs of the full chip. The advantage of distributed checking is its visibility to the DUT’s internal
signals. Once a mismatch is found, it is much easier to pinpoint the source of error inside the often-
complicated system. On the other hand, black-box checking normally runs faster and needs less
maintenance because it is transparent to the design implementation. In the voice codec project, we chose
the distributed checking scheme because we weighed ease of debugging over simulation time and
maintenance effort.
We decided to implement most of the checkers in Specman because: (1) the checkers can then be easily
integrated with the rest of the Specman elements including random generation engine and BFMs, and (2)
we can take advantage of Specman’s powerful temporal expression set to make the code more concise
and readable. However, temporal expressions can demand a large amount of memory and consume a lot
of simulation time. With the help from the Specman elite’s performance profiler, we were able to
identify which expressions have big impact on the simulation performance. Then we removed the
unnecessary events and did optimization particularly on those expressions. Our benchmark showed that
this effort alone improved the simulation speed by 20 to 30 percent.
Checkers should be used in both of the pre-synthesis-RTL and post-synthesis-netlist stages to check the
functionality of the DUT, as well as its gate-level SDF timing. Because the Synopsys synthesis tool
occasionally changes the name of the internal signals during the conversion from RTL to gates, it
potentially causes the incompatibility issue for the checkers that probe the internal signals. The solution
is the combination of two actions. The first is to have the checker probe only the outputs of flip-flops rather
than arbitrary internal wires, whose names will likely be changed in every Synopsys run. The second is to create
conditional mnemonics such that the mnemonics would point to a different signal name depending on
whether it’s an RTL simulation or a gate-level simulation. Then the checker only refers to the
mnemonics in its code and allows the conditional pointer to make the real signal connection.
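The conditional-mnemonic idea can be sketched as a simple name map, shown here in Python (in the real environment the mapping lives in Verilog/Specman code, and the hierarchical signal paths below are hypothetical examples):

```python
# Map each checker mnemonic to its RTL and gate-level signal names.
# The hierarchical paths below are hypothetical illustrations; gate-level
# names change because synthesis renames internal signals.
SIGNAL_MAP = {
    # mnemonic    : (RTL hierarchical name,     gate-level name)
    "dsp_acc_q":   ("tb.dut.dsp.acc_reg.q",     "tb.dut.dsp.acc_reg_0.Q"),
    "pcm_sync":    ("tb.dut.pcm.sync_ff.q",     "tb.dut.pcm.sync_ff_0.Q"),
}

def probe_name(mnemonic, gate_level=False):
    """Return the signal a checker should probe in the current simulation."""
    rtl_name, gate_name = SIGNAL_MAP[mnemonic]
    return gate_name if gate_level else rtl_name
```

The checker code refers only to the mnemonic, so the same checker runs unchanged against both the RTL and the post-synthesis netlist.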
Analog Discrete Time Modeling
The analog section of the voice codec comprises analog multiplexers, A to D and D to A converters, and
gain stages. To achieve the mixed-signal chip-level verification in an efficient manner, discrete-time
models of the analog devices are created in VHDL. These models are verified against the transistor
designs using a mixed-signal simulator, such as Advance-MS, to demonstrate their accuracy. A top-level
VHDL netlist of the chip is then created, which stitches together the analog models, the analog
schematics, and the Verilog RTL. This netlist is then instantiated into the testbench.
The analog models are ideal functional models and are not intended to capture behavior, such as
temperature coefficients, bias current sensitivity, resistor noise, or other circuit characteristics. On the
other hand, the models are reasonably accurate at the architecture level so that, for example, a delta-
sigma converter model will share frequency characteristics similar to its transistor brethren.
VHDL was chosen as the modeling language because it offers many advantages over such languages as
Verilog or C for analog modeling. First, VHDL is a high-level language with rich data types, unlike
Verilog. In addition to real, integer and logic types, VHDL offers arrays, records, pointers, resolved
types and operator overloading. For example, included in the IEEE math_complex package is support
for the “complex” and “polar” data types with appropriate arithmetic operators. These types are well
suited for use in signal and frequency analysis. Resolved types allow multiple blocks to drive a single
signal simultaneously, with the resulting value of the signal resolved as defined by the user. This can be
used for the obvious logic tri-state signals, but can also be used to model analog tri-state devices, current
summing, or Norton/Thevenin drivers and loads. Records, arrays and pointers are handy in simplifying
the implementation of complex test benches.
VHDL also has the advantage over C in that the process and signal constructs built into VHDL greatly
simplify the control of the device models. For example, VHDL will “awaken” an instantiation of an
amplifier when the input to the amplifier changes. In C or C++, these constructs must be created to
manage the signal flow.
There are several techniques employed when writing the models. Analog signals are typically
represented with one of a few commonly used real types. However, some devices have
high-impedance capabilities, so a real_tristate type was created. This type is a resolved type in which
high-impedance drivers drive an arbitrary real value denoted by the constant real_z. The resolution function
will resolve a signal to another arbitrary real value, real_contention when more than one driver is
driving. In this manner, analog nets can be shared and contention detected. Similarly, the type
real_summation is a resolved type that sums all of the drivers together. This is useful for nets into which
several current sources are fed.
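The two resolved types can be sketched as resolution functions, shown here in Python for illustration (the sentinel constants are arbitrary placeholders, just as in the VHDL types):

```python
# Sketch of VHDL-style resolution functions in Python. REAL_Z and
# REAL_CONTENTION mirror the arbitrary real sentinel values described
# above; the exact constants are illustrative assumptions.
REAL_Z = -1.0e30          # value driven by a hi-impedance driver
REAL_CONTENTION = 1.0e30  # resolved value when multiple drivers collide

def resolve_tristate(drivers):
    """real_tristate: at most one non-Z driver may own the net."""
    active = [d for d in drivers if d != REAL_Z]
    if len(active) == 0:
        return REAL_Z
    if len(active) > 1:
        return REAL_CONTENTION  # contention detected on a shared net
    return active[0]

def resolve_summation(drivers):
    """real_summation: sum all drivers, e.g. current sources into a net."""
    return sum(drivers)
```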
Gain stages can be implemented quite simply with concurrent statements, such as “vout <= vin * gain;”.
Non-linearities and clipping can be easily added using when statements. Voltage and current dividers
can also be implemented with simple expressions. On the other hand, networks that include capacitors or
inductors require more complicated modeling. For these devices, the network is first converted into
polynomial transfer functions in the S-domain. The S-domain coefficients are then passed to an
analog_block routine that implements a second order bilinear transform to convert the S-domain transfer
function into the Z-domain. This Z-domain transfer function is then implemented using an Infinite
Impulse Response (IIR) filter. For example, Figure 3 shows a series resistor and capacitor. The second-
order S-domain transfer function is V(s)/I(s) = (0s^2 + RCs + 1) / (0s^2 + Cs + 0). This is implemented
in VHDL with an instantiation of our subroutine “analog_block(i, v, 0.0, R*C, 1.0, 0.0, C, 0.0);”, where
i and v are the input and output signals, and the remaining arguments are the coefficients of the
numerator and denominator of the S-domain polynomial ratio.
Figure 3: Example RC circuit
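The S-to-Z conversion performed inside analog_block can be sketched numerically, here in Python (the paper does not show the VHDL routine's internals, so the coefficient ordering and the direct-form-I update below are assumptions):

```python
# A minimal sketch of the second-order bilinear transform, standing in
# for the VHDL analog_block routine. Coefficient order follows the
# analog_block call: numerator (b2, b1, b0) and denominator (a2, a1, a0)
# of the S-domain ratio; fs is the sampling rate. The real routine's
# internals may differ.

def bilinear2(b2, b1, b0, a2, a1, a0, fs):
    """Map H(s) = (b2 s^2 + b1 s + b0)/(a2 s^2 + a1 s + a0) to Z-domain
    IIR coefficients via s = 2*fs*(1 - z^-1)/(1 + z^-1)."""
    k = 2.0 * fs
    B = [b2*k*k + b1*k + b0, 2*b0 - 2*b2*k*k, b2*k*k - b1*k + b0]
    A = [a2*k*k + a1*k + a0, 2*a0 - 2*a2*k*k, a2*k*k - a1*k + a0]
    return [c / A[0] for c in B], [c / A[0] for c in A]  # normalize by A0

def iir_step(B, A, x, state):
    """One direct-form-I IIR update; state holds (x1, x2, y1, y2).
    Assumes A is normalized so that A[0] == 1."""
    x1, x2, y1, y2 = state
    y = B[0]*x + B[1]*x1 + B[2]*x2 - A[1]*y1 - A[2]*y2
    return y, (x, x1, y, y1)
```

A useful sanity check of the mapping is that the bilinear transform preserves the DC gain b0/a0 exactly at z = 1.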
VHDL does have important limitations. Because VHDL does not implement simultaneous equation
solving, the input and output impedances cannot be easily modeled. For example, it is not easy to
implement an RC filter in VHDL in which the R is in the output of one model and the C is in the input
of another. Occasionally, this limitation requires partitioning the structure of the devices so that I/O
impedances are inconsequential. Alternatively, the driving or receiving models can be cautiously written
to assume knowledge of the impedance of the counterpart. This can create hazards for verifying the
design. After the models are written, they are verified by using a mixed-signal simulator such as Advance-MS.
Advance-MS marries a spice-like transistor level simulator, Eldo, with the Modelsim logic simulator so
that the model and transistor device can be simulated simultaneously. These simulations ensure that
gains, frequency cutoffs, and other gross characteristics are correct in nominal conditions. The
simulations for verifying the models have a side benefit of also verifying the transistor design, at least at
a gross level.
Once the analog models have been created, they must be stitched together to match the analog schematic
design. This is performed with a VHDL netlister. Thus the entire hierarchy of the analog side of the chip
is implemented in VHDL with a high degree of confidence in its accuracy.
A Unified Digital/Analog Approach
ModelSim SE is a simulator product from Model Technology that supports VHDL/Verilog mixed-
language simulation. We chose ModelSim for our unified design/verification environment due to its
excellent debugging and monitoring facilities. ModelSim SE is a single-kernel-simulator environment
that provides full-featured access to Verilog modules and VHDL entities, including source code
debugging, waveform viewing, and hierarchy navigation.
Figure 4: Unified Simulation Environment Block Diagram (the DUT, with digital RTL in Verilog and analog models in VHDL, alongside tbench_vhdl, tbench_verilog, and the Specman elements: random_gen, BFM, checker)
Unified Simulation Control/Data Flow
Figure 4 shows the block diagram of the unified simulation environment. Underneath the voice codec
DUT, there are two major groups of design blocks. One is for digital modules and represented in Verilog
RTL. The other is for analog entities and represented in VHDL discrete-time model. A VHDL netlister
is used to automatically connect the VHDL and Verilog design blocks and create the QLSLAC top-level
interconnect module. In the same level of the QLSLAC DUT, there are two groups of testbench
elements, called tbench_vhdl and tbench_Verilog. The tbench_vhdl is composed of all the VHDL
testbench elements that are mainly responsible for the FFT/IFFT calculations, along with other
mathematics functions. Similarly, the tbench_Verilog includes all the Verilog modules, such as the BFMs’
hardware-level components, the randomized clock generation unit, and a conditional-mnemonics Verilog
module.
The Specman Elite and ModelSim simulator can be integrated through the industry-standard PLI (for
Verilog) and FLI (for VHDL) interface. Consequently, Specman Elite is able to observe and control the
mixed-language HDL signals, variables, tasks and procedures. The Specman part is the top-level controller
of the overall environment. It randomly generates a full set of configurations before simulation starts.
When simulation begins, the Specman part probes the synchronization events and invokes its own
Specman Time Consuming Methods (TCM) accordingly. As indicated by Figure 4, the Specman TCMs
are translated into HDL signal toggles or HDL tasks/procedures through two stub files, Specman.v and
Specman.vhd. Even though Specman cannot handle the real data type directly, we can still use an integer to
represent a real value by applying a predefined scaling. For example, Specman’s “2358” can be
translated into VHDL’s “2.358” by shifting the decimal point three places, i.e., dividing by 1000.
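The scaling convention amounts to the following (sketched in Python; the factor of 1000 follows the 2358-to-2.358 example above and is an assumption about the project's chosen precision):

```python
# Fixed-scaling convention between Specman integers and VHDL reals.
# SCALE = 1000 matches the 2358 <-> 2.358 example; the actual project
# may use a different precision.
SCALE = 1000  # Specman integer units per 1.0 of VHDL real value

def specman_to_real(i):
    """Convert a Specman integer (e.g., 2358) to the real value (2.358)."""
    return i / SCALE

def real_to_specman(r):
    """Convert a real value back to the Specman integer representation."""
    return round(r * SCALE)
```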
The QLSLAC DUT, tbench_vhdl and tbench_Verilog are all instantiated under the top-level testbench
interconnect module called tbench_top. During the chip-level simulation, tbench_vhdl controls and
verifies the signals through the DUT’s data path. On the other hand, tbench_Verilog communicates with
the DUT’s digital input/output signals, controlling and verifying the device configuration,
synchronization with certain clocks, and the communication protocol. Tbench_top exchanges the
necessary data and handshaking between tbench_vhdl and tbench_Verilog.
The dominant purpose of tbench_vhdl is to ensure that a signal applied to input A reaches the desired
output B with the appropriate transfer function. The tbench_vhdl bench was designed with a crude text
command interpreter through which Specman controls the tbench_vhdl behavior. The most common
commands configure the test bench, generate signal stimulus, or analyze the signal outputs. For
example, to create a sine wave on the DUT analog input pin, VIN1, one might issue the command
“generate_analog_samples vin1 bin0_1.5_bin17_0.1”. The parameter “bin0_1.5_bin17_0.1” is the
name of a text file that contains a complex vector that describes the desired discrete frequency spectrum
of the input signal. In this case, the file contains a value of 1.5 for bin 0 and a value of 0.1+0j for bin 17,
hence the naming. This represents a DC voltage of 1.5V and a 0.1V sine wave at some frequency. Also,
the test bench automatically and continuously samples the output of the DUT into a circular buffer.
Specman can simply issue a pause command to wait for the buffer to fill and any transients to be cleared
out. At that point, Specman can issue the “fft_samples dx1a” command, for example, that would perform
an FFT on the contents of the circular buffer for output channel DX1A. Finally, a command such as
“check_bin_mag 17 2300 2375” could be used to ensure that the 0.1V input voltage has emerged
properly from the digital side of the chip at the appropriate frequency (no crossing of channels) and at
the appropriate digital magnitude (between 2300 and 2375).
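The generate, FFT, and check_bin_mag steps above can be sketched as follows, in Python standing in for the tbench_vhdl commands (the magnitude normalization here is an assumption; the real bench checks raw digital magnitudes such as the 2300 to 2375 range):

```python
# Illustrative sketch of the stimulus-generation and bin-checking flow.
# N, the normalization, and the sparse {bin: amplitude} format are
# assumptions standing in for the bench's stimulus files.

import cmath, math

N = 256  # samples in the circular buffer (illustrative size)

def generate_from_bins(bins, n=N):
    """Build a real time-domain signal from a sparse {bin: complex
    amplitude} spectrum, like the bin0_1.5_bin17_0.1 stimulus file."""
    out = []
    for t in range(n):
        v = 0.0
        for k, amp in bins.items():
            if k == 0:
                v += amp.real  # DC term
            else:
                v += (amp * cmath.exp(2j * math.pi * k * t / n)).real
        out.append(v)
    return out

def bin_magnitude(samples, k):
    """Single-bin DFT magnitude, normalized by the buffer length."""
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * k * t / n)
              for t, x in enumerate(samples))
    return abs(acc) / n

def check_bin_mag(samples, k, lo, hi):
    """Pass if the energy at bin k falls inside the expected window."""
    return lo <= bin_magnitude(samples, k) <= hi
```

Checking a neighboring bin against the same window also catches channel crossing: energy that leaks into the wrong output shows up as an unexpected bin magnitude.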
As shown previously, the Simulink DSP reference model is integrated into the environment through file
communication. The DSP’s data input and output files are generated during HDL simulation. After HDL
simulation is finished, the Simulink model reads those files and does the comparison. Since file
communication is language independent, our DSP reference model is identical before and after the
unification of the two testbenches.
Simulation Performance Benchmark
Verification-related work occupies an ever-larger share of the overall project time and resources as the
complexity of ICs increases. Improving simulation performance can directly enhance the efficiency of
verification work and therefore reduce products’ time-to-market. ModelSim’s Performance Analyzer
feature quickly identifies simulation bottlenecks. It uncovers inefficient code as well as more elusive
performance sinks such as non-accelerated library cells, unnecessary testbench code, architectural
bottlenecks, or poor integration of a third-party tool. The Performance Analyzer works with VHDL,
Verilog and Specman interfaced through the PLI or FLI. We can easily examine Performance Analyzer
results and then make changes to dramatically decrease simulation run time.
Figure 5: ModelSim Performance Analyzer Result (simulator kernel: 15 percent)
Figure 5 shows the result of the ModelSim Performance Analyzer. It indicates the percentage of the
overall simulation time that is occupied by each language module.
According to the result, our Specman code consumes the majority of the simulation time; hence, it was the
main area of concentration for improving simulation performance. We then brought in Specman Elite’s
Performance Profiler to do further analysis on our Specman portion. The consequent action has been
explained in the Specman BFM and checker section. We partitioned the Specman-only BFMs into
Specman and Verilog pieces. Furthermore, we received a new version of Specman and optimized the
temporal expressions based on Verisity’s coding guideline. These actions alone reduced chip-level
simulation time by almost 50 percent.
We also performed optimizations on the ModelSim side to further increase simulation speed.
ModelSim provides a global optimization argument, "+opt", that can increase simulation speed
significantly. This option merges always blocks, in-lines instantiated modules, and performs cell-level
optimizations. It also reduces or eliminates events and improves memory management. In addition, we
were surprised by our performance comparison between ModelSim's Linux and Unix versions: despite a
much lower price tag, the Linux system generally ran the same test about 25 percent faster than the
Unix one. These benchmark results prompted us to investigate adding more Linux machines to our
simulation LSF farm.
The latest release from Model Technology, ModelSim 5.6, also includes a new option that allows us to
perform the elaboration step (design loading) once and run simulations multiple times. This eliminates
the need to reload the design and SDF file for each simulation and is especially useful for regression
testing and large gate-level simulations with timing files.
Integrated Functional Coverage Module
As is generally acknowledged, pseudo-random generation of both analog and digital parameters is more
effective and productive than traditional, manually created test schemes. However, it also creates the
need for a strong functional coverage module that indicates how much of the design the existing
regression suite has verified and that defines when the verification work is complete. The functional
coverage modules should be written from the verification test plan, which defines the goals that the
verification tests need to achieve from both analog and digital perspectives. In our experience, it works
best to maintain a web-based, itemized verification test plan that is created and closely monitored by the
system-application, design and verification groups. The coverage module can then be written so that
each of its items has a one-to-one correspondence with an item in the verification test plan. This scheme
improves the readability and completeness of the coverage module and also helps us gauge the pace of
our verification work.
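The one-to-one correspondence itself is easy to check mechanically. A hypothetical sketch, assuming each plan item and coverage item carries a shared ID string:

```python
def plan_coverage_diff(plan_ids, coverage_ids):
    """Compare test-plan item IDs against coverage-module item IDs.
    Returns (plan items with no coverage item, coverage items with no
    plan item); both lists empty means the mapping is one-to-one."""
    plan, cov = set(plan_ids), set(coverage_ids)
    return sorted(plan - cov), sorted(cov - plan)
```

Running such a check as part of the regression flow keeps the coverage module from silently drifting away from the test plan as both evolve.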
One of the most important highlights of Specman is its functional coverage capability. There are three
features of Specman to serve this purpose:
• Coverage grading provides an indication of how close the current test suite is to achieving the
functional coverage goals we have set. For example, in our voice codec project, we set
minimum goals for the number of data bytes sent in each timeslot or timeslot range. After
running multiple tests, we used coverage grading to determine whether these goals had been met.
Note that our Specman module can control both the Verilog and VHDL portion of the testbench
via the two stub files. Therefore, it is feasible to have Specman coverage grading on both digital
and analog parameters.
• Test ranking helps us identify a subset of tests that has the lowest cost yet provides nearly the
same coverage grade as the full test suite. Lowest cost can be measured as the lowest CPU time
or the fewest simulation cycles. We used test ranking to create mini-regression test suites and to
determine which high-priority tests to run when CPU time is limited. These high-coverage
testcases are also good candidates for the gate-level simulation suite, which contains fewer tests
because gate-level runs are so time-consuming.
• The coverage Graphical User Interface (GUI) provides a view of all coverage data currently
loaded into the system. It allows browsing over the data, as well as drilling down to focus on a
single element. Its histogram format shows coverage holes and distribution more intuitively.
With this GUI, we can even create new cross-coverage items interactively after the tests have
completed.
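Under the hood, coverage grading and test ranking amount to a per-bucket goal calculation and a greedy cost/benefit selection. The sketch below is a hypothetical Python reimplementation of those two ideas, not Specman's actual algorithm; the data shapes (`hits` per bucket, per-bucket `goals`, per-test CPU cost) are assumptions:

```python
def coverage_grade(hits, goals):
    """Grade = average over all buckets of min(hits/goal, 1.0), so each
    bucket contributes at most its full goal."""
    return sum(min(hits.get(b, 0) / g, 1.0) for b, g in goals.items()) / len(goals)

def rank_tests(tests, goals):
    """tests: {name: (cpu_cost, {bucket: hits})}. Greedily pick the test
    with the best grade improvement per unit cost until no test helps."""
    merged, ranked = {}, []
    remaining = dict(tests)
    while remaining:
        base = coverage_grade(merged, goals)
        best, best_gain = None, 0.0
        for name, (cost, hits) in remaining.items():
            trial = dict(merged)
            for b, h in hits.items():
                trial[b] = trial.get(b, 0) + h
            gain = (coverage_grade(trial, goals) - base) / cost
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:          # no remaining test improves the grade
            break
        cost, hits = remaining.pop(best)
        for b, h in hits.items():
            merged[b] = merged.get(b, 0) + h
        ranked.append(best)
    return ranked
```

A greedy selection like this naturally drops redundant tests: once the merged hits saturate a bucket's goal, tests that only touch that bucket contribute zero gain and never get ranked.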
It should be noted that the coverage items are all user-defined and can therefore be quite subjective. A
functional coverage score of one hundred percent does not necessarily mean that the design is fully
covered by the verification suite. Code coverage, on the other hand, is an objective indication because it
simply reveals which lines of the RTL code have been executed at least once. We recommend requiring
satisfactory scores for both functional and code coverage to ensure the completeness of the test suite.
Conclusion
The complete functional verification of analog/digital mixed-signal designs requires chip-level
simulation environments that have pseudo-random capability. Large, monolithic HDL testbenches are
generally difficult to manage and adapt when users attempt to exercise and coordinate both the digital
and analog blocks in a random fashion. This paper illustrates a unified solution that integrates different
languages (i.e., Verilog, VHDL, Specman and MATLAB) into a single chip-level verification
environment and still achieves satisfactory simulation performance. In this environment, there is also a
Specman functional coverage module to ensure the completeness of the random test suite.
However, the unified mixed-language verification scheme also has some unavoidable drawbacks. The
most commonly expressed concern is that both design and verification engineers must be proficient in
multiple languages. This learning curve will likely become less of an issue as technology advances and
the lines separating digital and analog blur. Secondly, many tools (i.e., Specman, ModelSim and
MATLAB's Simulink) are required for a single chip-level simulation, compared with SystemC, which
uses C++ for everything from system specification to RTL/gate-level implementation and simulation. To
ease this situation, designers should strengthen their block-level environments to maximize the quality
of design blocks before those blocks are ported into the chip-level environment.