This document discusses various methodologies for connecting a testbench to a design under test (DUT) in SystemVerilog, including virtual interfaces, abstract classes, and direct connections. It compares these approaches and explains that the best methodology depends on the specific verification environment and requirements. Virtual interfaces are commonly used, but abstract classes can provide more flexibility. The document also covers considerations for bidirectional buses and avoiding race conditions.
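A minimal sketch of the virtual-interface hookup described above (all names here, such as `bus_if`, `driver`, `simple_dut`, and `tb_top`, are illustrative assumptions, not taken from the summarized document):

```systemverilog
// Illustrative virtual-interface testbench-to-DUT connection.
interface bus_if (input logic clk);
  logic       valid;
  logic [7:0] data;
endinterface

class driver;
  virtual bus_if vif;                 // handle to the physical interface
  function new(virtual bus_if v);
    this.vif = v;
  endfunction
  task send(byte b);
    @(posedge vif.clk);
    vif.valid <= 1'b1;                // drive DUT signals through the handle
    vif.data  <= b;
    @(posedge vif.clk);
    vif.valid <= 1'b0;
  endtask
endclass

module simple_dut (bus_if bus);       // DUT connected by an interface port
  always @(posedge bus.clk)
    if (bus.valid) $display("DUT received %0h", bus.data);
endmodule

module tb_top;
  logic clk = 0;
  always #5 clk = ~clk;
  bus_if     bus   (clk);             // one physical interface instance
  simple_dut u_dut (.bus(bus));
  initial begin
    driver d = new(bus);              // class-based testbench reaches the
    d.send(8'hA5);                    // DUT only via the virtual interface
    $finish;
  end
endmodule
```

The same driver class can be reused against any DUT that exposes a `bus_if`, which is the flexibility argument the summaries above make for virtual interfaces.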
The document discusses the building blocks of a SystemVerilog testbench. It describes the program block, which encapsulates test code and allows reading/writing signals and calling module routines. Interface and clocking blocks are used to connect the testbench to the design under test. Assertions, randomization, and other features help create flexible testbenches to verify design correctness.
COVERAGE DRIVEN FUNCTIONAL TESTING ARCHITECTURE FOR PROTOTYPING SYSTEM USING ... – VLSICS Design
Time and effort for functional testing of digital logic is a big chunk of the overall project cycle in the VLSI industry. Progress of functional testing is measured by functional coverage, where the test-plan defines what needs to be covered and the test-results indicate the quality of the stimulus. Claiming closure of functional testing requires that functional coverage hits 100% of the original test-plan. Depending on the complexity of the design and the availability of resources and budget, various methods are used for functional testing. Software simulation using logic simulators, available from Electronic Design Automation (EDA) companies, is the primary method for functional testing. The next level in functional testing is pre-silicon verification using Field Programmable Gate Array (FPGA) prototype and/or emulation platforms for stress testing the Design Under Test (DUT). With all these efforts, the purpose is to gain confidence in the maturity of the DUT and ensure first-time silicon success that meets the time-to-market needs of the industry. For any test environment, the bottleneck in achieving verification closure is controllability and observability, that is, the quality of stimulus to unearth issues at an early stage, and coverage calculation. Software simulation, FPGA prototyping, and emulation each have their own limitations, be it test time, ease of use, or the cost of software, tools, and hardware platforms. Compared to software simulation, FPGA prototyping and emulation methods pose greater challenges in quality stimulus generation and coverage calculation. Many researchers have identified the problems of bug detection/localization, but very few have touched the concept of quality stimulus generation that leads to better functional coverage and thereby uncovers hidden bugs in an FPGA prototype verification setup. This paper presents a novel approach that addresses the above-mentioned issues by embedding a synthesizable active agent and coverage collector into the FPGA prototype. The proposed architecture has been evaluated through functional and stress testing of a Universal Serial Bus (USB) Link Training and Status State Machine (LTSSM) logic module as the DUT in an FPGA prototype. The proposed solution is fully synthesizable and hence can be used in both software simulation and the prototype system. The biggest advantage is the plug-and-play nature of this active-agent component, which allows it to be reused in any USB 3.0 LTSSM digital core.
COVERAGE DRIVEN FUNCTIONAL TESTING ARCHITECTURE FOR PROTOTYPING SYSTEM USING ... – VLSICS Design
This document describes a proposed architecture for functional testing of a USB Link Training and Status State Machine (LTSSM) logic module using a synthesizable active agent embedded in an FPGA prototyping system. The active agent controls stimulus generation and injects errors to target time-sensitive link training and low-power states. It also includes a coverage collector to provide observability for closed-loop functional testing. The active agent is fully synthesizable, making it reusable for both software simulation and FPGA prototyping. Experimental results showed that the architecture generated better stimuli and improved functional coverage for stress testing the LTSSM module.
Reusable continuous-time analog SVA assertions – Régis SANTONJA
This paper shows how SystemVerilog Assertions (SVA) modules can be bound to analog IP blocks, whether behavioral or transistor-level, enabling the assertions to become a true IP deliverable that can be reused at the SoC level. It also highlights how the DPI can address the specifics of analog assertions, such as eliminating hierarchical paths, especially when probing currents. The paper further demonstrates how to switch seamlessly between digital (wreal) and analog models without breaking the assertions. Finally, it demonstrates how one can generate an adaptive clock to continuously assert analog properties whose stability over time is critical, such as current or voltage references or supplies.
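A hedged sketch of the binding idea the paper describes (the module name `ldo`, its ports, the voltage thresholds, and the fixed-period check clock are all assumptions for illustration, not the paper's code):

```systemverilog
// Hypothetical checker module for an analog block's digital (wreal) view.
module ldo_assertions (input logic enable, input real vout);
  // A free-running check clock so the continuous-time property is
  // sampled regularly, not only on digital signal activity.
  logic chk_clk = 0;
  always #1ns chk_clk = ~chk_clk;

  property p_vout_in_range;
    @(posedge chk_clk) enable |-> (vout > 1.70 && vout < 1.90);
  endproperty
  a_vout: assert property (p_vout_in_range)
    else $error("vout out of range: %0f", vout);
endmodule

// Attach the checker to every instance of the (hypothetical) ldo model
// without editing the model itself; any view exposing the same ports
// (behavioral wreal or transistor-level wrapper) can be checked.
bind ldo ldo_assertions u_ldo_sva (.enable(en), .vout(vout));
```

Because the assertions live in a separate module attached via `bind`, they travel with the IP as a deliverable, which is the reuse point the abstract makes.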
The document discusses various techniques for software testing, including white-box testing, black-box testing, unit testing, integration testing, validation testing, and system testing. It provides details on techniques like equivalence partitioning, boundary value analysis, orthogonal array testing, and graph matrices. The objective of testing is to systematically uncover errors in a minimum amount of time and effort. Testing should begin with unit testing and progress towards integration and system-level testing.
This document summarizes a research paper about binary obfuscation techniques that aim to make reverse engineering of software more difficult. The paper proposes replacing control transfer instructions like jumps and calls with signals (traps) that are handled by signal handling code to perform the control transfer. It also inserts dummy control transfers and junk instructions after traps to confuse disassemblers. Experimental results show this obfuscation causes disassemblers to miss 30-80% of instructions and make mistakes on over half of control flow edges, while increasing execution time.
Star Test Topology for Testing Printed Circuit Boards – IRJET Journal
This document presents a new testing methodology called star test topology (STT) for testing printed circuit boards. STT aims to address limitations of traditional testing methods such as being manual, limited by chip complexity, and requiring expensive test equipment. STT involves developing a shared test access port over the entire PCB and redesigning on-chip design-for-testability circuitry. In STT, devices under test are connected in a star topology with a central test access port acting as a hub. This allows test patterns to be broadcast to devices and results returned, with minimal pins/resources required. The document describes simulating STT using circuit design software and capturing output signals with a logic analyzer.
Accelerating SystemVerilog UVM based VIP to improve methodology for verifica... – VLSICS Design
In this paper we present the development of Acceleratable UVCs from standard UVCs in SystemVerilog and their usage in a UVM-based verification environment for Image Signal Processing designs to increase run-time performance. The paper covers the development of Acceleratable UVCs from standard UVCs for internal control and data buses of the ST imaging group, by partitioning transaction-level components and cycle-accurate signal-level components between the software simulator and the hardware accelerator, respectively. A Standard Co-Emulation API: Modeling Interface (SCE-MI) compliant, transaction-level communication link between testbenches running on a host system and the emulation machine is established. Accelerated Verification IPs are used in the UVM-based verification environment of Image Signal Processing designs with both simulator and emulator, as UVM acceleration is an extension of standard simulation-only UVM and is fully backward compatible with it. Acceleratable UVCs significantly reduce development schedule risks while leveraging the transaction models used during simulation.
We also discuss our experiences adopting the UVM-based methodology on TestBench-Xpress (TBX) technology step by step, and compare the run-time performance of the earlier simulator-only environment with the new, hardware-accelerated environment. Although this paper focuses on the development of Acceleratable UVCs and their usage for image signal processing designs, the same concept can be extended to non-image signal processing designs.
This document summarizes a seminar on RAIN technology. RAIN (Reliable Array of Independent Nodes) originated at Caltech to create a redundant network of processors that could automatically recover from failures. It aims to minimize connections between clients and servers and make nodes robust and independent. The architecture includes reliable transport, consistent global state sharing, and local/global fault monitors. Key features are scalability, high availability, fault tolerance for nodes, networks and data storage. Applications include high availability video and web servers. Future work involves further developing APIs and a distributed file system.
Implementing True Zero Cycle Branching in Scalar and Superscalar Pipelined Pr... – IDES Editor
In this paper, we propose a novel architectural technique that can be used to boost the performance of modern processors. It is especially useful in certain code constructs such as small loops and try-catch blocks. The technique aims to improve performance by reducing the number of instructions that need to enter the pipeline at all. We also demonstrate it in a scalar pipelined soft-core processor we developed. Lastly, we present how a superscalar microprocessor can take advantage of this technique to increase its performance.
As the complexity of the scan algorithm depends on the number of design registers, large SoC scan designs can no longer be verified in RTL simulation unless partitioned into smaller sub-blocks. This paper proposes a methodology to decrease scan-chain verification time using SCE-MI, a widely used communication protocol for emulation, and an FPGA-based emulation platform. A high-level (SystemC) testbench and FPGA-synthesizable hardware transactor models are developed for the scan chain of the ISCAS89 S400 benchmark circuit, enabling high-speed communication between the host CPU workstation and the FPGA emulator. The emulation results are compared to other verification methodologies (RTL simulation, simulation acceleration, and transaction-based emulation) and found to be 82% faster than regular RTL simulation. In addition, the emulation runs in the MHz speed range, allowing the incorporation of software applications, drivers, and operating systems, as opposed to the Hz range of RTL simulation or the sub-megahertz range accomplished in transaction-based emulation. Finally, the integration of scan testing and acceleration/emulation platforms allows more complex DFT methods to be developed and tested on a large-scale system, decreasing time to market for products.
Fault Modeling of Combinational and Sequential Circuits at Register Transfer ... – VLSICS Design
As the complexity of Very Large Scale Integration (VLSI) grows, testing becomes more tedious and difficult. At present, fault models are used to test digital circuits at the gate level or below. Using fault models at these lower levels makes testing cumbersome and leads to delays in the design cycle. In addition, developments in deep-submicron technology open the door to new defects. We must develop efficient fault detection and location methods in order to reduce manufacturing costs and time to market. There is thus a need for a new approach that tests circuits at higher levels to speed up the design cycle. This paper proposes Register Transfer Level (RTL) fault modeling for digital circuits and computes the fault coverage. The results establish that the fault coverage with the RTL fault model is comparable to the gate-level fault coverage.
FAULT MODELING OF COMBINATIONAL AND SEQUENTIAL CIRCUITS AT REGISTER TRANSFER ... – VLSICS Design
This document summarizes research on modeling faults at the register transfer level (RTL) for digital circuit testing. It proposes a new RTL fault model that models stuck-at faults by inserting buffers for each bit in the variables of the RTL code. Fault simulation is performed on faulty circuits generated from the RTL code to determine fault coverage. Results on combinational and sequential circuits show the RTL fault coverage obtained matches closely with gate-level fault coverage obtained through logic synthesis and gate-level fault simulation. The proposed RTL fault model provides a way to estimate fault coverage earlier in the design cycle compared to traditional gate-level fault simulation.
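The buffer-insertion idea in the summary above can be sketched as follows (the fault-control ports and module names are assumptions for illustration, not the paper's exact modeling):

```systemverilog
// Illustrative RTL stuck-at fault injection: each bit of an RTL
// variable passes through a "buffer" that can be forced to 0 or 1.
module fault_buf (
  input  logic d,
  input  logic inject,     // 1 = fault active on this bit
  input  logic stuck_val,  // value the bit is stuck at
  output logic q
);
  assign q = inject ? stuck_val : d;  // transparent buffer when healthy
endmodule

// Example: wrapping one bit of an RTL signal for fault simulation.
module example (input logic [3:0] a, b, output logic [3:0] sum);
  logic [3:0] raw_sum;
  assign raw_sum = a + b;
  // Inject a stuck-at-0 fault on bit 2 of the sum.
  fault_buf u_f2 (.d(raw_sum[2]), .inject(1'b1), .stuck_val(1'b0),
                  .q(sum[2]));
  assign sum[3]   = raw_sum[3];
  assign sum[1:0] = raw_sum[1:0];
endmodule
```

Running the same test set against the healthy design and against each such faulty variant, and counting which faults are detected, yields the RTL fault coverage the summary compares to gate-level coverage.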
Crypto Mark Scheme for Fast Pollution Detection and Resistance over Networking – IRJET Journal
The document proposes a new scheme called HOPVOTE to efficiently detect pollution attacks in networks. HOPVOTE is a packet HOP_VOTE technique that attaches an encrypted key to each packet to verify its integrity at each hop. It aims to rapidly identify polluters and misbehaving data/routes. The scheme uses keybit verification and cache-based recovery to identify and block nodes that drop or modify data, and recovers polluted data for retransmission. Simulation results show the scheme can effectively detect pollution attacks with low overhead. Future work will analyze routing performance under common network applications and flows.
Verilog-AMS Used in Top-Down Methodology for Wireless Integrated Circuits – Régis SANTONJA
The document discusses using the Verilog-AMS language and a top-down methodology for wireless integrated circuit design. Specifically, it discusses:
1) Using the top-down methodology to allow for general functionality verification early in the design process by analyzing the ASIC from top to bottom before individual block implementation.
2) Describing the steps of behavioral modeling of blocks using Verilog-A, replacing blocks with transistor-level designs, and simulating the entire design with mixed behavioral and transistor-level blocks.
3) Noting that the top-down methodology can be applied whether the design has a large analog/small digital portion or large digital/small analog portion.
The document discusses different levels of testing including debugging, unit testing, host testing, white-box testing, black-box testing, integration testing, capacity/performance testing, and acceptance testing. It also discusses the ideal features of a testing framework, such as being readable by non-technical customers, supporting different test levels with one tool, and being reusable. Key advantages of keyword-driven and exception-driven simulator approaches are outlined.
White paper: Software-Defined Networking Matrix Switching – Joel W. King
This white paper describes a Software-Defined Networking use case, using an OpenFlow controller and white-box switches to implement Open Systems Interconnection (OSI) Layer-1 matrix switch functionality for World Wide Technology's Advanced Technology Center (ATC).
This document contains an interview question-and-answer summary on SystemVerilog. It discusses callback mechanisms, the factory pattern, data types like logic, reg, and wire, the purpose of clocking blocks, ways to avoid race conditions between the testbench and the RTL, types of coverage, virtual interfaces, abstract classes and virtual methods, the purpose of packages, and various other SystemVerilog concepts.
Denial of Service Attacks in Software Defined Networking - A Survey – IRJET Journal
This document summarizes a survey on denial of service attacks in software defined networking. It begins with an introduction to software defined networking and how it separates the control plane from the data plane. It then discusses how saturation attacks like denial of service (DoS) and distributed denial of service (DDoS) attacks work in SDNs by overwhelming switches, controller-switch links, and controllers. Various proposals for detecting and mitigating these attacks are overviewed, such as diverting packets, caching packets, classifying packets, and anomaly detection. Challenges in mitigating low rate attacks and securing SDN-based IoT networks are also discussed.
The document discusses various architectural styles used in software design. It describes styles such as main program and subroutines, object-oriented, layered, client-server, data-flow, shared memory, interpreter, and implicit invocation. Each style has different components, communication methods, and advantages/disadvantages for managing complexity and promoting qualities like reuse and modifiability. Common examples of each style are also provided.
The document discusses software design and implementation. It describes the design phase as involving high-level architectural design to develop the overall structure of a software program, and low-level detailed design to develop specific algorithms and data structures. The implementation phase includes activities like constructing software components, testing, developing prototypes, training, and installing the system. Good design principles include modularity, low coupling between modules, and high cohesion within modules.
Serial port in an embedded system: types and uses – Sf Cable, Inc
The most common type of serial cable, the straight-through serial cable, is used to connect a Data Terminal Equipment (DTE) device to a Data Communications Equipment (DCE) device (for example, a modem).
The document discusses object-oriented (OO) design and patterns. It defines OO design as organizing software around objects rather than functions, using techniques like abstraction, encapsulation, inheritance, and polymorphism. C++ supports these OO mechanisms, making it an OO language. Classes in C++ enable abstraction and encapsulation. Inheritance and polymorphism allow modeling "is-a" relationships and enable code reuse. Common design patterns like singleton, factory, and observer are also discussed. The key to OO project success is identifying the right abstractions.
This document discusses criteria for modularization in software design. It defines modules as named entities that contain instructions, logic, and data structures. Good modularization aims to decompose a system into functional units with minimal coupling between modules. Modules should be designed for high cohesion (related elements) and low coupling (dependencies). The types of coupling from strongest to weakest are content, common, control, stamp, and data coupling. The document also discusses different types of cohesion within modules from weakest to strongest. The goal is functional cohesion with minimal coupling between modules.
NETKIT is a software component-based approach to programmable networks. It uses a lightweight component model to provide a uniform programming approach across all layers of the network. NETKIT implements a component model with interfaces, receptacles, bindings, and capsules. It also includes reflective meta-models to allow inspection and reconfiguration of components. Examples using NETKIT demonstrate simple network topologies and exploring ARP behavior.
SystemVerilog verification building blocks – Nirav Desai
SystemVerilog introduces key concepts like program blocks, interfaces, and clocking blocks to help with verification. Program blocks separate the testbench code from the design code to avoid race conditions. Interfaces encapsulate communication between blocks and help prevent errors from manual port connections. Clocking blocks synchronize signal drivers and allow specifying timing for sampled signals. Together these features help manage complexity when verifying designs.
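A minimal sketch of how these three building blocks fit together (the interface name `mem_if`, its signals, and the skew values are illustrative assumptions, not from the summarized slides):

```systemverilog
// Interface encapsulates the signals; the clocking block defines
// when the testbench samples inputs and drives outputs.
interface mem_if (input logic clk);
  logic       we;
  logic [7:0] addr, wdata, rdata;

  clocking cb @(posedge clk);
    default input #1step output #2;  // sample just before, drive after edge
    output we, addr, wdata;
    input  rdata;
  endclocking
endinterface

// Program block: test code scheduled in the reactive region,
// avoiding races with RTL evaluated at the same clock edge.
program automatic test (mem_if m);
  initial begin
    m.cb.we    <= 1'b1;       // drive only through the clocking block
    m.cb.addr  <= 8'h10;
    m.cb.wdata <= 8'hA5;
    @(m.cb);                  // advance one clocking event
    m.cb.we <= 1'b0;
    @(m.cb);
    $display("read back %0h", m.cb.rdata);
  end
endprogram
```

The separation of drive/sample timing into the clocking block is what removes the race conditions the summary mentions; the testbench never touches the raw interface signals directly.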
DESIGN APPROACH FOR FAULT TOLERANCE IN FPGA ARCHITECTURE – VLSICS Design
Failures of nano-metric technologies owing to defects and shrinking process tolerances give rise to significant challenges for IC testing. In recent years the application space of reconfigurable devices has grown to include many platforms with a strong need for fault tolerance. While these systems frequently contain hardware redundancy to allow continued operation in the presence of operational faults, the need to recover faulty hardware and return it to full functionality quickly and efficiently is great. In addition to providing functional density, FPGAs provide a level of fault tolerance generally not found in mask-programmable devices, by including the capability to reconfigure around operational faults in the field. Reliability and process variability are serious issues for future FPGAs. As process technology advances, the feature size decreases, which leads to higher defect densities; increasingly sophisticated techniques, at increased cost, are required to avoid defects. If nano-technology fabrication is applied, the yield may go down to zero, since avoiding defects during fabrication will not be a feasible option; hence, future architectures have to be defect-tolerant. In regular structures like FPGAs, redundancy is commonly used for fault tolerance. In this work we present a solution in which the configuration bitstream of the FPGA is modified by a hardware controller present on the chip itself. The technique uses redundant devices to replace faulty devices and increases yield.
Accelerating system verilog uvm based vip to improve methodology for verifica...VLSICS Design
In this paper we present the development of Acceleratable UVCs from standard UVCs in System Verilog
and their usage in UVM based Verification Environment of Image Signal Processing designs to increase
run time performance. This paper covers development of Acceleratable UVCs from standard UVCs for
internal control and data buses of ST imaging group by partitioning of transaction-level components and
cycle-accurate signal-level components between the software simulator and hardware accelerator
respectively. Standard Co-Emulation API: Modeling Interface (SCE-MI) compliant, transaction-level
communications link between test benches running on a host system and Emulation machine is established.
Accelerated Verification IPs are used at UVM based Verification Environment of Image Signal Processing
designs both with simulator and emulator as UVM acceleration is an extension of the standard simulationonly
UVM and is fully backward compatible with it. Acceleratable UVCs significantly reduces development
schedule risks while leveraging transaction models used during simulation.
In this paper, we discuss our experiences on UVM based methodology adoption on TestBench-Xpress
(TBX) based technology step by step. We are also doing comparison between the run time performance
results from earlier simulator-only environment and the new, hardware-accelerated environment. Although
this paper focuses on Acceleratable UVC’s development and their usage for image signal processing
designs. Same concept can be extended for non-image signal processing designs.
This document summarizes a seminar on RAIN technology. RAIN (Reliable Array of Independent Nodes) originated at Caltech to create a redundant network of processors that could automatically recover from failures. It aims to minimize connections between clients and servers and make nodes robust and independent. The architecture includes reliable transport, consistent global state sharing, and local/global fault monitors. Key features are scalability, high availability, fault tolerance for nodes, networks and data storage. Applications include high availability video and web servers. Future work involves further developing APIs and a distributed file system.
Implementing True Zero Cycle Branching in Scalar and Superscalar Pipelined Pr...IDES Editor
In this paper, we have proposed a novel architectural
technique which can be used to boost performance of modern
day processors. It is especially useful in certain code constructs
like small loops and try-catch blocks. The technique is aimed
at improving performance by reducing the number of
instructions that need to enter the pipeline itself. We also
demonstrate its working in a scalar pipelined soft-core
processor developed by us. Lastly, we present how a superscalar
microprocessor can take advantage of this technique and
increase its performance.
As the complexity of the scan algorithm is dependent on the number of design registers, large SoC scan
designs can no longer be verified in RTL simulation unless partitioned into smaller sub-blocks. This paper
proposes a methodology to decrease scan-chain verification time utilizing SCE-MI, a widely used
communication protocol for emulation, and an FPGA-based emulation platform. A high-level (SystemC)
testbench and FPGA synthesizable hardware transactor models are developed for the scan-chain ISCAS89
S400 benchmark circuit for high-speed communication between the host CPU workstation and the FPGA
emulator. The emulation results are compared to other verification methodologies (RTL Simulation,
Simulation Acceleration, and Transaction-based emulation), and found to be 82% faster than regular RTL
simulation. In addition, the emulation runs in the MHz speed range, allowing the incorporation of software
applications, drivers, and operating systems, as opposed to the Hz range in RTL simulation or submegahertz
range as accomplished in transaction-based emulation. In addition, the integration of scan
testing and acceleration/emulation platforms allows more complex DFT methods to be developed and
tested on a large scale system, decreasing the time to market for products.
Fault Modeling of Combinational and Sequential Circuits at Register Transfer ...VLSICS Design
As the complexity of Very Large Scale Integration (VLSI) is growing, testing becomes tedious and tougher. As of now fault models are used to test digital circuits at the gate level or below that level. By using fault models at the lower levels, testing becomes cumbersome and will lead to delays in the design cycle. In addition, developments in deep submicron technology provide an opening to new defects. We must develop efficient fault detection and location methods in order to reduce manufacturing costs and time to market. Thus there is a need to look for a new approach of testing the circuits at higher levels to speed up the design cycle. This paper proposes on Register Transfer Level (RTL) modeling for digital circuits and computing the fault coverage. The result obtained through this work establishes that the fault coverage with the RTL fault model is comparable to the gate level fault coverage.
FAULT MODELING OF COMBINATIONAL AND SEQUENTIAL CIRCUITS AT REGISTER TRANSFER ...VLSICS Design
This document summarizes research on modeling faults at the register transfer level (RTL) for digital circuit testing. It proposes a new RTL fault model that models stuck-at faults by inserting buffers for each bit in the variables of the RTL code. Fault simulation is performed on faulty circuits generated from the RTL code to determine fault coverage. Results on combinational and sequential circuits show the RTL fault coverage obtained matches closely with gate-level fault coverage obtained through logic synthesis and gate-level fault simulation. The proposed RTL fault model provides a way to estimate fault coverage earlier in the design cycle compared to traditional gate-level fault simulation.
Crypto Mark Scheme for Fast Pollution Detection and Resistance over NetworkingIRJET Journal
The document proposes a new scheme called HOPVOTE to efficiently detect pollution attacks in networks. HOPVOTE is a packet HOP_VOTE technique that attaches an encrypted key to each packet to verify its integrity at each hop. It aims to rapidly identify polluters and misbehaving data/routes. The scheme uses keybit verification and cache-based recovery to identify and block nodes that drop or modify data, and recovers polluted data for retransmission. Simulation results show the scheme can effectively detect pollution attacks with low overhead. Future work will analyze routing performance under common network applications and flows.
Verilog Ams Used In Top Down Methodology For Wireless Integrated CircuitsRégis SANTONJA
The document discusses using the VerilogAMS language and top-down methodology for wireless integrated circuit designs. Specifically, it discusses:
1) Using the top-down methodology to allow for general functionality verification early in the design process by analyzing the ASIC from top to bottom before individual block implementation.
2) Describing the steps of behavioral modeling of blocks using VerilogA, replacing blocks with transistor-level designs, and simulating the entire design with mixed behavioral and transistor-level blocks.
3) Noting that the top-down methodology can be applied whether the design has a large analog/small digital portion or large digital/small analog portion.
The document discusses different levels of testing including debugging, unit testing, host testing, white-box testing, black-box testing, integration testing, capacity/performance testing, and acceptance testing. It also discusses the ideal features of a testing framework, such as being readable by non-technical customers, supporting different test levels with one tool, and being reusable. Key advantages of keyword-driven and exception-driven simulator approaches are outlined.
White paper: Software-Defined Networking Matrix SwitchingJoel W. King
This whitepaper describes a Software-Defined Networking use case, using an OpenFlow controller and white box switches to implement Open Systems Interconnection model (OSI) Layer-1 matrix switch functionality for World Wide Technology’s Advanced Technology Center (ATC).
This document contains an interview question and answer summary on SystemVerilog. It discusses callback mechanisms, the factory pattern, data types like logic, reg and wire, the purpose of clocking blocks, ways to avoid race conditions between testbenches and RTLs, types of coverage, virtual interfaces, abstract classes and virtual methods, the purpose of packages, and various other SystemVerilog concepts.
Denial of Service Attacks in Software Defined Networking - A SurveyIRJET Journal
This document summarizes a survey on denial of service attacks in software defined networking. It begins with an introduction to software defined networking and how it separates the control plane from the data plane. It then discusses how saturation attacks like denial of service (DoS) and distributed denial of service (DDoS) attacks work in SDNs by overwhelming switches, controller-switch links, and controllers. Various proposals for detecting and mitigating these attacks are overviewed, such as diverting packets, caching packets, classifying packets, and anomaly detection. Challenges in mitigating low rate attacks and securing SDN-based IoT networks are also discussed.
The document discusses various architectural styles used in software design. It describes styles such as main program and subroutines, object-oriented, layered, client-server, data-flow, shared memory, interpreter, and implicit invocation. Each style has different components, communication methods, and advantages/disadvantages for managing complexity and promoting qualities like reuse and modifiability. Common examples of each style are also provided.
The document discusses software design and implementation. It describes the design phase as involving high-level architectural design to develop the overall structure of a software program, and low-level detailed design to develop specific algorithms and data structures. The implementation phase includes activities like constructing software components, testing, developing prototypes, training, and installing the system. Good design principles include modularity, low coupling between modules, and high cohesion within modules.
Serial port in an embedded system types and usesSf Cable, Inc
The most common type of serial cable, straight-through serial cable, is useful to connect a Data Terminal Equipment (DTE) device to a Data Communications Equipment (DCE) device (for example, a modem).
The document discusses object-oriented (OO) design and patterns. It defines OO design as organizing software around objects rather than functions, using techniques like abstraction, encapsulation, inheritance, and polymorphism. C++ supports these OO mechanisms, making it an OO language. Classes in C++ enable abstraction and encapsulation. Inheritance and polymorphism allow modeling "is-a" relationships and enable code reuse. Common design patterns like singleton, factory, and observer are also discussed. The key to OO project success is identifying the right abstractions.
This document discusses criteria for modularization in software design. It defines modules as named entities that contain instructions, logic, and data structures. Good modularization aims to decompose a system into functional units with minimal coupling between modules. Modules should be designed for high cohesion (related elements) and low coupling (dependencies). The types of coupling from strongest to weakest are content, common, control, stamp, and data coupling. The document also discusses different types of cohesion within modules from weakest to strongest. The goal is functional cohesion with minimal coupling between modules.
NETKIT is a software component-based approach to programmable networks. It uses a lightweight component model to provide a uniform programming approach across all layers of the network. NETKIT implements a component model with interfaces, receptacles, bindings, and capsules. It also includes reflective meta-models to allow inspection and reconfiguration of components. Examples using NETKIT demonstrate simple network topologies and exploring ARP behavior.
System verilog verification building blocksNirav Desai
SystemVerilog introduces key concepts like program blocks, interfaces, and clocking blocks to help with verification. Program blocks separate the testbench code from the design code to avoid race conditions. Interfaces encapsulate communication between blocks and help prevent errors from manual port connections. Clocking blocks synchronize signal drivers and allow specifying timing for sampled signals. Together these features help manage complexity when verifying designs.
DESIGN APPROACH FOR FAULT TOLERANCE IN FPGA ARCHITECTUREVLSICS Design
Failures of nano-metric technologies owing to defects and shrinking process tolerances give rise to significant challenges for IC testing. In recent years the application space of reconfigurable devices has grown to include many platforms with a strong need for fault tolerance. While these systems frequently contain hardware redundancy to allow for continued operation in the presence of operational faults, the need to recover faulty hardware and return it to full functionality quickly and efficiently is great. In addition to providing functional density, FPGAs provide a level of fault tolerance generally not found in mask-programmable devices by including the capability to reconfigure around operational faults in the field. Reliability and process variability are serious issues for FPGAs in the future. With advancement in process technology, the feature size is decreasing which leads to higher defect densities, more sophisticated techniques at increased costs are required to avoid defects. If nano-technology fabrication are applied the yield may go down to zero as avoiding defect during fabrication will not be a feasible option Hence, feature architecture have to be defect tolerant. In regular structure like FPGA, redundancy is commonly used for fault tolerance. In this work we present a solution in which configuration bit-stream of FPGA is modified by a hardware controller that is present on the chip itself. The technique uses redundant device for replacing faulty device and increases the yield.
Similar to siemens-eda_technical-paper_the-missing-link-the-testbench-to-dut-connection.pdf (20)
Google Calendar is a versatile tool that allows users to manage their schedules and events effectively. With Google Calendar, you can create and organize calendars, set reminders for important events, and share your calendars with others. It also provides features like creating events, inviting attendees, and accessing your calendar from mobile devices. Additionally, Google Calendar allows you to embed calendars in websites or platforms like SlideShare, making it easier for others to view and interact with your schedules.
Building a Raspberry Pi Robot with Dot NET 8, Blazor and SignalR - Slides Onl...Peter Gallagher
In this session delivered at Leeds IoT, I talk about how you can control a 3D printed Robot Arm with a Raspberry Pi, .NET 8, Blazor and SignalR.
I also show how you can use a Unity app on an Meta Quest 3 to control the arm VR too.
You can find the GitHub repo and workshop instructions here;
https://bit.ly/dotnetrobotgithub
David Rich
Executive summary

This paper focuses on several methodologies used in practice to connect the testbench to the DUT. The most common approach is the use of SystemVerilog's virtual interface. This is so common that people fail to investigate other methodologies that have merit in certain situations. The abstract class methodology has been presented before, but still seems to have barriers to adoption. There are also some obvious direct connection methodologies that are often overlooked. This paper will compare and contrast each one so that users may choose the methodology that meets their requirements.
Siemens Digital Industries Software
siemens.com/eda

The missing link: the testbench to DUT connection
White paper | The missing link: the testbench to DUT connection
Contents

Introduction
Static pin to pin connections
Dynamic connections
Whitebox verification
  A. Hierarchical references
  B. Bind
Special design considerations
  A. Bidirectional or tri-state buses
  B. Race conditions and clocking blocks
Conclusion
References
Appendix A. – Example of using a virtual interface and abstract class together
Introduction

One of the key notions of SystemVerilog was the merging of hardware verification language (HVL) concepts used in a testbench with hardware description language (HDL) concepts used in a design. Even though the language has merged, the experiences of the users have not – they still have different ideas about which constructs can and cannot be used in their environment.

As has been true since the beginning of logic design, a design under test (DUT) is a boundary between what will be implemented in hardware and everything else needed to verify that implementation. In SystemVerilog, as in Verilog, that boundary is represented by a module.[1] The job of the testbench is to provide the necessary pin wiggles to stimulate the DUT and to analyze the pin wiggles coming out. Although this may seem like an oversimplification, no matter how complex the environment becomes, this point remains the same.

The difference that SystemVerilog introduces is that most of the testbench will be written in classes constructed dynamically after the beginning of simulation. That means connections between the DUT and testbench normally need to be dynamic as well.

Let us start with a progression of testbench environments, beginning with an original Verilog testbench, and gradually introduce additional levels of complexity along with the features in SystemVerilog that address this added complexity.
Static pin to pin connections

In Verilog, a design under test (DUT) can be modeled exactly like that – a testbench module above with the design instantiated in a module underneath. The DUT port connections are made with variables and wires directly connected to the DUT instance. Procedural code at the top level stimulates and observes the port signals.

This structure rapidly breaks down as the design becomes more complex. The test usually needs to become modularized just as the DUT is, so the testbench is broken into a separate module or several modules and is instantiated alongside the DUT. Wires at the top level connect the ports of the test and DUT together.

Observing the redundancy of repeatedly specifying the names of signals involved in connections, SystemVerilog added the concept of an interface to represent a collection of signals. Those signals are defined once in an interface and used in the port connections to the DUT and testbench. This can significantly reduce the total number of code lines, especially when there are a large number of signals that can be put into the interface.

Sometimes modules are brought in from legacy designs, or from environments that do not support SystemVerilog interfaces. In that case you can simply replace the port list of the DUT instantiation line with hierarchical references to the interface signals.

DUT d1(itf.clock, itf.reset, itf.data, itf.address);

The connections shown up to this point have all been structurally static. The testbench and DUT modules, as well as the connections to those modules, are declared at compile time. Any change to the structure requires recompilation and elaboration of that structure.
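The interface-based static connection described in this section can be sketched as follows. The interface and signal names (itf, clock, reset, data, address) follow the hierarchical-reference example; the signal widths and module bodies are assumptions for illustration only.

```systemverilog
// Minimal sketch: one interface instance bundles the signals and is
// passed to both the DUT and the test, replacing per-signal wiring.
interface itf_t;
  logic        clock, reset;
  logic [7:0]  data;
  logic [15:0] address;
endinterface

module DUT(itf_t itf);    // design references itf.clock, itf.data, etc.
  // ... design logic ...
endmodule

module TEST(itf_t itf);   // testbench drives and observes the same bundle
  initial begin
    itf.reset <= 1'b1;
    // ... stimulus ...
  end
endmodule

module top;               // one interface instance wires both sides
  itf_t itf();
  DUT  d1(itf);
  TEST t1(itf);
endmodule
```

Every signal name appears once in the interface declaration instead of once per connection, which is where the reduction in code lines comes from.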
Dynamic connections

In a class-based testbench environment, classes are used instead of modules to represent the different components of a testbench, such as drivers and monitors. Because SystemVerilog classes are always constructed dynamically, we can take advantage of that to randomize the testbench as well as override the behavior of those classes by extending them.

Because classes do not have ports that can be connected to other module ports, some other mechanism must be used to communicate with the DUT. We could simply use hierarchical references to signals in a module but, as shown previously in [2], this leads to un-reusable and unmanageable code. The recommended practice of putting class declarations in packages enforces this restriction, because hierarchical references are not allowed from inside packages.

A. Virtual Interfaces

A virtual interface variable is the simplest mechanism to dynamically refer to an interface instance. This type of variable can be procedurally assigned to reference any interface instance of the same type. The driver class is then free of hierarchical references, and its run method can synchronize to the clock inside the interface. In this way a virtual interface variable is similar to a class handle variable, where the interface is used as a type and you are referencing members of the class.

Because the interface is treated like a type, any parameterization of the interface instance needs to be repeated in the virtual interface declaration. A variable vitf1 declared for one parameterization can only be assigned instances with that same parameterization, such as top.i1, and a variable vitf2 of a different parameterization only matching instances such as top.i2. As interfaces get more complicated in larger designs, keeping all the virtual interface parameters in sync with the interface instance parameters becomes a challenge.
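The white paper illustrates this with a figure that is not reproduced in this text; the following sketch, with assumed parameter values and signal names, is consistent with the surrounding description of vitf1, vitf2, top.i1, and top.i2.

```systemverilog
// Sketch: the parameter override in each virtual interface declaration
// must match the parameterization of the instance it references.
interface bus_itf #(int WIDTH = 8);
  logic clock;
  logic [WIDTH-1:0] data;
endinterface

module top;
  bus_itf #(.WIDTH(8))  i1();
  bus_itf #(.WIDTH(16)) i2();
endmodule

class driver;
  virtual bus_itf #(.WIDTH(8))  vitf1; // may only reference top.i1
  virtual bus_itf #(.WIDTH(16)) vitf2; // may only reference top.i2
  task run();
    @(posedge vitf1.clock);   // synchronize to the clock in the interface
    vitf1.data <= '1;         // drive through the virtual interface
  endtask
endclass
```

Each new parameter combination in the design requires a matching virtual interface declaration in the testbench, which is the synchronization burden the text describes.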
B. Abstract Classes

The abstract class construct of SystemVerilog is an object-oriented programming concept used to define software interfaces. It has functionality similar to that of a virtual interface, with the benefit of a class-based approach that may include inheritance and polymorphism. Another benefit is that an abstract class can completely decouple a testbench class component from any dependencies on the SystemVerilog interface, such as parameter overrides. A downside is that all members of the interface need to be accessed via methods, never by direct reference. However, this is the normal programming style for object-oriented software.
Now our testbench classes can be written to use the concrete class handle, referenced via an abstract class variable, instead of the virtual interface variable.
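A condensed sketch of this pattern, modeled on the probe classes in Appendix A (the names and widths here are illustrative, not the appendix code verbatim):

```systemverilog
// The package sees only the abstract class: no interface type,
// no parameter overrides leak into the testbench component.
package probe_pkg;
  virtual class probe_abstract #(type T = int);
    pure virtual function T    get_probe();
    pure virtual function void set_probe(T Data);
  endclass
endpackage

// The concrete class lives inside the interface, where it can
// legally touch the signals directly.
interface probe_itf #(int WIDTH = 8) (inout wire [WIDTH-1:0] WData);
  import probe_pkg::*;
  typedef logic [WIDTH-1:0] T;
  T Data_reg = 'z;
  assign WData = Data_reg;

  class probe extends probe_abstract #(T);
    function T get_probe();
      return WData;
    endfunction
    function void set_probe(T Data);
      Data_reg = Data;
    endfunction
  endclass
endinterface

// A testbench component holds only the abstract handle; all signal
// access goes through the virtual methods.
class my_driver;
  probe_pkg::probe_abstract #(logic [7:0]) probe_h;
  task run();
    probe_h.set_probe(8'hA5);
  endtask
endclass
```

Because my_driver depends only on probe_abstract, it compiles in a package with no knowledge of the interface or its parameter overrides; the concrete object is supplied at run time.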
Whitebox verification

It is not always possible to treat the DUT as a black box; that is, to monitor and drive signals only at the top-level ports of the DUT. This is true especially as one moves from block-level testing to larger system-level testing. Sometimes we need implementation knowledge to access signals internal to the DUT. This is known as whitebox verification.

A. Hierarchical references

Verilog has always provided the ability to reach inside almost any hierarchical scope from another scope. Although this is a very convenient feature, it has several drawbacks:

1. It makes the code less reusable because the references in the testbench are dependent on the structure of the DUT.
2. It requires full or partial recompilation of the DUT to provide access to internal signals.
3. It creates a poorly optimized DUT because internal signals may need to be preserved to provide access.

It may be impossible to avoid all hierarchical references. As a general rule, it is best to keep them at the top level of the testbench, or isolated to as few modules as practical.

B. Bind

SystemVerilog provides a bind construct that allows you to instantiate one module or interface into another target module or interface without modifying the source code of the target. The ports of the instance are usually connected to the internal signals of the target. If you bind an interface, you can use either the virtual interface or abstract class mechanism to reference the interface. An interface used in a bind construct typically has ports used to connect to the internal signals of the target module.

The top-level TEST can declare the bind statement, or some other designated module suited for that purpose can declare all the bind statements for a particular testbench.

A complete UVM based example that probes internal signals using bind is shown in Appendix A. This example also shows the recommended way to reach into the design hierarchy using a configuration database.[5]
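Using the model module and probe_itf interface from Appendix A, a bind statement might look like the sketch below. The instance name probe_inst and the choice of a dedicated bindings module are assumptions; the port and parameter names come from the appendix.

```systemverilog
// Instantiate probe_itf inside every instance of module 'model'
// without editing model's source. Because the bound instance
// elaborates in the target's scope, the target's parameter
// (WordSize) and internal signal (InternalBus) resolve directly.
module test_bindings;
  bind model probe_itf #(.WIDTH(WordSize)) probe_inst (.WData(InternalBus));
endmodule
```

A testbench component then retrieves the concrete probe class for a given instance (for example through the UVM factory or configuration database, as Appendix A shows) and accesses InternalBus through the abstract class methods.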
Special design considerations

Some aspects of the DUT to testbench connection require more detailed knowledge of basic Verilog modeling issues, especially when dealing with signal strengths and race conditions.

A. Bidirectional or tri-state buses

Any signal with multiple drivers (continuous assignments, in this context) needs to be modeled using a net. A net is the only construct that resolves the effect of different states and strengths simultaneously driving the same signal. The behavior of a net is defined by a built-in resolution function using the values and strengths of all the drivers on the net. Every time there is a change on one of the drivers, the function is called to produce a resolved value. The function is created at elaboration (before simulation starts) and is based on the kind of net type: wand, wor, tri1, etc.

Procedural assignments to variables use the simple rule: last write wins. You are not allowed to make procedural assignments to nets because there is no way to represent how the value you are assigning should be resolved with the other drivers. There is also no way to represent how long the procedural assignment should remain in effect before another continuous assignment takes over.

Class-based testbenches cannot have continuous assignments because classes are dynamically created objects and are not allowed to contain structural constructs like continuous assignments. Although a class can read the resolved value of a net, it can only make procedural assignments to variables. Therefore, the testbench needs to create a variable that is continuously assigned to a wire: procedural assignments are made to a variable such as bus_reg from the class-based testbench, while the wire bus carries the resolved value of all the drivers.
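The paper illustrates this with a figure not reproduced in this text; a minimal sketch of the idea follows, with the bus width assumed for illustration.

```systemverilog
// The class-based testbench writes a variable; a continuous assignment
// resolves that variable onto the net together with the DUT's drivers.
interface tri_bus_itf;
  wire  [7:0] bus;           // resolved value of all drivers
  logic [7:0] bus_reg = 'z;  // written procedurally by the testbench
  assign bus = bus_reg;      // the one continuous driver the class controls
endinterface
// A driver class (via a virtual interface or abstract class) executes
//   vitf.bus_reg = value;  // drive the bus
//   vitf.bus_reg = 'z;     // release it, letting other drivers win
```

Assigning 'z to bus_reg effectively removes the testbench's contribution, so the net's resolution function is determined by the remaining drivers.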
B. Race conditions and clocking blocks

If not modeled correctly, a testbench is susceptible to the same race conditions as the DUT. Any signal that is written by one process and read in another process, when the two processes are synchronized by the same clock or event, must be assigned using a non-blocking assignment (NBA).

A clocking block can address these race conditions even further by sampling or driving signals some number of time units away from the clock edge. It also takes care of the procedural-assignment-to-a-net problem by implicitly creating a continuous assignment from the clocking block variable to the net.

One caution about using clocking blocks: use only the @cb event to synchronize the process that is using the clocking block variables. Using @(posedge clk) or any other event will introduce race conditions.
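The clocking-block discipline described above can be sketched as follows; the skew values, interface name, and signal names are illustrative assumptions.

```systemverilog
// A clocking block that samples inputs 1ns before the clock edge and
// drives outputs 2ns after it. Testbench processes synchronize only
// on @cb, never on @(posedge clk), to stay race-free.
interface dut_itf (input wire clk);
  logic req;
  logic ack;

  clocking cb @(posedge clk);
    default input #1ns output #2ns;
    output req;   // driven through the clocking block
    input  ack;   // sampled through the clocking block
  endclocking
endinterface

// In a driver class holding `virtual dut_itf vitf`:
//   @(vitf.cb);              // wait on the clocking event
//   vitf.cb.req <= 1'b1;     // clocking-block drive
//   if (vitf.cb.ack) ...;    // read the sampled value
```

If req were a net rather than a variable, the clocking-block drive would still work, because the implicit continuous assignment mentioned above handles the resolution.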
Conclusion

A number of different mechanisms have been shown to connect the DUT to the testbench. They are not meant to be exclusive. The complexity of your verification environment will dictate the most efficient mechanism for you to use. Above all, it is important to be as consistent as possible with your coding decisions and to document those decisions.
References

1. IEEE (2009) "Standard for SystemVerilog – Unified Hardware Design, Specification, and Verification Language", IEEE Std 1800-2009.
2. Bromley, J. & Rich, D. (Feb 2008) "Abstract BFMs Outshine Virtual Interfaces for Advanced SystemVerilog Testbenches", Design & Verification Conference, San Jose, CA.
3. Baird, M. (Feb 2010) "Coverage Driven Verification of an Unmodified DUT within an OVM Testbench", Design & Verification Conference, San Jose, CA.
4. Bromley, J. (Feb 2012) "First Reports from the UVM Trenches: User-friendly, Versatile and Malleable, or Just the Emperor's New Methodology?", Design & Verification Conference, San Jose, CA.
5. Horn, M., Peryer, M. et al. (retrieved February 7, 2012) "Connect/Dut Interface", Verification Academy UVM/OVM Cookbook, <http://verificationacademy.com/uvm-ovm/Connect/Dut_Interface>
Appendix A. – Example of using a virtual interface and abstract class together
// $Id: probe.sv,v 1.4 2010/04/01 14:34:38 drich Exp $
//----------------------------------------------------------------------
// Dave Rich dave_rich@mentor.com
// Copyright 2007-2012 Mentor Graphics Corporation
// All Rights Reserved Worldwide
//
// Licensed under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in
// compliance with the License. You may obtain a copy of
// the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in
// writing, software distributed under the License is
// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See
// the License for the specific language governing
// permissions and limitations under the License.
//----------------------------------------------------------------------
// -----------------------------------------------------------------------------------
// file RTL.sv
// Package of parameters to be shared by DUT and Testbench
package common_pkg;
parameter WordSize1 = 32;
parameter WordSize2 = 16;
endpackage
// simple DUT containing two sub models.
module DUT (input wire CLK, CS, WE);
import common_pkg::*;
wire CS_L = !CS;
model #(WordSize1) sub1 (.CLK, .CS, .WE);
model #(WordSize2) sub2 (.CLK, .CS(CS_L), .WE);
endmodule
// simple lower level modules internal to the DUT
module model (input wire CLK, CS, WE);
parameter WordSize = 1;
reg [WordSize-1:0] Mem;
wire [WordSize-1:0] InternalBus;
always @(posedge CLK)
if (CS && WE)
begin
Mem = InternalBus;
$display("%m Wrote %h at %t",InternalBus,$time);
end
assign InternalBus = (CS && !WE) ? Mem : 'z;
endmodule
// -----------------------------------------------------------------------------------
// file probe_pkg.sv
//
// abstract class interface
package probe_pkg;
import uvm_pkg::*;
virtual class probe_abstract #(type T=int) extends uvm_object;
function new(string name="");
super.new(name);
endfunction
// the API for the internal probe
pure virtual function T get_probe();
pure virtual function void set_probe(T Data );
pure virtual task edge_probe(bit Edge=1);
endclass : probe_abstract
endpackage : probe_pkg
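Code that consumes these probes can be written entirely against the abstract API. Below is a minimal sketch of such a consumer; the class name `probe_user` and the way its handle gets filled in are illustrative assumptions, not part of the listing:

```systemverilog
// Hypothetical consumer class: it knows only the abstract probe API from
// probe_pkg, so it compiles with no reference to any DUT hierarchy.
class probe_user #(type T = logic);
  probe_pkg::probe_abstract #(T) probe_h; // filled in elsewhere (e.g. by the factory)
  // Block until the probed signal equals Edge, then sample it.
  task wait_and_sample(bit Edge, output T value);
    probe_h.edge_probe(Edge);
    value = probe_h.get_probe();
  endtask
endclass
```

Because the element type is bound through the parameter T, the same consumer works unchanged for any probe width a DUT-side concrete class provides.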
// This interface will be bound inside the DUT and provides the concrete class definition.
interface probe_itf #(int WIDTH) (inout wire [WIDTH-1:0] WData);
  import uvm_pkg::*;
  typedef logic [WIDTH-1:0] T;
  T Data_reg = 'z;
  assign WData = Data_reg;
  import probe_pkg::*;
  // String used for factory by_name registration
  localparam string PATH = $psprintf("%m");
  // Concrete class
  class probe extends probe_abstract #(T);
    function new(string name="");
      super.new(name);
    endfunction : new
    typedef uvm_object_registry #(probe, {"probe_",PATH}) type_id;
    static function type_id get_type();
      return type_id::get();
    endfunction
    // Provide the implementations for the pure methods
    function T get_probe();
      return WData;
    endfunction
    function void set_probe(T Data);
      Data_reg = Data;
    endfunction
    task edge_probe(bit Edge=1);
      @(WData iff (WData === Edge));
    endtask
  endclass : probe
endinterface : probe_itf
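Because PATH expands to the hierarchical name of each bound interface instance, every concrete probe class registers with the factory under a unique by-name string of the form "probe_&lt;instance path&gt;". The by-name creation and the required downcast can be centralized in one place; the following sketch is an assumption for illustration (the names `probe_finder` and `find` are not part of the listing):

```systemverilog
import uvm_pkg::*;
import probe_pkg::*;

// Hypothetical helper: fetch an abstract probe handle given the
// hierarchical path of a bound probe_itf instance, e.g.
// "testbench.dut.sub1.m1_1".
class probe_finder #(type T = logic);
  static function probe_abstract #(T) find(string path);
    uvm_object obj;
    probe_abstract #(T) h;
    obj = factory.create_object_by_name({"probe_", path}, "", path);
    if (obj == null || !$cast(h, obj))
      `uvm_fatal("NOPROBE", {"No probe of matching type bound at ", path})
    return h;
  endfunction
endclass
```

A driver's build_phase could then read, for example, `sub1_InternalBus_h = probe_finder #(logic [WordSize1-1:0])::find("testbench.dut.sub1.m1_1");`, keeping the error handling for missing or mismatched probes in one spot.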
// -----------------------------------------------------------------------------------
// file test_pkg.sv
//
// This package defines the UVM test environment
`include "uvm_macros.svh"
package test_pkg;
  import uvm_pkg::*;
  import common_pkg::*;
  import probe_pkg::*;
  // Top-level UVM driver class
  class my_driver extends uvm_component;
    function new(string name="", uvm_component parent=null);
      super.new(name,parent);
    endfunction
    typedef uvm_component_registry #(my_driver,"my_driver") type_id;
    // Virtual interface for accessing top-level DUT signals
    typedef virtual DUT_itf vi_itf_t;
    vi_itf_t vi_itf_h;
    // Abstract class variables that will hold handles to concrete classes built by the factory.
    // These handle names need not be tied to the actual bind instance locations; they are
    // named this way only to make the example easier to follow. Configuration strings could
    // be used to set the factory names instead.
    probe_abstract #(logic [WordSize1-1:0]) sub1_InternalBus_h;
    probe_abstract #(logic [WordSize2-1:0]) sub2_InternalBus_h;
    probe_abstract #(logic) sub1_ChipSelect_h;
    function void build_phase(uvm_phase phase);
      if (!uvm_config_db#(vi_itf_t)::get(this,"","DUT_itf",vi_itf_h))
        uvm_report_fatal("NOVITF","No DUT_itf instance set",,`__FILE__,`__LINE__);
      $cast(sub1_InternalBus_h,
            factory.create_object_by_name("probe_testbench.dut.sub1.m1_1",,"sub1_InternalBus_h"));
      $cast(sub2_InternalBus_h,
            factory.create_object_by_name("probe_testbench.dut.sub2.m1_2",,"sub2_InternalBus_h"));
      $cast(sub1_ChipSelect_h,
            factory.create_object_by_name("probe_testbench.dut.sub1.m1_3",,"sub1_ChipSelect_h"));
    endfunction : build_phase
    // Simple driver routine just for testing the probe classes
    task run_phase(uvm_phase phase);
      phase.raise_objection(this);
      vi_itf_h.WriteEnable <= 1;
      vi_itf_h.ChipSelect  <= 0;
      fork
        process1: forever begin
          @(posedge vi_itf_h.Clock);
          `uvm_info("GET1", $sformatf("%h", sub1_InternalBus_h.get_probe()), UVM_MEDIUM)
          `uvm_info("GET2", $sformatf("%h", sub2_InternalBus_h.get_probe()), UVM_MEDIUM)
        end
        process2: begin
          sub1_ChipSelect_h.edge_probe();
          `uvm_info("EDGE3", "CS had a posedge", UVM_MEDIUM)
          sub1_ChipSelect_h.edge_probe(0);
          `uvm_info("EDGE3", "CS had a negedge", UVM_MEDIUM)
        end
        process3: begin
          @(posedge vi_itf_h.Clock);
          vi_itf_h.ChipSelect <= 0;
          sub2_InternalBus_h.set_probe('1);
          @(posedge vi_itf_h.Clock);
          vi_itf_h.ChipSelect <= 1;
          sub1_InternalBus_h.set_probe('1);
          @(posedge vi_itf_h.Clock);
          vi_itf_h.ChipSelect <= 0;
          sub2_InternalBus_h.set_probe('0);
          @(posedge vi_itf_h.Clock);
          @(posedge vi_itf_h.Clock);
        end
      join_any
      phase.drop_objection(this);
    endtask : run_phase
  endclass : my_driver
  class my_test extends uvm_test;
    function new(string name="", uvm_component parent=null);
      super.new(name,parent);
    endfunction
    typedef uvm_component_registry #(my_test,"my_test") type_id;
    my_driver my_drv_h;
    function void build_phase(uvm_phase phase);
      my_drv_h = my_driver::type_id::create("my_drv_h",this);
    endfunction : build_phase
  endclass : my_test
endpackage : test_pkg
// -----------------------------------------------------------------------------------
// file testbench.sv
//
interface DUT_itf (input bit Clock);
  logic ChipSelect;
  logic WriteEnable;
endinterface : DUT_itf
module testbench;
  import common_pkg::*;
  import uvm_pkg::*;
  import test_pkg::*;
  bit SystemCLK = 1;
  always #5 SystemCLK++;
  // The DUT interface
  DUT_itf itf(.Clock(SystemCLK));
  typedef virtual DUT_itf vi_itf_t;
  // The DUT
  DUT dut(.CLK(itf.Clock), .CS(itf.ChipSelect), .WE(itf.WriteEnable));
  // Instantiate probe interfaces bound inside the DUT
  bind model : dut.sub1 probe_itf #(.WIDTH(common_pkg::WordSize1)) m1_1(InternalBus);
  bind model : dut.sub2 probe_itf #(.WIDTH(common_pkg::WordSize2)) m1_2(InternalBus);
  bind model : dut.sub1 probe_itf #(.WIDTH(1)) m1_3(CS);
  initial begin
    uvm_config_db#(vi_itf_t)::set(null,"","DUT_itf",itf);
    run_test("my_test");
  end
endmodule : testbench
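A common variation on this pattern, not shown in the listing above, is to let each bound interface publish its concrete object through uvm_config_db instead of relying on factory by-name creation. The following is a sketch of the extra code that would go inside probe_itf, assuming the same probe class and PATH parameter:

```systemverilog
// Sketch only: additional code inside interface probe_itf.
// Each bound instance constructs one concrete probe and publishes it
// under its own hierarchical name, so no factory string lookup is needed.
initial begin
  probe p = new(PATH);
  uvm_config_db #(probe_abstract #(T))::set(null, "*", PATH, p);
end
```

The driver's build_phase would then call uvm_config_db get with the instance path as the field name instead of casting the result of factory.create_object_by_name; the tradeoff is config-db string matching in place of factory string registration.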