Are you new to functional verification? Or do you need a refresher? This presentation takes you through the basics of functional verification - overall scope and process with examples. Also included are some tips on do's and don'ts!
Functional verification is one of the key bottlenecks in the rapid design of integrated circuits. It is estimated that verification in its entirety accounts for up to 60% of design resources, including duration, computer resources and total personnel. The three primary tools used in logic and functional verification of commercial integrated circuits are simulation (at various levels), emulation at the chip level, and formal verification.
2. | 2
Contents
▪ Verification Philosophy
▪ Familiarity
▪ Scope of Verification
▪ Functional Verification as a Career
▪ Overview of Process
▪ Some Examples: The 3 Musketeers
▪ Do’s and Don’ts
▪ State of the Art
3. | 3
The Verification Philosophy
KNOW EVOLUTION to understand why things are the way they are!
GET THE DNA to evolve further
GO BEYOND basics to get productive
5. | 5
Basic Lab Verification
How did you verify the ICs used during the Digital Design Lab?
- Covered using the truth table, by giving various inputs and checking outputs
- Downsides:
  - Time consuming
  - Error prone
  - Repeatability is not automatic
8. | 8
• Four major types of verification:
  1. Functional
  2. Timing
  3. Test
  4. Equivalence
• Functional verification ensures correct logical behavior - under all conditions
• 70% of the chip cycle is verification, and more than 70% of re-spins are due to functional bugs
• Functional verification is very time consuming and is thus always on the critical path
Scope of Verification (1/2)
9. | 9
• Design technologies are evolving at a Moore's-Law-friendly pace
• Functional verification technologies are yet to catch up with the advancements happening in the area of design
• Functional verification tools and methodologies are attempting to reduce the overall verification time by enabling parallelism of effort, higher abstraction levels and automation
• There is no single formula for success
Scope of Verification (2/2)
12. | 12
• Knows the design
• Contributes to architecture
• Gives feedback as first user
• Is Creative, Logical and Lateral
• Programmer
• “Quality cop”
• Bridges specification, architecture & design
Hats worn by a Verification Engineer
14. | 14
1. Verification plan
2. Test bench
3. Bring up
4. Debug
5. Regressions
6. Sign off
Process of Functional Verification
15. | 15
• Create varied stimuli and check if the response is as expected
• Ensure the design is not doing anything unexpected
• Look at both functional specifications and micro-architecture specifications (black box & white box)
[Diagram: stimuli driven into the DUT, response checked]
Verification Plan (1/2)
16. | 16
• Test plan: create the list of programmable parameters and features
  1. Normal operation: basic & concurrent
  2. Corner cases
  3. Error cases
  4. Negative cases
  5. Stress cases
• Functional coverage plan: capture important features and parameters
• Strategy to execute the test plan
Verification Plan (2/2)
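The functional coverage plan above boils down to bin bookkeeping: each feature or parameter becomes a bin, and stimulus hits are tallied until every bin is covered. A minimal Python sketch of the idea (the bin names are illustrative, not from the deck; a real plan would use SystemVerilog covergroups):

```python
# Each planned feature/parameter category becomes a coverage bin.
bins = {"normal_basic": 0, "normal_concurrent": 0, "corner": 0,
        "error": 0, "negative": 0, "stress": 0}

def sample(bin_name):
    """Record that a stimulus exercised this bin."""
    bins[bin_name] += 1

# Pretend four stimuli ran; three distinct bins were exercised.
for hit in ["normal_basic", "corner", "error", "normal_basic"]:
    sample(hit)

# Coverage = fraction of bins hit at least once.
coverage = sum(1 for v in bins.values() if v) / len(bins) * 100
print(f"{coverage:.1f}% of bins hit")  # → 50.0% of bins hit
```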
17. | 17
Every test bench should have the following characteristics:
• Ability to exercise all features and parameters
• Ability to achieve the desired coverage
• Necessary abstraction
• Scalability
• Balance ease of verification vs. reality
• Ease of understanding and maintenance
• Ease of debugging - developer vs. user
  • The developer is interested in the internal architecture
  • The user is interested in only the black-box view
• Easily controllable and observable
Test Bench - Architecture
18. | 18
External interfaces of the DUT:
• Reset and clock generators
• Bus functional models / drivers / monitors
[Diagram: BFM and driver connected to the DUT ports]
Test Bench - Components (1/3)
19. | 19
Functionality:
• API or transactors
• Scoreboard
• Golden models
• End of test
• Assertions
• Functional coverage
[Diagram: scoreboard, register and transfer layers on top of the BFM/driver and DUT ports]
Test Bench - Components (2/3)
20. | 20
Stimuli:
• Randomized parameters
• Constrained random generators
• Self-checking directed tests
[Diagram: CFG, GEN and TEST layers on top of the scoreboard, BFM/driver and DUT ports]
Test Bench - Components (3/3)
21. | 21
• Get the clocks and resets up
• No 'X' & 'Z' on critical inputs; tie them off if needed
• Get all critical interfaces of the DUT up
• Start enabling features in their basic form and introduce variables one at a time
Design and Test Bench Bring Up
22. | 22
• Needed when things are not going as expected
• Takes up 30% of the functional verification activity
• The first step is to isolate the problem between the DUT and the TB
• Isolation is done with the help of logging
  • Use regular-expression-friendly logging
  • Extract stimuli and response from the log
  • If it is not isolatable from the log, look at waveforms
• The benchmark of debug is being able to suggest the design fix
• The verification engineer owns the bug until it is fixed
Debug
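The "regular-expression-friendly logging" point can be illustrated with a small Python sketch: when every message carries a timestamp, a component tag and a payload, one pattern recovers the stimuli and responses, and diffing the two streams isolates the failure mechanically. The log format and tags here are hypothetical:

```python
import re

# Hypothetical log excerpt in a grep/regex-friendly format.
log = """\
[100] [TB.DRV] PUSH data=0xa5
[110] [DUT.OUT] POP data=0xa5
[120] [TB.DRV] PUSH data=0x3c
[130] [DUT.OUT] POP data=0x3d
"""

# One pattern recovers time, source, operation and data from every line.
PAT = re.compile(r"\[(\d+)\]\s+\[(\S+)\]\s+(PUSH|POP)\s+data=0x([0-9a-f]+)")

stimuli   = [m.group(4) for m in PAT.finditer(log) if m.group(3) == "PUSH"]
responses = [m.group(4) for m in PAT.finditer(log) if m.group(3) == "POP"]

# Comparing the two streams pinpoints the failing transaction.
mismatches = [(s, r) for s, r in zip(stimuli, responses) if s != r]
print(mismatches)  # → [('3c', '3d')]
```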
23. | 23
• Intersection among features
• Introduction of new features can cause a domino effect of failures on previously passing ones
• Protect what has been working by running a set of critical tests before every update
• Periodically run all tests to maintain the health of the environment
• Random tests are run with seeds
Regressions
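Running random tests "with seeds" is what makes regression failures reproducible: the seed fully determines the stimulus. A minimal Python sketch of the idea, where the test is a stand-in for a real simulation (names are illustrative):

```python
import random

# A stand-in for one random test: the seed fully determines the stimulus,
# so any failing run can be reproduced exactly by re-using its seed.
def run_random_test(seed):
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(4)]  # the generated stimulus

# A mini regression: the same test, many seeds.
results = {seed: run_random_test(seed) for seed in range(5)}

# Reproducibility: re-running a failing seed regenerates identical stimulus.
assert run_random_test(3) == results[3]
print("regression of", len(results), "seeds is reproducible")
```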
24. | 24
• Verification never really completes
• Minimize risk using metrics:
  • 100% code coverage
  • 100% functional coverage
  • All directed tests passing
  • Randoms having zero failures, or an acceptable failure rate, over a few weeks
  • No critical bugs found over the last few weeks
Sign Off
26. | 26
• Action begins…
• Let's demonstrate what we have learnt by attacking the Three Musketeers described below
• Each of these will take you through the complete flow and also focus on clarifying one aspect learnt in the previous slides
  1. Counters - 2-bit and 256-bit
     • Verification strategy
  2. FIFO verification
     • Test cases and assertions
  3. UART verification
     • Test bench architecture
The 3 Musketeers
27. | 27
Test plan
▪ Make sure that on reset assertion the count resets to 0
▪ Counter rolls back to 0 after max
▪ Counter counts through all counts
Verification strategy
▪ Do we need random?
▪ Use directed tests
Test bench
▪ Clock and reset generator
▪ 2 directed tests
Counters: 2-Bit Counter Verification Strategy
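As a sketch of this strategy, the two directed tests can be mimicked in Python against a stand-in counter model (the class and names are hypothetical; a real test would drive RTL from a SystemVerilog test bench):

```python
# A stand-in 2-bit counter model playing the role of the DUT.
class Counter2Bit:
    def __init__(self):
        self.count = 0
    def tick(self, rst=False):
        # Synchronous reset dominates; otherwise count modulo 4.
        self.count = 0 if rst else (self.count + 1) % 4

dut = Counter2Bit()

# Directed test 1: counter walks through all counts and rolls over to 0.
seen = []
for _ in range(5):
    dut.tick()
    seen.append(dut.count)
assert seen == [1, 2, 3, 0, 1]

# Directed test 2: reset assertion forces the count back to 0.
dut.tick(rst=True)
assert dut.count == 0
print("2-bit counter directed tests passed")
```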
28. | 28
Test plan:
▪ Make sure that on reset assertion the count resets to 0
▪ Counter rolls back to 0 after max
▪ Counter counts through all counts
Verification strategy:
▪ Do we need random?
▪ Sequentially counting through all 2**256 ≈ 1e77 counts would take almost forever
Counters: 256-Bit Counter Verification Strategy (1/3)
29. | 29
Bring up
▪ Start off with the pre-load count as 0
▪ Disable the intermediate reset generation
▪ Make sure the counter can count up to a random count after the initial reset
Enabling random
▪ Enable the random pre-load count
▪ Enable intermediate reset generation
▪ Complete the directed negative test
Closure
▪ Code and functional coverage defined above are met
▪ 50 seeded random regressions are clean over a couple of weeks
Counters: 256-Bit Counter Verification Strategy (2/3)
30. | 30
• Start from a random value and count up to another random value
• Divide the count range into 4 and make sure each range covers some counts
• All the bits should be toggling
• Count from max to 0
• Reset during a random count
• Bias the randomization for the pre-load count
• How do you handle the condition of reset and enable asserted together?
Counters: 256-Bit Counter Verification Strategy (3/3)
31. | 31
• Identify variables that need randomization:
  - Random pre-load count
  - Random count-up value to end the test
  - Control to enable/disable intermediate reset generation
  - Random count at which reset needs to be generated
  ▫ All random stimulus should have a control to make it directed if needed
• Clock generator
• Counter model with load and reset control - this will act as a scoreboard
• The model's output will be used for checking - comparing at every clock ensures that bugs are caught immediately
Counters: 256-Bit Counter Test Bench
32. | 32
count <= count + 1
• Models are behavioral code
• Simpler to write
• Smaller than the DUT
• Quicker to write
• Less code means fewer bugs!
Counters: 256-Bit Counter DUT vs Model
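The model-as-scoreboard idea can be sketched in Python: a behavioral counter model with load and reset is compared against the "DUT" every clock, so a mismatch is flagged the cycle it occurs. The DUT is stubbed by a second copy of the model here, and the width is 8 bits purely to keep the example small (the deck's example is 256 bits):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

# Behavioral counter model with load and reset control (the scoreboard).
class CounterModel:
    def __init__(self):
        self.count = 0
    def clock(self, rst=False, load=None):
        if rst:
            self.count = 0
        elif load is not None:
            self.count = load & MASK
        else:
            self.count = (self.count + 1) & MASK

dut, model = CounterModel(), CounterModel()  # DUT stubbed by a second model

# Drive the same stimulus into both; compare every clock.
for cycle, (rst, load) in enumerate([(True, None), (False, 0xF0),
                                     (False, None), (False, None),
                                     (True, None)]):
    dut.clock(rst, load)
    model.clock(rst, load)
    # Per-clock compare: a bug is caught the cycle it occurs.
    assert dut.count == model.count, f"mismatch at cycle {cycle}"
print(model.count)  # → 0 after the final reset
```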
34. | 34
Normal operation
▪ The order of data pushed should match the POP order
▪ After N PUSHes and 0 POPs: FIFO FULL assertion
▪ Num of PUSHes == num of POPs: EMPTY assertion
▪ Stress: PUSH and POP at the same time
▪ Reset intermediately
Negative case
▪ PUSH on FULL and POP on EMPTY
Implementation specific
▪ Internal counter alignments for FULL/EMPTY generation
Always-true conditions [assertions]
▪ FULL and EMPTY should never get asserted together
▪ Number of clocks between the assertion of write on the input and de-assertion of EMPTY on the output - this is also indicative of the performance of the FIFO
Synchronous FIFO - Test Plan
35. | 35
Normal FIFO operation should be covered with the random approach
▪ Random delay on the POP side will lead to FULL conditions on the PUSH side
▪ Random delay on the PUSH and POP sides together will exercise the FULL/EMPTY checks across various read and write counter alignments
▪ Data pushed should be randomized - to cover all toggling bits
Negative and intermediate reset cases can be covered as directed cases
▪ PUSH when full and POP when empty
▪ Reset during the various combinations of full/empty
Synchronous FIFO - Verification Strategy
36. | 36
[Diagram: random and directed tests driving GEN PUSH / GEN POP, which feed DRV PUSH / DRV POP and the scoreboard around the DUT]
Synchronous FIFO - Test Bench (1/4)
37. | 37
DRV PUSH
▪ Handles the push-side interface of the FIFO
▪ push_data(data) / is_full()
DRV POP
▪ Handles the pop-side interface of the FIFO
▪ pop_data(data) / is_empty()
GEN PUSH
▪ Uses DRV PUSH and generates the specified number of PUSHes
▪ Injects random delay between the PUSHes
GEN POP
▪ Uses DRV POP and generates the specified number of POPs
▪ Injects random delay between the POPs
Synchronous FIFO - Test Bench (2/4)
38. | 38
Scoreboard
▪ A model of the FIFO, implemented in behavioral code
▪ Connected to the test bench environment using callbacks
▪ The drivers notify the scoreboard when a PUSH or POP happens
▪ On PUSH, the scoreboard stores the data pushed - this is the "golden data"
▪ On POP, the scoreboard compares the data popped from the design with the data stored internally and flags any mismatch - this verifies both ordering and data integrity
▪ The scoreboard should have the capability to be disabled - this will be used during some of the directed tests for negative cases and resets
Synchronous FIFO - Test Bench (3/4)
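A minimal Python sketch of this scoreboard (the class, callback and field names are illustrative): the PUSH callback stores golden data, the POP callback compares against it in order, and a disable switch supports the directed negative and reset tests:

```python
from collections import deque

class FifoScoreboard:
    def __init__(self):
        self.golden = deque()   # golden data, in push order
        self.enabled = True     # disabled for negative/reset directed tests
        self.errors = 0
    def on_push(self, data):    # callback from the push driver
        if self.enabled:
            self.golden.append(data)
    def on_pop(self, data):     # callback from the pop driver
        # Compare in order: verifies both ordering and data integrity.
        if self.enabled and data != self.golden.popleft():
            self.errors += 1

sb = FifoScoreboard()
for d in [0x11, 0x22, 0x33]:
    sb.on_push(d)
for d in [0x11, 0x22, 0x99]:    # last pop corrupted on purpose
    sb.on_pop(d)
print(sb.errors)  # → 1
```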
39. | 39
Random tests
▪ The simplest level just sets up counts of FIFO operations
▪ Set the delays between PUSH/POP
▪ With and without random resets enabled
Directed tests
▪ Negative cases directly using the APIs of DRV PUSH and DRV POP
End of test
▪ Random tests check that the specified count completed
▪ Make sure that at the end the FIFO is in its default state
Synchronous FIFO - Test Bench (4/4)
40. | 40
• Confirm clocks and resets are running and there is no 'X' on any input
• Start off with a simple directed test that disables the scoreboard and does a couple of pushes and pops
• In the directed test, enable the scoreboard and get the basic case fully working
Synchronous FIFO - Bring Up
41. | 41
Full feature enabling
▪ Start off the normal-operation random test with smaller counts of operations
▪ Gradually increase the count of operations to a reasonable number
▪ Do multiple seeded regressions of the random-operation command line
▪ Write the various negative-condition and corner-case directed tests
Closure
▪ Code and functional coverage met
▪ 50 seeded random regressions are clean over a couple of weeks
Synchronous FIFO - Closure
42. | 42
[Diagram: tests driving the REG API, INTR API and BFM API; TX/RX generators and the scoreboard between the HOST BUS BFM and the UART BFM around the DUT]
UART Test Bench (1/2)
43. | 43
UART BFM
▪ A behavioral model of the UART, modeling only the bus operation and the Tx and Rx pins
▪ Does not need to implement all the internal registers of the UART, but it does need to provide the necessary controls
▪ Does not need to implement the interrupts
▪ set_configuration() / tx_bfm_data() / rx_bfm_data()
▪ Provides support for Tx error injection
▪ Does framing checks on Rx
UART Test Bench (2/2)
44. | 44
REG API
▪ All the register-access details are implemented here
▪ DUT initialization is done here
▪ Data transmitted/received through the UART involves a series of steps; that sequence is implemented here
Interrupt API
▪ Implements the servicing of the UART interrupts
▪ Can be for popping received data or for the error conditions
UART Functional Layer (1/2)
45. | 45
BFM API
▪ Sets up the BFM configuration matching the DUT configuration
▪ Pops the data received by the BFM
▪ Provides hooks for the TB to inject errors
TX GEN / RX GEN
▪ Generates the specified number of data packets
▪ Injects random delay between the packets
UART Functional Layer (2/2)
46. | 46
• Extracts the DUT Tx data transmitted from the HOST BFM and compares it, in order, with the data received by the UART BFM on its Rx
• Extracts the UART BFM Tx data transmitted from the UART BFM and compares it, in order, with the data received by the HOST BFM on the DUT Rx
UART Scoreboard
48. | 48
• Focus on optimum verification to achieve quality
• The design evolves - make sure the verification plan evolves with it
• Get plugged into the various sources of information to track changes
• Use emails or, better, bug reports
• Document and archive important discussions
• Avoid bypasses and shortcuts; if you really need to take one, always clearly comment/document it
Do’s and Don’ts
50. | 50
• The constrained random approach has been found to be very effective in functional verification
• SystemVerilog has OOP support to enhance reusability, plus special constructs for randomization and coverage
State of the Art - SystemVerilog
51. | 51
• Common good practices of constrained random verification environment development have been standardized as methodologies
• There are multiple industry-standard verification methodologies, such as VMM, OVM and UVM
• These methodologies also provide base class libraries and utilities that are very useful across the entire spectrum of verification activities
State of the Art - Verification Methodologies
52. | 52
End of Document
Visit us on:
http://www.arrowdevices.com