Coverage dallas june20-2006


  • I liked this quote. It emphasizes that we build these fancy verification environments that do so much for us automatically. We get a slick compute farm set up and run simulations 24x7. And then we “run like mad.” But the question remains….
  • What have I really done? Did I get the results I wanted? Are my tests really doing what I think they are doing? Are they still doing what they did last week? Am I done? Emphasize that coverage is a tool to HELP answer the question “am I done verifying the design?” It does not provide the full answer.
  • What coverage does provide is excellent feedback on what you have actually accomplished: test effectiveness and redundancy. Be cautious about doing too much with random test grading.
  • Coverage space: different aspects could be unioned into a single coverage space definition, but they are really disjoint. Coverage space: a space could be defined to use multiple technologies, but that is likely to cause confusion during data generation and analysis.
  • What is the distinction between a coverage model and coverage tools? The coverage model is the description of WHAT you want to cover. It ranges from high-level detail, such as that from the architecture definition and project specification documents, to low-level details of the design implementation. Coverage tools provide the HOW: how do I work with a coverage model? We can get ad-hoc coverage data such as bug rates and the number of simulation cycles run. More traditional coverage tools include code coverage tools. More recently, embedded monitors in the form of FCPs and assertions are providing coverage feedback. Moving forward, the use of transaction-level modeling in the testbench is enabling transaction coverage tools to help use the coverage model.
  • Code coverage monitors the design as a whole without any specific knowledge of its operation. We have typically used only line coverage, and we look for 100% line coverage. Code coverage provides an opportunistic view of coverage: a line could be marked as covered even though it contains an error, if the error never propagated to a checker. This increases the risk of misinterpreting coverage data. We have had issues with other sources of code coverage. Path coverage is analyzed without an understanding of the relationships among the variables controlling the paths. For example, a module with if(a) at the top and another if(a) at the bottom, with sequential code in the middle, would be reported as four paths when in reality there are only two; the other two paths are unreachable but not marked as such. This led to many, many false paths, on the order of tens of thousands. State coverage: issues with state machine extraction, plus extra work to add pragmas to get extraction to happen properly. Expression coverage: time consuming to analyze, with tons of data to migrate through.
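The false-path problem described above can be sketched in a few lines. This is an illustrative model, not any vendor tool's algorithm: two branches controlled by the same unchanging signal yield four syntactic paths but only two feasible ones.

```python
from itertools import product

# Two branches both guarded by the same signal 'a', as in the
# if(a) ... if(a) module described above. Each entry names the
# condition controlling that branch.
branches = ["a", "a"]

# A path-coverage tool that ignores variable relationships counts
# every combination of branch outcomes as a distinct path.
syntactic_paths = list(product([True, False], repeat=len(branches)))

def feasible(path):
    # A path is feasible only if every branch testing the same
    # condition takes the same direction.
    seen = {}
    for cond, taken in zip(branches, path):
        if cond in seen and seen[cond] != taken:
            return False
        seen[cond] = taken
    return True

feasible_paths = [p for p in syntactic_paths if feasible(p)]
print(len(syntactic_paths), len(feasible_paths))  # 4 syntactic, only 2 feasible
```

With more branches over the same signal the gap grows exponentially, which is how false paths reach the tens of thousands mentioned above.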
  • Structural (code) coverage is tied to the structure of the design/RTL (implementation coverage), as opposed to purely functional (architectural) coverage. Implementation is the focus of logic engineers; architecture is the focus of verification engineers. Example: the assertion check will never be evaluated if the precondition is not first seen. Assertion coverage provides feedback on whether you have seen the precondition. In this example, we also need to see both Flush and SMQueStop. If we don’t see both through our verification efforts, we have a coverage hole in which the check has never been evaluated.
  • Structural (code) coverage is tied to the structure of the design/RTL (implementation coverage), as opposed to purely functional (architectural) coverage. Implementation is the focus of logic engineers; architecture is the focus of verification engineers.
  • With TLM verification approaches, simulation databases can record transactions. Other types of transaction coverage can include having the verification tool watch for a sequence of events in the Verilog domain. When the tool sees the sequence of events, it can log a “packet” with significant information. In the simplest case this is the same as temporal functional coverage; normally these cases record significant information related to the sequence of events.
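A minimal sketch of the sequence-watching idea above: when a watched sequence of events completes, log a “packet” with the significant information. The class, event names, and recorded fields are hypothetical; a real tool records into a vendor database rather than a Python list.

```python
class SequenceWatcher:
    """Watch for an ordered sequence of events; log a packet on completion."""

    def __init__(self, expected_events):
        self.expected = list(expected_events)
        self.progress = 0
        self.log = []  # recorded transaction "packets"

    def observe(self, event, **info):
        if event == self.expected[self.progress]:
            self.progress += 1
            if self.progress == len(self.expected):
                # Sequence complete: record a packet with the significant
                # information observed alongside the final event.
                self.log.append({"sequence": tuple(self.expected), **info})
                self.progress = 0
        else:
            # Restart, allowing the offending event to begin a new match.
            self.progress = 1 if event == self.expected[0] else 0

# Hypothetical bus events: request, grant, then data.
watcher = SequenceWatcher(["req", "grant", "data"])
for cycle, event in enumerate(["req", "idle", "req", "grant", "data"]):
    watcher.observe(event, cycle=cycle)
print(watcher.log)  # one packet, completed at cycle 4
```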
  • EDA companies are beginning to provide tools to help with coverage. Previously we each had to create our own custom solutions. There are still a number of things not provided by the EDA industry, but a significant amount of infrastructure is now available. For instance, there is broad support for PSL and SVA. The results of assertions and FCPs are automatically recorded into a database during a simulation. Most vendors provide a GUI that allows you to look at the coverage data graphically, including effective use of color coding, and most provide a batch-mode interface from which text reports can be generated. Debug tools exist for analyzing assertions and FCPs written in PSL or SVA; often you can replay a simulation through a database file after making changes to the PSL or SVA code. Moving forward, some vendors are beginning to build their tool suites around encouraging a coverage-driven methodology. Tools will provide links between the test plan, specifications, and simulation results, using coverage data to tie them together. Others are creating unified coverage databases that link multiple sources of coverage (code, assertion, FCP, transaction, etc.) in one database, accessed through a common tool interface. What is still lacking are more advanced capabilities around supporting volume simulations, efficient merging of coverage databases from multiple locations and environments, and generating customized views of the data. Tools to use: assertion/FCP/transaction coverage databases, code coverage, and initial reporting from vendors. You will need some custom tools to get all the views you will need.
  • Planning: choose the form of specification and plan the coverage model. Start with the functional spec and test plan; identify the areas that need to be covered and how you will cover them. Execution: implement the coverage models (create FCPs, record transactions) and collect the data. Consumption: analyze the data and react to it.
  • The content of the coverage model includes not only where coverage is needed, but also which coverage tool will be used to get that coverage. Understand what coverage types will be used and what information you will obtain from each type; some types may not provide meaningful data on your project. Plan time for validating correct behavior of vendor/custom tools for your use. The tools that will be needed are not always available from the vendor; you will need to understand the capabilities of the vendor tools and plan for development costs to supplement them with custom tools. Coverage infrastructure consists not only of tools (vendor and custom), but also a compute infrastructure with enough horsepower to support the cost of the coverage tools, disk space, and processes and tools to aggregate coverage data from all volume simulations. Coverage execution: there is a cost to coverage tools running during a simulation, so plans should be made for what percentage of sims will run with coverage enabled. We run with FCPs and transaction coverage always enabled; code coverage is run once a week. Maintenance: the infrastructure will require time to maintain, enhancements will be identified over the course of the project, and new vendor tool releases must be integrated. Coverage goals need to be defined up front; identify metrics that will be used to help track coverage progress. Focus on the quality and completeness of the coverage model through reviews, just as teams normally do with the design implementation.
  • What does each engineer create? Logic engineers put in coverage of interest at a low level, but must force themselves to include a big-picture view. DV engineers put in coverage from a black-box viewpoint and identify additional coverage required for test plans. What to cover? Planning, planning, planning. When to add? Adding with the RTL applies to FCPs/assertions and transactions; for code coverage, you might need to add pragmas (unreachable code, state machines, etc.). Where to look for ideas? See the books listed at the end; they have excellent examples. Why? I don’t think it will take long for open-minded engineers to understand the value of coverage methodologies; it is more a matter of understanding how to get started.
  • These are excellent questions that should be asked to help add FCPs to a design.
  • Duplication: there is a cost to coverage, and you don’t want to incur the cost of capturing duplicate coverage data. For example, don’t add FCPs that provide the same coverage already obtained from code coverage. Don’t read too much into the grading of tests; understand what the focus of different tests is. Just because the coverage for a limited number of runs shows duplicate coverage between two tests, one of the tests could be going after a difficult-to-reach corner case that requires many simulations to actually hit.
  • Historically, a rule of thumb has been that the significant majority of simulation time should be spent inside the RTL simulator, not the verification tool. The advanced verification techniques now in use have a cost, so the rule of thumb is shifting toward spending more time in the verification tool. The payback is that the computer is doing more of the verification work, and the verification engineer is being utilized for their knowledge.
  • How do I use the coverage tools to describe the coverage model? For code coverage, the RTL is the coverage model description. For assertions and FCPs, an assertion language or library such as PSL, SVA, or OVL is used. For transactions, there is less infrastructure in place: most simulators provide hooks to “record” transactions into a database, and queries must be written to extract coverage data from that database. Coverage can also be embedded into the monitors and checkers used in the verification environment.
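A hedged sketch of the query step described above: a plain list of dicts stands in for the vendor transaction database, and a “query” is just a filter plus a coverage check. The field names and transaction types are illustrative.

```python
# Stand-in for recorded transactions; real data lives in a vendor database.
transactions = [
    {"type": "read",  "src": 0, "dst": 1},
    {"type": "write", "src": 0, "dst": 2},
    {"type": "read",  "src": 1, "dst": 2},
]

def type_coverage(txns, required_types):
    """Report which required transaction types were seen at least once."""
    seen = {t["type"] for t in txns}
    return {rt: rt in seen for rt in required_types}

result = type_coverage(transactions, ["read", "write", "flush"])
print(result)  # 'flush' never seen: a transaction-coverage hole
```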
  • Coverage data needs to be aggregated across all simulation licenses running 24x7. Data aggregation: set up how data will be aggregated (by simulation environment, by block or chip, or globally) and create a central location to store the aggregated coverage data. Large amounts of data will be generated and must be managed from an infrastructure standpoint. How much historic coverage data should be kept? When should coverage aggregation be reset: at model release boundaries, or calendar boundaries?
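Aggregation across runs can be as simple as summing per-point hit counts. A sketch assuming each run's database reduces to a coverage-point-name to hit-count map (the point names are made up):

```python
from collections import Counter

def merge_runs(run_databases):
    """Merge per-run coverage hit counts into one aggregate database."""
    total = Counter()
    for db in run_databases:
        total.update(db)  # sums counts per coverage-point name
    return total

run_a = {"fifo_full": 3, "retry_seen": 0}
run_b = {"fifo_full": 1, "flush_during_stop": 2}
aggregate = merge_runs([run_a, run_b])
print(dict(aggregate))  # fifo_full: 4; retry_seen still un-hit at 0
```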
  • It is very easy to generate many gigabytes worth of coverage data. Information is the goal, not just data. The trick is to have good methods in place to help look at this volume of data. You must find ways to organize it. As part of your coverage planning determine how you will want to view the data.
  • How can I most effectively look at my coverage data? Without a good plan here, you will feel like you are standing at the base of a waterfall. The easiest place to start is a report of only un-hit coverage, but even looking just at un-hit coverage can be challenging. You must break coverage into buckets: by team or block, or by functionality (errors, debug, mode, etc.). As program execution continues, you might want to see just the coverage for features that should be implemented for the current milestone. Specific modules might be instantiated multiple times: common blocks used across multiple chips, or multiple instances of an interface (e.g. two memory controllers). Do you want to see coverage for the module only (merging coverage from the multiple instances), or do you need to know how well you are exercising each instance (each memory controller)? With all the coverage data being generated, what combinations of the data are interesting? Merge data from different simulation environments? Merge data over time: what new coverage am I hitting this week that I didn’t hit last week, and what coverage was I hitting last week that I am not hitting this week? Merge data over multiple model releases? Sometimes you need to cross different views into a new combined view: you might be interested in merged module-instance coverage, but some module instances will need to have coverage filtered due to constrained functionality.
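The “new this week / lost since last week” comparison above reduces to a set difference over the hit points of two aggregation windows. A small sketch with made-up point names:

```python
def coverage_diff(last_week, this_week):
    """Compare two aggregation windows of per-point hit counts."""
    hit_last = {p for p, n in last_week.items() if n > 0}
    hit_now = {p for p, n in this_week.items() if n > 0}
    return {
        "newly_hit": hit_now - hit_last,  # progress since last week
        "lost": hit_last - hit_now,       # stimulus may have regressed
    }

diff = coverage_diff(
    {"arb_starve": 5, "q_full": 0, "ecc_err": 2},
    {"arb_starve": 7, "q_full": 1, "ecc_err": 0},
)
print(diff)  # q_full newly hit; ecc_err no longer hit
```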
  • Views: verification environments, useful for identifying tests that are more or less effective and the effectiveness of each environment. We also use views to help determine TR readiness, focusing on major blocks (including common blocks) and complete chips, with a combination of instance-specific and merged module coverage at the block level. Filtering: we utilize a custom filtering infrastructure to provide enhanced views; for example, some common functionality may be constrained in some instances, resulting in FCPs that are unreachable. Aggregation: we aggregate across model releases using a sliding window, always aggregating over the last N releases. Historic data is kept for a limited period of time and then deleted. Automatically generated metrics (% coverage) are provided for each team and for full chips. Metrics are generated and tracked throughout the project; however, we are not driven by the metrics. We use them as a tool, and our management does not make rash decisions based solely on coverage metrics.
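The sliding-window aggregation described above can be modeled with a bounded deque: only the last N releases contribute to the reported view. A sketch under the same hypothetical name-to-hit-count assumption as before:

```python
from collections import Counter, deque

class SlidingCoverage:
    """Aggregate coverage over only the last N model releases."""

    def __init__(self, window):
        self.releases = deque(maxlen=window)  # oldest release falls off

    def add_release(self, db):
        self.releases.append(db)

    def view(self):
        merged = Counter()
        for db in self.releases:
            merged.update(db)
        return merged

agg = SlidingCoverage(window=2)
agg.add_release({"p1": 1})
agg.add_release({"p2": 1})
agg.add_release({"p3": 1})  # pushes the first release out of the window
print(dict(agg.view()))  # p1 no longer counted; p2 and p3 are
```

Historic data falling out of the window matches the deck's policy of keeping older coverage for a limited time and then deleting it.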
  • Really understand the un-hit coverage, and work to find ways to fill those coverage holes. That could mean tweaking knob controls or writing directed tests to target specific functionality. Look for difficult-to-hit coverage, and determine whether more focused tests are needed to hit those areas of the coverage model more often. Metrics: you need to understand the completeness of the design. Early on, track metrics to get some trends, but be cautious about hard-and-fast project-wide coverage goals. Factors affecting coverage include completeness of the design, completeness of the coverage space implementation, and the amount of testing/simulation time focused on different functionality. We lean toward having each team set individual goals based on their specific execution path.
  • For the SX1000 and SX2000 coverage models, we utilized assertion and functional coverage. We are currently expanding our coverage infrastructure into transaction coverage utilizing a custom backend system to extract coverage from vendor transaction databases.
  • Here are some books I have read and would recommend. Other books on SystemVerilog assertions are beginning to come out. You can also contact me or Rob Porter if you have any specific questions on how you can most effectively utilize coverage on your projects.

    1. What’s with all this talk about coverage? David Lacey and Rob Porter, Hewlett Packard Company, June 20, 2006
    2. “You have this awesome generation that pseudo-randomly creates all sorts of good scenarios. You also have created equally awesome scoreboard and temporal checker infrastructure that will catch all the bugs. Next, you run it like mad, with all sorts of seeds to hit as much of the verification space as possible.” (Peet James, Verification Plans)
    3. Given all that, what really happened?
      • Where did all those transactions go?
      • Which lines of RTL were exercised?
      • Which sections of the specification were tested?
      • Which corner cases of my implementation were hit?
      • What was the distribution of transaction types issued?
      • Do I need to create new tests?
      • Can I stop running simulations?
      Coverage helps provide the answers! Coverage is a piece of the puzzle, not the final answer.
    4. Coverage provides…
      • An understanding of which portions of the design have been exercised
      • Increased observability of simulation behavior
      • Feedback on which tests are effective
      • Feedback to direct future verification efforts
    5. Agenda
      • Coverage terms and tools
      • How to get started
      • Coverage planning
      • Coverage execution
      • Coverage analysis
      • Coverage results
    6. Coverage terms and tools
    7. Coverage terms
      • Coverage strategy: the approach defined to utilize coverage technology; generate, gather, and analyze coverage data
      • Coverage model: a collection of coverage spaces; the definition of one or more coverage spaces of interest
      • Coverage space: a set of coverage points associated with a single aspect of the design and a single coverage technology
      • Coverage technology: a specific mechanism such as code, functional, assertion, or transaction coverage
      • Coverage point: a specific named aspect of the design behavior (an FCP, line of code, state transition, transaction, or sequence of transactions)
      • Coverage data: the raw data collected from all coverage points and coverage spaces
      • Coverage results: the interpretation of coverage data in the context of the coverage model
    8. Coverage model vs. coverage tools
      • WHAT: the coverage model
      • HOW: the coverage tools
      [Diagram: the coverage model spans high-level detail (architecture, specification) down to low-level design detail; coverage tools include bug rates, sim cycles, code coverage, functional coverage, assertion coverage, and transaction coverage.]
    9. Code coverage
      • Line/block, branch, path, expression, state
      • Measures the controllability aspect of our stimulus (i.e. which lines of code have we exercised)
      • Does not connect us to the actual functionality of the chip: no insight into functional correctness
      • Takes a blind approach to coverage (low observability): activating an erroneous statement does not mean the error will propagate to an observable point during the course of a simulation
      • Generates a lot of data, making it difficult to interpret what is significant and what is not
    10. Assertion coverage
      • Assertions monitor and report undesirable behavior
      • Assertion coverage ensures that the preconditions of an assertion check have been met

      // SVA: if asserting stop or flush, no new request
      assert property (@(posedge clk) disable iff (!rst_n)
        ((Flush | SMQueStop) |-> !SMQueNew))
        else $error("Illegal behavior");
      // (Flush | SMQueStop) is the precondition; !SMQueNew is the check
    11. Functional coverage
      • Similar in nature to assertions: assertions monitor and report undesirable behavior, while functional coverage monitors and reports desirable behavior
      • Functional coverage targets specific design details, corner cases of interest to engineers, and architectural features
    12. Transaction coverage
      • A transaction is the logging of any data structure, such as a packet on a bus; it does not have to be a system packet
      • Example transaction coverage points: all transaction types were seen on each interface; transactions with specific data (source, destination, address, address ranges) were seen
      • Sequences of transactions: have the recording monitor watch for the sequence, or implement advanced queries to look for it
      • Two parts to transaction coverage: record the right data, and write correct queries
    13. EDA tools
      • Code, FCPs, and transactions are recorded into vendor-specific databases; tools are provided to look at coverage data, and report engines provide text reports
      • Debug tools for FCPs and assertions
      • Tools to encourage coverage-driven methodologies
      • Coverage is still a young technology: tools are still expanding their set of capabilities, with development areas such as data aggregation and multiple-view extraction
    14. How do I get started with this coverage stuff?
    15. Coverage roadmap – getting started
      [Diagram: 1. Choose specification form (spec, design) → 2. Identify coverage model → 3. Implement coverage model (PSL/SVA/OVL; code/assertion/FCP/txn coverage tools) → 4. Collect data → 5. Analyze data → 6. React to data (adjust stimulus). Steps 1–2 are planning, 3–4 are execution, 5–6 are consumption.]
    16. Coverage planning (roadmap divider slide)
    17. Coverage planning
      • Identify the content of the coverage model and the coverage types to be used
      • Identify required tools
      • Coverage infrastructure
      • Coverage execution
      • Maintenance and tool enhancements
      • Define coverage goals and metrics
      • Coverage reviews
      • Start looking at coverage up front! Coverage results are only as good as the coverage model
    18. Who, what, when, where, why
      • Who creates the coverage model, analyzes the data, and owns coverage? Logic and DV engineers
      • What to cover in the model? Concern areas: spec, design, assertions, test plan
      • When to add coverage points? With the RTL. When to analyze coverage data? Continuously
      • Where to look for ideas?
      • Why mess with coverage? Because…
    19. For FCPs, ask yourself…
      • What should be covered?
      • Where is the best place to put FCPs?
      • When to look for the condition?
      • Why have the coverage point?
    20. Watch out for…
      • Too much data: you need information, not data, plus supporting tools to get the correct views of the data
      • Ineffective use of coverage: FCPs that fire every clock cycle, or duplication of coverage with different tools
      • Reading too much into grading tests: random tests produce different results with different seeds
    21. Cost of coverage
      • Plan for the costs of using coverage: get a solid infrastructure set up, and plan for slower simulations
      • Some level of cost is acceptable: you are getting value back for the investment
      • Be smart: architect the coverage plan up front to ensure success
    22. Coverage execution (roadmap divider slide)
    23. Describing the coverage model
      • Code coverage: RTL code, pragmas
      • Assertion and functional coverage: use an assertion language or library (PSL, SVA, OVL)
      • Transaction coverage: use hooks into transaction-level modeling

      // PSL cover example
      default clock = (posedge clk);
      sequence qFullCondition = {reset_n && q_full};
      cover qFullCondition;

      // SVA cover example
      always @(posedge clk) begin
        if (reset_n)
          myQfull: cover (q_full) $info("queue was full");
      end
    24. Data collection
      • Collect data across volume simulation
      • Aggregate multiple databases
      • Location of the coverage data repository
      • Manage the volume of data
    25. Coverage analysis (roadmap divider slide)
    26. The analysis
      • It is easy to generate a ton of data; you want information, not data
      • You need to organize the data: you can’t look at it all at once, so determine the views needed
    27. Views of coverage data
      • Un-hit coverage
      • Functionality groups
      • Block, chip, system
      • Current milestone functionality
      • Instance or module specific
      • Across environments, time, model releases
      • Cross views
    28. Our use of coverage
      • Aggregate data for each verification environment
      • Views for verification effectiveness: per verification environment
      • Views for TR readiness: major sub-blocks and chip
      • Filtering infrastructure: milestone-specific functionality, unreachable points
      • Aggregate coverage data across windows of time
      • Metrics provided for each team and for full chips
    29. Analysis is done… now what?
      • Understand all un-hit coverage
      • Fill coverage holes
      • Look for hard-to-hit coverage
      • Track coverage metrics
      Don’t play games with metrics just to get coverage goals met. Really understand the results.
    30. Coverage results (roadmap divider slide)
    31. Success stories
      • They exist!
      • Check them out in Assertion-Based Design
    32. HP coverage data
      • SX1000 chipset: 6,500 FCPs
      • SX2000 chipset: 25,000 FCPs
      • Current efforts: 135,000 assertions, 650,000 FCPs, and 56,000 transaction points
      • Coverage goals: 100% coverage with understood exceptions; team-defined goals per milestone
    33. Resources
      • J. Bergeron, Writing Testbenches: Functional Verification of HDL Models, Second Edition, Kluwer Academic Publishers, 2003.
      • H. Foster, A. Krolnik, D. Lacey, Assertion-Based Design, Second Edition, Kluwer Academic Publishers, 2004.
      • P. James, Verification Plans: The Five-Day Verification Strategy for Modern Hardware Verification Languages, Kluwer Academic Publishers, 2004.
      • A. Piziali, Functional Verification Coverage Measurement and Analysis, Kluwer Academic Publishers, 2004.
      • B. Cohen, Using PSL/Sugar with Verilog and VHDL: A Guide to Property Specification Language for ABV, VhdlCohen Publishing, 2003.
      • David Lacey, Hewlett Packard, [email_address]
      • Rob Porter, Hewlett Packard, [email_address]