Staffan Berg – Track F: Using Algorithmic Test Generation to Improve Functional Coverage in Existing Verification Environments

Speaker notes
  • Back to reality, here’s a real example relevant to the audience – the rules for an AHB master. We also have APB, and AXI and OCP are in development. (30 secs)
  • The customer’s design was a multi-channel bus bridge based on ARM’s AXI architecture. The design receives instructions via a proprietary interface through multiple channels, and performs arbitration, splitting, and aggregation as required, finally generating standard bus transactions through its AXI interface. The customer’s existing functional verification environment used VCS to simulate a constrained random testbench written in SystemVerilog. The testbench itself was pretty straightforward, generating random values for 20 variables, with variable interdependencies described in several algebraic constraints (in SystemVerilog). But you’ll see on the next slide that it wasn’t as simple as it seemed.
  • It turns out that even after constraining the variables to their legal interdependencies, the total functional space is much too large to cover. The 20 variable labels are listed in the grey column of this table, and their legal ranges in the white column. Trying to cross them all would take forever. The customer realized that not all combinations were important anyway, so the variable ranges were reduced to include only the values that were considered important for verification purposes. Notice the green column, where several variable ranges were considerably reduced. However, even after these reductions the total coverage domain was still too large to cross completely with constrained random testing, so the customer further reduced the verification goals to a practical level. The next slide describes each verification goal in more detail.
  • In Verification Goal #1, a constrained random simulation will be run until each value of each variable has been covered. No crossing will be attempted yet. The goal is simply to cover each of the 1360 values – the sum of the bins across all cover-points. Notice that the largest cover-point contains 776 bins; that will become important later. In Verification Goal #2, a constrained random simulation will be run until a cross of the bytes and addr cover-points is achieved. The other cover-points will not be crossed or measured; only the bytes and addr variables will be measured. When crossing the 776 bytes bins with the 255 addr bins, minus a few undesired testcase combinations, a total of 196,608 cross cover-point bins will need to be hit. Let’s look at the next slide to see the results.
  • For Verification Goal #1, coverage of the 1360 coverpoint bins was achieved after 475,500 randomly generated stimulus patterns. Achieving coverage of the ‘bytes’ coverpoint took the longest, both because it contained the most bins and because it described corner-case coverage goals that were difficult to hit with randomly generated stimulus. Only 79% of Verification Goal #2 was achieved, even after randomly generating 26.3 million stimulus patterns.
  • inFact was added to the verification environment with the goal of accelerating achievement of Verification Goal #1 and achieving Verification Goal #2. Very few changes were required to the environment. Specifically, no changes were needed to the design, and the testbench architecture and language did not change. An inFact graph was added to the testbench to generate stimulus, reusing the existing verification IP. Bottom line: very few changes to the existing environment.
  • The process of creating an inFact graph and adding inFact to the verification environment was simple. First, the graph was created. This was done by describing each variable and its domain using inFact rules, then describing constraints and variable relationships. The coverage goals for Verification Goals #1 and #2 were annotated on the graph (shown by the shaded region). Adding the inFact graph to the testbench was simple: existing calls to the SystemVerilog randomize() function were replaced with a call to the inFact graph’s fill() function. The process took a total of one day, requiring around 100 lines of rule code and less than 10 lines of SystemVerilog code.
  • Let’s look at the results of simulating with inFact. On Verification Goal #1, inFact achieved coverage of the 1360 coverpoint bins in 776 stimulus patterns. Notice that 776 is the number of bins in the largest coverpoint; inFact was able to hit all 1360 coverpoint bins in 776 stimulus patterns because it can target multiple coverage goals with each stimulus item. On Verification Goal #1, inFact achieved coverage 612x faster than constrained random stimulus generation. On Verification Goal #2, inFact achieved coverage of the 196,608 cross-coverpoint bins in exactly 196,608 stimulus items. Since constrained-random stimulus generation did not achieve coverage closure, it’s a bit more difficult to find an appropriate basis on which to compare results. One way to look at the results is that inFact achieved 100% coverage 170x faster than constrained random achieved 79% coverage. Another way is that by the time inFact had achieved 100% coverage closure, constrained-random generation had achieved only 1.15% coverage. (A quick arithmetic check of these acceleration factors follows.)
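  A rough reconstruction of the quoted acceleration factors from the numbers above (not an additional result), reading the 170x figure as a comparison at equal 79% coverage:

      \frac{475{,}500}{776} \approx 612.8 \quad \text{(Goal \#1, quoted as 612x)}

      \frac{26{,}315{,}000}{0.79 \times 196{,}608} \approx 169 \quad \text{(Goal \#2, quoted as ~170x)}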
  • Transcript

    • 1. Using Algorithmic Test Generation to Improve Functional Coverage in Existing Verification Environments – Staffan Berg, Verification Specialist, Mentor Graphics (Europe)
    • 2. What’s the Problem? Current simulation stimulus generation techniques can’t keep up with design complexity. How can we take advantage of new technology without breaking existing flows and environments?
    • 3. Limitations of C-R Stimulus Generation: to produce N unique vectors, C-R requires N * ln(N) vectors. For many of today’s complex designs, it is impossible to reach coverage goals within the project schedule.
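      The N * ln(N) figure is the standard coupon-collector estimate. A sketch of where it comes from, assuming each of the N target vectors is equally likely on every draw (real constrained-random distributions are usually far less favourable):

        E[\text{draws to hit all } N] = N \sum_{k=1}^{N} \frac{1}{k} = N \cdot H_N \approx N \ln N

      For example, N = 1360 equally likely bins would already need roughly 1360 * ln(1360) ≈ 10,000 draws; skewed bin probabilities (corner cases) push the real number much higher, as the 475,500-testcase result in the case study later in this deck shows.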
    • 4. Current Test Generation Technology
        • Explicitly describe every test procedurally
        • addr <= "0000"; wait for 10 ns; addr <= "0001";
      Directed Tests
        • Implicitly define the stimuli space by declaration of Constraints
        • constraint c1 { addr inside {2, 4, [80:100]}; }
      C-R Tests
        • Explicitly Define the whole space by defining a Grammar through a set of Rules
        • Transaction = (req read | write wait_ack);
      Algorithmic Tests
    • 5. Algorithmic Test Generation Basics
      Rule_graph simple_engine {
        action init;
        action wait_rdy, setup_rd, setup_wr, ack;
        action rw_1, rw_2, rw_4;
        symbol rw_opts, rw_size;
        rw_opts = setup_rd | setup_wr;
        rw_size = rw_1 | rw_2 | rw_4;
        simple_engine = init repeat { wait_rdy rw_opts rw_size ack }
      }
    • 6. Real Example: AMBA AHB Master
    • 7. Key Concepts
      • The Rules are compiled into an NDFSM (non-Deterministic Finite State Machine) Representation
      • Action Functions are written in Verilog, SystemVerilog, VHDL or C++
      • The Rule Graph is then traversed during simulation, and the Action functions are called to produce stimuli
      • Without coverage goals, the traversal will be random
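      A minimal SystemVerilog sketch of what two action functions might look like, using the action names from the simple_engine graph on slide 5. The interface, signal names, and task signatures are hypothetical – the real ones depend on the testbench and on how the tool calls actions:

        // Illustrative only: plain SystemVerilog tasks standing in for graph actions.
        // The traversal engine (not shown) calls one such task per action as it
        // walks the compiled rule graph.
        interface simple_bus_if(input logic clk);
          logic       rd_n;
          logic [3:0] addr;
        endinterface

        task automatic wait_rdy(virtual simple_bus_if bus);
          @(posedge bus.clk);        // placeholder for "wait until the bus is ready"
        endtask

        task automatic setup_rd(virtual simple_bus_if bus, input logic [3:0] a);
          @(posedge bus.clk);
          bus.rd_n <= 1'b0;          // assert the read strobe
          bus.addr <= a;             // drive the address chosen by the traversal
        endtask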
    • 8. The Role of Coverage
      • Stimulus model describes valid inputs
      • Coverage model describes interesting/priority stimulus
      • Directed Tests
          • Accurate but low productivity
          • Difficult to produce enough tests
      • Constrained Random
          • Automation boosts productivity
          • But, difficult to target
          • Redundancy slows coverage closure
          • Requires manual analysis and constraining to close coverage
      • Algorithmic Test Generation
        • Eliminates redundant stimulus
        • Efficiently targets coverage model
        • Produces stimulus outside the coverage model after coverage is achieved
    • 9. Using Coverage to Drive Stimulus Generation: Path Coverage is used to define the coverage goals. A single Path Coverage object can cover all legal paths in a graph, or you could use multiple PC objects to cover specific goals and cross products.
    • 10. ATG and OVM
      • What Is OVM?
      • Open Verification Methodology
      • Joint Development by Cadence and Mentor
      • 7400+ Registered Users
      • Fundamentals of OVM
      • Highly Modular Testbench Components
      • TLM-Based Communication
      • High Degree of Configurability & Re-Use
    • 11. Integration in Existing Testbench Environment: SV OVM
      [Diagram: partial OVM testbench plus rules – meta-actions select values from ranges or sets]
    • 12. Integration in Existing Testbench Environment: SV OVM
      • Declare a sequence item
      • Create a sequence item
      • Assign values
      • Send the item to the sequencer
      • Modify the environment
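      A minimal sketch of these five steps in an OVM sequence, assuming a SystemVerilog/OVM testbench. The item fields are placeholders, and the graph class and its fill() call are illustrative – the real graph component is generated by the tool, and its API is assumed from the slides rather than taken from documentation:

        `include "ovm_macros.svh"
        import ovm_pkg::*;

        // 1. Declare a sequence item (fields are placeholders)
        class my_item extends ovm_sequence_item;
          rand bit [7:0] addr;
          rand bit [9:0] bytes;
          `ovm_object_utils(my_item)
          function new(string name = "my_item"); super.new(name); endfunction
        endclass

        // Placeholder standing in for the generated inFact graph component so the
        // sketch is self-contained; the real class and fill() come from the tool.
        class infact_graph_c;
          task fill(my_item item);
            void'(item.randomize());               // stand-in behavior only
          endtask
        endclass

        class graph_seq extends ovm_sequence #(my_item);
          `ovm_object_utils(graph_seq)
          infact_graph_c graph = new();
          function new(string name = "graph_seq"); super.new(name); endfunction

          virtual task body();
            my_item req;
            req = my_item::type_id::create("req");  // 2. Create a sequence item
            start_item(req);
            graph.fill(req);                        // 3. Assign values (was: req.randomize())
            finish_item(req);                       // 4. Send the item to the sequencer
          endtask
        endclass

        // 5. Modify the environment: instantiate the graph and run graph_seq in
        //    place of the original constrained-random sequence.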
    • 13. Integration in Existing Environment: ‘e’ Testbench
      [Flow diagram: Rule Graph -> Sn_compile -> C obj / C API -> ‘e’ unit or struct -> ‘e’ verify unit / ‘e’ struct]
        graph_1 : infact_e_comp is instance;
        Verify_1() @posedge_clk is {
          var req : trans_struct = new;
          while (TRUE) do {
            graph_1.fill_item(req);
    • 14. ‘ e’ Integration Specifics
      • Test Engine is Untimed
      • The Action Functions can be Completely Auto-Generated
      • ‘ e’ Enumerated Types are mapped to Action Functions :
      • type instruction_t : [add, sub, mult];
      • Minimal Changes to existing ‘e’ environment
    • 15. Case Study: Wireless Infrastructure
      • Complex Interface with 1000’s of configurations
      • Complex ‘e’ testbench representing 100’s of man-years
      • Impossible to achieve coverage goals within reasonable time using C-R
      [Block diagram: inFact rules and action functions integrated into the Specman TB – struct(s), eVC(s), BFM, monitor, sequence driver(s), TB uVC – driving the DUT]
    • 16. Case Study: Wireless Infrastructure – Results: ATSG achieved coverage goals in 1/10th of the simulation time* (less than 100K tests vs. 850K tests)
      *“Using Algorithmic Test Generation in a Constrained Random Test Environment”, Håkan Askdal, DAC Conference 2009
    • 17. Case Study: AXI Bus Bridge Existing Design and Verification Environment
      • DUT Overview
      • Parameterizable N-Channel Bus Bridge
      • AXI bus control interface
      • Arbitrates requests from proprietary interface, performs splits or aggregation as appropriate, and generates AXI bus calls
      • Verification Environment
      • Simulator - VCS
      • Testbench - SystemVerilog
      • Stimulus - Constrained Random
      • Current Testbench
      • Generation of 20 Random Variables
      • Interdependencies described with constraints
      • Cover-points on all variables
      • Crosses between select cover-points (a code sketch of this testbench style follows the diagram below)
       [Block diagram: testbench with proprietary I/F VIP, AXI bus VIP, coverage, scoreboard, and sequence generator around the DUT – a parameterizable bus bridge with N request channels and an AXI bus interface]
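      A minimal sketch of the style of testbench described above – not the customer’s code. The variable names follow the table on the next slide; the widths and the specific constraint are invented examples of the kind of interdependency the notes describe:

        // Illustrative constrained-random request class (a few of the 20 variables)
        class bridge_req;
          rand bit [2:0]  trans;    // 5 legal transaction types
          rand bit [35:0] addr;     // 2^36 address space
          rand bit [16:0] bytes;    // up to 65,536 bytes
          rand bit [7:0]  id1, id2;

          constraint c_legal {
            trans inside {[0:4]};
            bytes inside {[1:65536]};
            // invented interdependency: wide transfers must be 256-byte aligned
            (bytes > 256) -> (addr[7:0] == 8'h00);
          }
        endclass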
    • 18. Case Study: AXI Bus Bridge Existing Verification Objectives
      • Functional Domain Space (White Column)
      • Too many legal combinations to simulate
      • Not all are important anyway
      • Coverage Domain Space (Green Column)
      • Reduced to manageable number
      • Number of crosses will be limited
      • But still too many combinations
      • Verification Goals
      • Run constrained random tests until test conditions are achieved
      • Goal # 1 - Cover each value of each variable one time
      • Goal # 2 - Cover a cross of all combinations of the bytes and addr variables
       Variable Field   Functional Domain   Coverage Domain
       trans            5                   5
       phys             2                   2
       addr             2^36                255
       id1              256                 64
       id2              256                 64
       bytes            65,536              776
       pri              2                   2
       wrap             2                   2
       start            2                   2
       end              2                   2
       seq1             2^32                64
       seq2             2^32                64
       offset1          16                  16
       offset2          16                  16
       res              4                   4
       cache            2                   2
       type1            4                   4
       ctrl1            8                   8
       type2            4                   4
       ctrl2            4                   4
    • 19. Case Study: AXI Bus Bridge – Existing Verification Objectives
      • Verification Goal # 1
      • Cover each value of each variable one time
      • Total of 20 cover-points
      • Largest cover-point contains 776 bins
      • Total of 1360 cover-point bins
      • Verification Goal # 2
      • Cover a cross of all combinations of the bytes and addr variables - minus a few select unimportant cases
      • bytes cover-point contains 776 bins
      • addr cover-point contains 255 bins
      • Total of 196,608 cross cover-point bins
       [Table repeated from slide 18: the 20 coverage-domain sizes sum to 1360 bins (Goal #1); the bytes (776 bins) and addr (255 bins) cover-points are marked for the Goal #2 cross]
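      A sketch of how Goals #1 and #2 map onto SystemVerilog functional coverage. The bin ranges are abbreviated stand-ins (the real bytes and addr domains are subsets of much larger value ranges), so treat this as illustrative rather than the customer’s covergroup:

        class bridge_cov;
          bit [16:0] bytes;
          bit [35:0] addr;

          // Goal #1 style: one bin per interesting value of each variable
          covergroup cg_values;
            cp_bytes : coverpoint bytes { bins b[] = {[1:776]}; }  // 776 value bins (abbreviated)
            cp_addr  : coverpoint addr  { bins a[] = {[0:254]}; }  // 255 value bins (abbreviated)
          endgroup

          // Goal #2 style: cross of the bytes and addr cover-points (~196,608 crosses)
          covergroup cg_cross;
            cp_bytes : coverpoint bytes { bins b[] = {[1:776]}; }
            cp_addr  : coverpoint addr  { bins a[] = {[0:254]}; }
            x_ba     : cross cp_bytes, cp_addr;
          endgroup

          function new();
            cg_values = new();
            cg_cross  = new();
            // call cg_values.sample() / cg_cross.sample() after each transaction
          endfunction
        endclass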
    • 20. Case Study: AXI Bus Bridge Results with Existing Testbench
      • Verification Results # 1
      • Total cover-point bins - 1360
      • Largest cover-point - 776 bins
      • Coverage achieved - 100%
      • Testcases required - 475,500
      • Verification Results # 2
      • bytes cover-point - 776 bins
      • addr cover-point - 255 bins
      • Total crosses - 196,608
      • Coverage achieved - 79%
      • Testcases required - 26,315,000
       [Charts: for Goal #1, CRT reaches 100% of the 1360 bins after 475,500 testcases; for Goal #2, CRT reaches only 79% of the 196,608 crosses after 26,315,000 testcases]
    • 21. Case Study: AXI Bus Bridge Updated Verification Environment
      • Design Under Test Overview
      • No Change
      • Verification Environment
      • Simulator - Replace VCS with Questa/inFact
      • Testbench - Stay with SystemVerilog
      • Stimulus - Add Graph at the top level
      • New Testbench
      • Almost No Change (see next slide)
       [Block diagram: testbench structure unchanged from slide 17]
    • 22. Case Study: AXI Bus Bridge ATSG Testbench Steps
      • Graph Development
      • Describe the domain of each variable
      • Describe constraints on variable relationships
      • Describe variables and combinations to cover
      • Graph Integration
      • Replace call to SV “randomize()” function with call to inFact “fill()” method (shown in isolation below)
      • Testbench Development Effort
      • Total time required - less than 1 day
      • inFact code written - approximately 100 lines
      • SystemVerilog code written - less than 10 lines
      • Reuse of existing testbench code - 99.9%
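      Viewed in isolation, the graph-integration step above is a one-line swap inside the existing stimulus loop. A minimal before/after, with ifg standing in for a handle to the generated graph (its fill() name is taken from the slides, not from a documented tool API):

        // before: values chosen by the SystemVerilog constraint solver
        //   assert(item.randomize());
        // after : values chosen by the inFact graph, within the same legal ranges
        ifg.fill(item);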
    • 23. Case Study: AXI Bus Bridge Coverage Closure Results
      • Verification Results # 1
      • Total cover-point bins - 1360
      • Largest cover-point - 776 bins
      • Coverage achieved - 100%
      • Testcases required - 776
      • Coverage closure acceleration - 612x
      • Verification Results # 2
      • bytes cover-point - 776 bins
      • addr cover-point - 255 bins
      • Total crosses - 196,608
      • Coverage achieved - 100%
      • Testcases required - 196,608
      • Coverage closure acceleration - >>170x
       [Charts: for Goal #1, inFact (iTBA) reaches 100% of the 1360 bins in 776 testcases vs. 475,500 for CRT (612x); for Goal #2, inFact reaches 100% of the 196,608 crosses in 196,608 testcases, by which point CRT has covered only 1.15%, reaching 79% after 26,315,000 testcases]
    • 24. Conclusions: ATG can significantly shorten time-to-coverage, and ATG can be introduced while still preserving existing verification IP. More verification in fewer cycles. THANK YOU!