Dealing with the Three Horrible Problems in Verification Prof. David L. Dill Department of Computer Science Stanford University
An excursion out of the ivory tower 0-In, July 1996, initial product design discussions:  There are  three horrible problems in verification: Specifying the properties to be checked Specifying the environment Computational complexity of attaining high coverage Up to then, I had assumed that the first two were someone else’s problem, and focused on the last. I still think this is a reasonable framework for thinking about verification.
Topics Mutation coverage (Certess) System-Level Equivalence Checking (Calypto) Integrating verification into early system design (research). Conclusions
Typical verification experience Weeks Bugs per week (Based on fabricated data) Functional testing Tapeout Purgatory
Coverage Analysis: Why? What aspects of design haven’t been exercised? Guides test improvement How comprehensive is the verification so far? Stopping criterion Which aspects of the design have not been well-tested? Helps allocate verification resources.
Coverage Metrics A metric identifies important  structures in a design representation HDL lines, FSM states, paths in netlist classes of behavior Transactions,  event sequences Metric classification based on level of representation. Code-based  metrics (HDL code) Circuit structure-based metrics (Netlist) State-space based metrics (State transition graph) Functionality-based metrics (User defined tasks) Spec-based metrics (Formal or executable spec)
Code-Based Coverage Metrics On the HDL description: line/code-block coverage, branch/conditional coverage, expression coverage, path coverage. A useful guide for writing test cases, with little overhead, but inadequate in practice. Example:

    always @(a or b or s) // mux
    begin
      if (~s && p)
        d = a;
      else if (s)
        d = b;
      else
        d = 'bx;
      if (sel == 1)
        q = d;
      else if (sel == 0)
        q = z;
    end
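The gap these metrics leave can be sketched in Python (an invented toy mux, not tied to the Verilog above): a small test set can exercise most of the code while a branch outcome is never taken, and the coverage report is the only thing that reveals it.

```python
# Illustrative sketch: measuring branch-outcome coverage of a mux-like
# function. The function, branch labels, and test set are all invented.

def mux(a, b, s, sel, hits):
    if not s:
        hits.add("s0"); d = a
    else:
        hits.add("s1"); d = b
    if sel == 1:
        hits.add("sel1"); return d
    else:
        hits.add("sel0"); return 0

ALL_BRANCHES = {"s0", "s1", "sel0", "sel1"}
hits = set()
for a, b, s, sel in [(1, 0, 0, 1), (0, 1, 1, 1)]:   # a weak test set
    mux(a, b, s, sel, hits)

coverage = len(hits) / len(ALL_BRANCHES)
assert hits == {"s0", "s1", "sel1"}   # the 'sel0' outcome is never taken
assert coverage == 0.75
```

Real coverage tools instrument the HDL automatically; the point here is only that "all lines ran" and "all branch outcomes ran" are different questions.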
Circuit Structure-Based Metrics Toggle coverage: Is each node in the circuit toggled? Register activity: Is each register initialized? Loaded? Read? Counters: Are they reset? Do they reach the max/min value? Register-to-register interactions: Are all feasible paths exercised? Datapath-control interface: Are all possible combinations of control and status signals exercised? (0-In checkers have these kinds of measures.) [Figure: control FSM with states s_init, s2-s6 driving a datapath.]
Observability problem A buggy assignment may be stimulated, but still missed. Examples: A wrong value is generated speculatively, but never used. A wrong value is computed and stored in a register, then read 1M cycles later, but the simulation doesn't run that long.
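A minimal Python sketch of the slide's point (the ALU and checker are invented for illustration): the bug is activated on every run, yet an output checker only catches it when the wrong value is actually used.

```python
# Sketch (not from the talk): a buggy value can be activated yet never
# propagate to a checked output, so output-only checking misses it.

def buggy_alu(a, b, use_result):
    """Hypothetical design: 'sub' is wrongly computed as a + b."""
    sub = a + b          # BUG: should be a - b (activated on every call)
    add = a + b
    return sub if use_result else add

def check(a, b, use_result):
    expected = (a - b) if use_result else (a + b)
    return buggy_alu(a, b, use_result) == expected

# The bug is activated here, but the wrong value is never used, so the
# checker sees a correct output: the bug does not propagate.
assert check(3, 2, use_result=False)      # passes despite the bug
assert not check(3, 2, use_result=True)   # only this stimulus detects it
```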
Detection terminology To detect a bug: (1) the stimuli must activate the buggy logic; (2) the bug must propagate to a checker; (3) the checker must detect the bug. [Figure: verification environment in which stimuli drive both the design under verification (containing the bug) and a reference model, with a compare block checking their outputs.]
Detection terminology Traditional verification metrics do not account for non-propagated or non-detected bugs; such bugs have no visibility with traditional metrics. [Figure: same verification-environment diagram, with propagation and detection outside what traditional metrics see.]
Mutation testing To evaluate a testbench's bug detection ability: inject fake bugs into the design ("mutations"), simulate, and see whether they are detected. If not, there is a potential gap in the testbench. There can be many kinds of mutations: "stuck-at" faults, wrong logical or other operators. The idea originates in software testing, but is obviously related to testability. Efficient implementation is a challenge.
Certess approach to Mutation Analysis Fault  Model  Analysis Fault  Activation  Analysis Qualify the Verif. Env. Static analysis of the design Analysis of the verification environment behavior Measure the ability of the verification environment to detect mutations Iterate if needed Report Report Report
Avoiding the horrible problems Qualify test framework, not design Environment/properties are in existing test bench. High-quality coverage metric targets resources at maximizing useful coverage.
SEC Advantages SEC vs. simulation: Simulation is resource-intensive, with lengthy run times – SEC runs orders of magnitude faster than simulation. Vector generation is labor-intensive and may itself introduce errors – SEC requires minimal setup and no test vectors. Simulation output often requires further processing for answers – SEC is exhaustive (all sequences over all time). SEC vs. property checkers: Properties are created to convey specification requirements – SEC uses the golden model as the specification. Properties are often incomplete, and not independently verifiable. Properties are time-consuming to construct.
Enabling ESL™ SLEC™ SLEC comprehensively proves functional equivalence, identifies design differences (bugs), and supports sequential design changes: state changes, temporal differences, I/O differences. [Figure: reference model checked "= ?" against implementation model.]
SLEC Finds Functional Differences in C-C Verification Customer example: verify that the HLS model is functionally equivalent to the reference model. Simulation uncovered no differences - for the given testbench. SLEC System found differences between the two models: the reference model was incorrect, a probable corner case not easily detectable by simulation. SLEC System finds all possible errors or inconsistencies; simulation is not exhaustive, and therefore cannot fully prove equivalence. Typical functional differences introduced during refinement: code optimization for HLS, datapath word-size optimization, ambiguous ESL code (e.g., out-of-bounds array access). [Figure: behavioral C/C++ and HLS C/C++ wrapped as reference model and HLS model, compared under user-defined input constraints; C-to-C verification failed: DIFFERENCE FOUND!]
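The exhaustive-vs.-simulation contrast can be sketched in Python (models and bug invented, not the customer's code): a directed testbench passes, while checking the whole input space exposes the one corner case.

```python
# Sketch: exhaustive checking over a small input space finds a corner
# case that a fixed testbench misses - the spirit of SLEC vs. simulation.

WIDTH = 4  # hypothetical 4-bit datapath

def reference(x):
    return (x + 1) % (1 << WIDTH)        # increment with wrap-around

def implementation(x):
    # Refinement bug: the increment is dropped at the wrap-around value.
    return x + 1 if x < (1 << WIDTH) - 1 else 15   # should wrap to 0

testbench = [0, 1, 2, 7]                 # typical directed vectors: all pass
assert all(reference(v) == implementation(v) for v in testbench)

# "Equivalence check": compare over the entire input space.
mismatches = [x for x in range(1 << WIDTH)
              if reference(x) != implementation(x)]
assert mismatches == [15]                # the corner case simulation missed
```

Real designs have state, so SLEC must reason over sequences, not just single inputs; the exhaustiveness argument is the same.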
Design Bugs Caught by SLEC System Bugs found, and the application in which each was found:
- High-level synthesis bug in "wait_until()" interpretation – DCT function
- High-level synthesis bug in array's access range – wireless baseband
- Design bug in logic on asynchronous reset line – video processing
- Design bug in normalization of operands – custom DSP block
- Design bug at proof depth = 1 – image resizer
- High-level synthesis bug in sign extension – video processing
System-level Formal Verification Sequential Logic Equivalence Checking (SLEC) leverages system-level verification: comprehensive verification – 100% coverage; quick setup – no testbenches required; rapid results – eliminates long regressions; focused debug – short counterexamples. Why it is needed: independent verification; finding bugs caused by language ambiguities or incorrect synthesis constraints (shift left by -1, divide by zero); verifying RTL ECOs. Parallels the current RTL synthesis methodology.
High-level Synthesis Bugs Found by SLEC Bugs found, and the application in which each was found:
- Shift by an integer in the C code could be a shift by a negative amount, which is undefined in C – multimedia processor
- Dead-end states created – multimedia processor
- Combinational loops created – FFT
- Shift left or right by N bits when the value being shifted is less than N bits wide – ultra-wideband filter
- Divide by zero defined in RTL, but undefined in C code – Quantize
RTL to RTL Verification with Sequential Differences RTL pipelining: latency and throughput changes; clock-speed enhancement. [Figure: unpipelined timeline (cmd/data/calc/out per transaction) vs. pipelined timeline (calcA/calcB stages overlapped across transactions 1-4), checked "= ?": verified equivalent, or a counterexample.]
RTL to RTL Verification with Sequential Differences RTL resource sharing: state and latency changes; size optimization. [Figure: Sum = A + B + C computed with two adders vs. one shared adder with clk/reset, checked "= ?": verified equivalent, or a counterexample.]
RTL to RTL Verification with Sequential Differences RTL re-timing: state changes; slack adjustment. Allows micro-architecture modifications without breaking testbenches. [Figure: combinational logic moved across a D flip-flop between clocks C1 and C2, reducing the critical path, checked "= ?": verified equivalent, or a counterexample.]
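The kind of sequential difference these three slides describe can be sketched in Python (toy models invented for the example): a 1-stage pipelined design matches its combinational reference only after aligning outputs by the latency, which is exactly what a purely combinational checker cannot do.

```python
# Sketch: a combinational reference vs. a registered (pipelined) version
# of the same function. Equivalence holds modulo a 1-cycle latency shift.

def comb_model(x):
    return (x * 3 + 1) & 0xFF

class PipelinedModel:
    """Same function, but registered: output lags input by one cycle."""
    def __init__(self):
        self.stage = 0          # pipeline register (reset value)
    def clock(self, x):
        out, self.stage = self.stage, (x * 3 + 1) & 0xFF
        return out

dut = PipelinedModel()
stimulus = [5, 10, 20, 40]
outputs = [dut.clock(x) for x in stimulus]

# Equivalent only after accounting for the 1-cycle latency:
assert outputs[1:] == [comb_model(x) for x in stimulus[:-1]]
```

An SLEC-style check proves this alignment for all input sequences, not just one trace as here.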
Designer's Dilemma – Efficient Design for Power At 90nm and below, power is becoming the most critical design constraint: exponential increase in leakage power consumption, quadratic increase in power density. Clock gating is the most common design technique used for reducing power; designers manually add clock gating to control dynamic power. Clock gating is most efficiently done at the RTL level, but is error-prone: mistakes in implementation cause delays and re-spins, and it is difficult to verify with simulation regressions (requires modifications to testbenches; insufficient coverage of clock-gating dependencies). Aggressive clock-gating approaches are sometimes rejected due to verification complexity.
Addressing Power in the Design Flow Power management schemes are considered globally as part of the system model and initial RTL functionality: sleep modes, power down. Power optimizations are local changes made to the RTL that do not affect the design functionality: disabling previous pipeline stages when the data is not used; data-dependent computation, like multiplying by zero. [Flow: System Model → (high-level synthesis or manual creation) → RTL → manual RTL optimization → Optimized RTL → physical implementation.]
Combinational clock gating can be verified with combinational equivalence checking; sequential clock gating requires sequential equivalence checking. [Figure: register banks with clock-gating (CG) cells driven by en and clk, in combinational and sequential gating configurations.]
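Why sequential gating needs sequential checking can be sketched in Python (toy registers invented for the example): on a gated-off cycle the register states of the two designs diverge, even though no observable output ever differs.

```python
# Sketch: an always-loading register vs. a sequentially clock-gated one.
# Cycle-by-cycle state equality fails, but observed outputs still match.

class Reg:
    def __init__(self):
        self.q = 0
    def clock(self, d, en):
        if en:
            self.q = d
        return self.q

ungated, gated = Reg(), Reg()
states_differed = False
for d, en in [(3, True), (7, False), (9, True)]:
    # The ungated register loads every cycle; downstream logic reads it
    # only on valid cycles. The gated register skips the dead cycle.
    u = ungated.clock(d, True)
    g = gated.clock(d, en)
    if en:
        assert u == g            # outputs agree whenever observed
    elif u != g:
        states_differed = True   # internal states diverge (7 vs. 3)

assert states_differed
assert ungated.q == 9 and gated.q == 9   # states reconverge
```

A combinational checker compares state bits cycle by cycle and would flag the dead cycle as a mismatch; a sequential checker proves the outputs equal over all time.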
Research Verification currently based on finding and removing bugs. Finding bugs earlier in the design process would be beneficial Early descriptions (protocol, microarchitecture) are smaller, more tractable Early bugs are likely to be serious, possibly lethal Bug cost goes up by >10x at each stage of design. People have been saying this for years. Why can’t we start verifying at the beginning of the design?
An Experiment DARPA-sponsored “Smart Memories” project starting up Have verification PhD student (Jacob Chang) work with system designers Try to verify subsystems as soon as possible. Understand what “keeps designers awake at night.” Try to understand “Design for verification” (willing to trade off some system efficiency for verification efficiency).
Initial Results: Dismal Used a variety of formal verification tools: SRI's PVS system, Cadence SMV. Did some impressive verification work, but it didn't help the design much: by the time something was verified, the design had changed. We knew this would be a problem, but our solutions weren't good enough.
Desperate measures required We discarded tools, used pen and paper This actually helped! Real bugs were found Design principles were clarified Designers started listening
What did we learn? Early verification methods need to be nimble Must be able to keep up with design changes. Existing formal methods are not nimble. Require comprehensive descriptions High level of abstraction helps… But one description still takes on too many issues So design changes necessitate major changes in descriptions – too slow!
Approach: Perspective-based Verification Need to minimize the number of issues that we tackle at one time. Perspective :  Minimal high-level formalization of a design to analyze a particular class of properties. Perspectives should be based on designer’s abstractions What does he/she draw on the whiteboard? Should capture designer’s reasoning about correctness
Example: Resource dependencies Verify that the cache-coherence message system is deadlock-free. Model: a dependency graph. Analysis method: search for cycles (in this case, by hand!). System-level deadlocks are notoriously hard to find using conventional formal verification tools.
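The analysis (done by hand in the project) is mechanically simple; a Python sketch with a hypothetical buffer-dependency graph, not the actual Smart Memories one:

```python
# Sketch: model "A can free only after B frees" as a directed edge A -> B
# and search for a cycle, which signals a potential deadlock.

def find_cycle(graph):
    """Return one dependency cycle as a list of nodes, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color, stack = {}, []

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for m in graph.get(n, ()):
            if color.get(m, WHITE) == GRAY:
                return stack[stack.index(m):]      # back edge: cycle
            if color.get(m, WHITE) == WHITE:
                cycle = dfs(m)
                if cycle:
                    return cycle
        color[n] = BLACK
        stack.pop()
        return None

    for n in graph:
        if color.get(n, WHITE) == WHITE:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None

# Hypothetical resources: the request queue waits on a virtual channel,
# which waits on the reply queue, which waits on the request queue.
deps = {"req_q": ["vchan"], "vchan": ["reply_q"], "reply_q": ["req_q"]}
assert find_cycle(deps) == ["req_q", "vchan", "reply_q"]
```

The hard part is not the cycle search but choosing which dependencies to model, which is the point of the perspective.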
Dependency graph (cache & memory) [Figure: resource-dependency graph between cache-controller and memory-controller resources.]
Resource Dependency Perspective Partial formalization of design Relevant details Request buffers dependency Protocol dependencies e.g. cancel must follow all other SyncOp commands Virtual channel in networks Irrelevant details Network ordering requirements Cache controller ordering requirements Buffer implementation One class of verification properties Deadlock free Captures why the property is correct Ensure no cycle in resource dependency
Bug found A dependency cycle was found, taking into account the dependency behavior of the virtual channel, the memory controller, and the cache controller. Easy to find once the formal model is constructed; hard to find using simulation, since all channels must be congested. The bug was found before implementation. [Figure: cycle through cache controller, memory controller, and cache controller via SyncMiss, SyncOp successful/unsuccessful, wake-up, and replay messages.]
Parallel Transaction Perspective Many systems process a set of transactions Memory reads/writes/updates Packet processing/routing User thinks of transactions as non-interfering processes Hardware needs to maintain this illusion. Model: State transaction diagram Analysis: Systematically check whether one transaction can interfere with another. Several important bugs were found by manually applying this method.
Parallel Transaction Perspective Partial formalization of design. Relevant details: effect of a transition on self and others; locking mechanism. Irrelevant details: timing and ordering information; buffering issues; deadlock issues. Targets one verification property: a single process behaves the same in a multi-process environment. Captures why the property is correct: interrupts are conflict-free.
Transaction Diagram Verifier Tool developed for verification of the parallel transaction perspective. User input: invariants, transition guards, transition state changes. Invariants should be easy to see to hold for a single process. TDV verifies each invariant for a single process, plus that the invariants remain true even if other processes execute at the same time.
TDV User supplies: blocks (transaction steps) with pre-conditions, post-conditions, guards, and assignments; links between blocks (control flow). The tool loops through all pairs of blocks, constructs the verification tasks, and verifies the tasks through another tool, the STP decision procedure. Not a model checker: verifies an unbounded number of transactions, using theorem-proving technology.
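The pairwise loop can be sketched in Python (assumed semantics; TDV discharges each pair symbolically with STP, whereas this toy just executes concrete steps over an invented lock-and-counter state):

```python
# Sketch of the pairwise interference check: for every ordered pair of
# steps (a, b), if a's guard holds, let another process's step b fire
# in between, then fire a, and recheck the invariant.
import itertools

# Hypothetical transaction steps over shared state {"cnt", "lock"}:
# each step returns (guard, action) evaluated against the current state.
def acquire(s):
    return (not s["lock"], lambda: s.update(lock=True))

def bump(s):
    return (s["lock"], lambda: s.update(cnt=s["cnt"] + 1))

def invariant(s):
    return s["cnt"] >= 0

def interference_free(steps, initial):
    for a, b in itertools.product(steps, repeat=2):
        s = dict(initial)
        guard_a, act_a = a(s)
        if not guard_a:
            continue
        guard_b, act_b = b(s)    # another process's step interleaves
        if guard_b:
            act_b()
        act_a()
        if not invariant(s):
            return False
    return True

assert interference_free([acquire, bump], {"cnt": 0, "lock": False})
```

Enumerating pairs of steps (rather than whole interleaved executions) is what keeps the number of verification tasks quadratic instead of exponential.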
Tradeoffs Sacrifices must be made: perspectives are necessarily partial; it is not easy to link perspectives to RTL, or to each other. But, at least, you can verify or find bugs while they're still relevant to the design!
The horrible problems Perspectives omit irrelevant details, including irrelevant environmental constraints. Properties are at the level the designer thinks at, so they are easier to extract. Computational complexity is reduced as well.
Conclusions Practical verification technology must take account of the three horrible problems Products currently on the market do this in innovative ways Coverage analysis that is a closer match to actual bug-finding ability Evaluates existing verification environment System-level equivalence checking avoids need to add assertions Environmental constraint problem reduced. We need a new perspective on system-level verification  

Dill, May 2008


Editor's Notes

  • #24 (also applies to #25 and #26) When the initial block of design does not meet timing, engineers must transform the RTL into a faster implementation. A common technique is re-timing: by moving logic around, long paths can be shortened. However, moving logic around completely changes the value/meaning of the state elements, and combinational formal techniques do not support such changes. SLEC handles this type of design easily. Outside of the changes, the two designs should have identical functionality; there is no requirement for internal state points to map/match.