
Dealing with the Three Horrible Problems in Verification


Published in: Technology, Design

  1. Dealing with the Three Horrible Problems in Verification
     Prof. David L. Dill, Department of Computer Science, Stanford University
  2. An excursion out of the ivory tower
     0-In, July 1996, initial product design discussions: there are three horrible problems in verification:
     1. Specifying the properties to be checked
     2. Specifying the environment
     3. Computational complexity of attaining high coverage
     Up to then, I had assumed that the first two were someone else's problem, and focused on the last. I still think this is a reasonable framework for thinking about verification.
  3. Topics
     • Mutation coverage (Certess)
     • System-Level Equivalence Checking (Calypto)
     • Integrating verification into early system design (research)
     • Conclusions
  4. Typical verification experience
     [Chart: bugs per week over weeks of functional testing, through tapeout and into "purgatory". (Based on fabricated data.)]
  5. Coverage Analysis: Why?
     • What aspects of the design haven't been exercised? Guides test improvement.
     • How comprehensive is the verification so far? Stopping criterion.
     • Which aspects of the design have not been well tested? Helps allocate verification resources.
  6. Coverage Metrics
     • A metric identifies important
       – structures in a design representation: HDL lines, FSM states, paths in a netlist
       – classes of behavior: transactions, event sequences
     • Metric classification based on level of representation:
       – Code-based metrics (HDL code)
       – Circuit structure-based metrics (netlist)
       – State-space-based metrics (state transition graph)
       – Functionality-based metrics (user-defined tasks)
       – Spec-based metrics (formal or executable spec)
  7. Code-Based Coverage Metrics
     • On the HDL description:
       – Line/code block coverage
       – Branch/conditional coverage
       – Expression coverage
       – Path coverage
     • Useful guide for writing test cases
     • Little overhead
     • Inadequate in practice

     always @(a or b or s) begin // mux
       if (~s && p) begin
         d = a;
         r = x;
       end
       else if (s)
         d = b;
       else
         d = 'bx;
       if (sel == 1)
         q = d;
       else if (sel == 0)
         q = 'bz;
     end
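The branch-coverage idea above can be sketched on a software model of a mux; the model, the branch labels, and the (deliberately weak) test vectors below are invented for illustration, since real coverage tools instrument the HDL itself.

```python
# Toy branch coverage on a software model of a 2:1 mux.
# Real tools instrument HDL; this just records which branches
# of the model a given test set exercises.

covered = set()

def mux(a, b, s):
    """Select b when s is true, else a, recording the branch taken."""
    if s:
        covered.add("s == 1")
        return b
    covered.add("s == 0")
    return a

all_branches = {"s == 1", "s == 0"}

# A deliberately weak test set: it never drives s = 0.
for a, b, s in [(0, 1, 1), (1, 0, 1)]:
    mux(a, b, s)

missed = all_branches - covered
print("uncovered branches:", missed)  # the s == 0 branch was never exercised
```

The uncovered set points directly at a stimulus to add, which is the "guides test improvement" role of coverage from slide 5.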
  8. Circuit Structure-Based Metrics
     • Toggle coverage: Is each node in the circuit toggled?
     • Register activity: Is each register initialized? Loaded? Read?
     • Counters: Are they reset? Do they reach the max/min value?
     • Register-to-register interactions: Are all feasible paths exercised?
     • Datapath-control interface: Are all possible combinations of control and status signals exercised?
     [Diagram: control FSM (s_init, s2–s6) driving a datapath.]
     (0-In checkers have these kinds of measures.)
  9. Observability problem
     • A buggy assignment may be stimulated, but still missed. Examples:
       – Wrong value generated speculatively, but never used.
       – Wrong value computed and stored in a register, read 1M cycles later, but simulation doesn't run that long.
  10. Detection terminology
     • To detect a bug:
       – Stimuli must activate buggy logic.
     [Diagram: verification environment; stimuli drive the design under verification (containing the bug) and a reference model, whose outputs are compared.]
  11. Detection terminology
     • To detect a bug:
       – Stimuli must activate buggy logic.
       – The bug must propagate to a checker.
     [Same diagram, with bug propagation shown.]
  12. Detection terminology
     • To detect a bug:
       – Stimuli must activate buggy logic.
       – The bug must propagate to a checker.
       – The checker must detect the bug.
     [Same diagram, with propagation and detection shown.]
  13. Detection terminology
     • Traditional verification metrics do not account for non-propagated or non-detected bugs.
     [Same diagram: traditional metrics cover activation only; there is no visibility into propagation or detection.]
  14. Mutation testing
     • To evaluate a testbench's bug detection ability:
       – Inject fake bugs into the design ("mutations").
       – Simulate and see whether they are detected.
       – If not, there is a potential gap in the testbench.
     • There can be many kinds of mutations:
       – "Stuck-at" faults
       – Wrong logical or other operators
     • Idea originates in software testing, but is obviously related to testability.
     • Efficient implementation is a challenge.
  15. Certess approach to Mutation Analysis
     1. Fault model analysis: static analysis of the design → report
     2. Fault activation analysis: analysis of the verification environment behavior → report
     3. Qualify the verification environment: measure its ability to detect mutations → report
     Iterate if needed.
  16. Avoiding the horrible problems
     • Qualify the test framework, not the design.
       – Environment/properties are in the existing testbench.
     • A high-quality coverage metric targets resources at maximizing useful coverage.
  17. SEC Advantages
     • SEC vs. simulation:
       – Simulation is resource intensive, with lengthy run times; SEC runs orders of magnitude faster than simulation.
       – Vector generation is effort-laden and may be a source of errors; SEC requires minimal setup and no test vectors.
       – Simulation output often requires further processing for answers; SEC is exhaustive (all sequences over all time).
     • SEC vs. property checkers:
       – Properties are created to convey specification requirements; SEC uses the golden model as the specification.
       – Properties are often incomplete, and not independently verifiable.
       – Properties are time consuming to construct.
  18. Enabling ESL™
     • SLEC™ comprehensively proves functional equivalence.
     • Identifies design differences (bugs).
     • Supports sequential design changes:
       – State changes
       – Temporal differences
       – I/O differences
     [Diagram: Reference Model =? Implementation Model, checked by SLEC™.]
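The "=?" check above can be illustrated with two software models: a reference and a refinement compared against it on every input. The adder models below are invented, and real SLEC proves equivalence formally rather than by enumeration; brute force works here only because the input space is tiny.

```python
from itertools import product

def reference_model(a, b):
    """Spec-level 4-bit adder (golden model)."""
    return (a + b) & 0xF

def implementation_model(a, b):
    """Hypothetical buggy refinement: XOR drops the carry chain."""
    return (a ^ b) & 0xF

def check_equivalence(ref, impl, width=4):
    """Compare two models over the whole bounded input space."""
    for a, b in product(range(1 << width), repeat=2):
        if ref(a, b) != impl(a, b):
            return ("DIFFERENCE FOUND", (a, b))  # counterexample
    return ("VERIFIED EQUIVALENT", None)

print(check_equivalence(reference_model, implementation_model))
```

The first mismatching input pair plays the role of SLEC's short counterexample: a concrete stimulus the designer can debug against, with no testbench needed.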
  19. SLEC Finds Functional Differences in C-to-C Verification
     Customer example:
     • Verify that the HLS model is functionally equivalent to the reference model.
     • Simulation uncovered no differences, for the given testbench.
     • SLEC System found differences between the two models:
       – The reference model was incorrect.
       – Probable corner case, not easily detectable by simulation.
     • SLEC System finds all possible errors or inconsistencies. Simulation is not exhaustive, and therefore cannot fully prove equivalence.
     • Typical functional differences introduced during refinement:
       – Code optimization for HLS
       – Datapath word size optimization
       – Ambiguous ESL code (e.g., out-of-bounds array access)
     [Diagram: reference model and HLS model, each in a wrapper, driven by user-defined input constraints; C-to-C verification failed: DIFFERENCE FOUND.]
  20. Design Bugs Caught by SLEC System
     • Wireless baseband: high-level synthesis bug in array's access range
     • Video processing: design bug in logic on asynchronous reset line
     • Video processing: high-level synthesis bug in sign extension
     • Custom DSP block: design bug in normalization of operands
     • DCT function: high-level synthesis bug in "wait_until()" interpretation
     • Image resizer: design bug at proof depth = 1
  21. System-level Formal Verification
     • Sequential Logic Equivalence Checking (SLEC):
       – Leverages system-level verification
       – Comprehensive verification: 100% coverage
       – Quick setup: no testbenches required
       – Rapid results: eliminates long regressions
       – Focused debug: short counterexamples
     • Why is it needed?
       – Independent verification
       – Finds bugs caused by language ambiguities or incorrect synthesis constraints (e.g., shift left by -1, divide by zero)
       – Verifies RTL ECOs
       – Parallels the current RTL synthesis methodology
  22. High-level Synthesis Bugs Found by SLEC
     • Multi-media processor: dead-end states created
     • FFT: combinational loops created
     • Quantize: divide by zero defined in RTL, but undefined in C code
     • Ultra-wideband filter: shift left or right by N bits, when the value being shifted is less than N bits
     • Multi-media processor: a shift by an integer in the C code could be a shift by a negative number, which is undefined in C
  23. RTL-to-RTL Verification with Sequential Differences
     • RTL pipelining:
       – Latency and throughput changes
       – Clock speed enhancement
     [Diagram: unpipelined vs. pipelined cmd/data/calc/out schedules, compared with "=?"; result is "verified equivalent" or a counterexample.]
  24. RTL-to-RTL Verification with Sequential Differences
     • RTL resource sharing:
       – State and latency changes
       – Size optimization
     [Diagram: sum of A, B, C computed with two adders vs. one shared adder with clock/reset, compared with "=?"; result is "verified equivalent" or a counterexample.]
  25. RTL-to-RTL Verification with Sequential Differences
     • RTL re-timing:
       – State changes
       – Slack adjustment
     • Allows micro-architecture modifications without breaking testbenches.
     [Diagram: register moved across combinational logic, reducing logic depth, compared with "=?"; result is "verified equivalent" or a counterexample.]
  26. Designer's Dilemma: Efficient Design for Power
     • At 90nm and below, power is becoming the most critical design constraint:
       – Exponential increase in leakage power consumption
       – Quadratic increase in power density
     • Clock gating is the most common design technique used for reducing power:
       – Designers manually add clock gating to control dynamic power.
     • Clock gating is most efficiently done at the RTL level, but is error prone:
       – Mistakes in implementation cause delays and re-spins.
       – Difficult to verify with simulation regressions:
         • Requires modifications to testbenches
         • Insufficient coverage of clock gating dependencies
       – Aggressive clock gating approaches are sometimes rejected due to verification complexity.
  27. Addressing Power in the Design Flow
     • Power management schemes are considered globally as part of the system model and initial RTL functionality:
       – Sleep modes
       – Power down
     • Power optimizations are local changes made to RTL that do not affect the design functionality:
       – Disabling previous pipeline stages when the data is not used
       – Data-dependent computation, like multiply by zero
     [Flow: system model → high-level synthesis or manual creation → RTL → manual RTL optimization → optimized RTL → physical implementation.]
  28. [Diagram: combinational clock gating (enable computed within a cycle) is checked by combinational equivalence checking; sequential clock gating (enables derived across cycles) requires sequential equivalence checking.]
  29. Research
     • Verification is currently based on finding and removing bugs.
     • Finding bugs earlier in the design process would be beneficial:
       – Early descriptions (protocol, microarchitecture) are smaller and more tractable.
       – Early bugs are likely to be serious, possibly lethal.
       – Bug cost goes up by >10x at each stage of design.
     • People have been saying this for years. Why can't we start verifying at the beginning of the design?
  30. An Experiment
     • DARPA-sponsored "Smart Memories" project starting up.
     • Have a verification PhD student (Jacob Chang) work with the system designers:
       – Try to verify subsystems as soon as possible.
       – Understand what "keeps designers awake at night."
       – Try to understand "design for verification" (willingness to trade off some system efficiency for verification efficiency).
  31. Initial Results: Dismal
     • Used a variety of formal verification tools:
       – SRI's PVS system
       – Cadence SMV
     • Did some impressive verification work.
     • Didn't help the design much:
       – By the time something was verified, the design had changed.
       – We knew this would be a problem, but our solutions weren't good enough.
  32. Desperate measures required
     • We discarded the tools and used pen and paper.
     • This actually helped!
     • Real bugs were found.
     • Design principles were clarified.
     • Designers started listening.
  33. What did we learn?
     • Early verification methods need to be nimble:
       – Must be able to keep up with design changes.
     • Existing formal methods are not nimble:
       – They require comprehensive descriptions.
       – A high level of abstraction helps...
       – but one description still takes on too many issues,
       – so design changes necessitate major changes in descriptions. Too slow!
  34. Approach: Perspective-based Verification
     • Need to minimize the number of issues that we tackle at one time.
     • Perspective: a minimal high-level formalization of a design to analyze a particular class of properties.
       – Perspectives should be based on the designer's abstractions: what does he/she draw on the whiteboard?
       – Should capture the designer's reasoning about correctness.
  35. Example: Resource dependencies
     • Verify that the cache coherence message system is deadlock free.
     • Model: a dependency graph, checked for cycles.
     • Analysis method: search for cycles. In this case: by hand!
     • System-level deadlocks are notoriously hard to find using conventional formal verification tools.
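The "search for cycles" analysis can be sketched directly: model the resource dependencies as a directed graph and run a DFS that reports any back edge as a cycle. The edge names below are hypothetical stand-ins for the buffers and virtual channels in the slides.

```python
# Deadlock check as cycle detection in a resource dependency graph.

def find_cycle(graph):
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:        # back edge: cycle found
                return stack[stack.index(succ):] + [succ]
            if color.get(succ, WHITE) == WHITE:
                color.setdefault(succ, WHITE)
                cycle = dfs(succ)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# Hypothetical resource-dependency edges between controllers and channels:
deps = {
    "cache_ctrl":  ["vc_request"],
    "vc_request":  ["memory_ctrl"],
    "memory_ctrl": ["vc_reply"],
    "vc_reply":    ["cache_ctrl"],   # completes a cycle: potential deadlock
}
print(find_cycle(deps))
```

A reported cycle is a candidate deadlock: every resource on it is waiting on the next, which matches the "all channels must be congested" condition that makes the bug so hard to hit in simulation.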
  36. Dependency graph (cache & memory)
     [Figure: resource dependency graph for the cache and memory controllers.]
  37. Resource Dependency Perspective
     1. Partial formalization of the design:
        – Relevant details:
          • Request buffer dependencies
          • Protocol dependencies (e.g., cancel must follow all other SyncOp commands)
          • Virtual channels in networks
        – Irrelevant details:
          • Network ordering requirements
          • Cache controller ordering requirements
          • Buffer implementation
     2. One class of verification properties: deadlock freedom.
     3. Captures why the property is correct: ensure there is no cycle in the resource dependency graph.
  38. Bug found
     • A dependency cycle was found, taking into account the dependency behavior of:
       – Virtual channels
       – The memory controller
       – The cache controller
     • Easy to find once the formal model is constructed; hard to find using simulation (all channels must be congested).
     • The bug was found before implementation.
     [Diagram: cycle through a cache controller, the memory controller, and a second cache controller via SyncMiss, Sync Op Successful/Unsuccessful, WakeUp, and Replay messages.]
  39. Parallel Transaction Perspective
     • Many systems process a set of transactions:
       – Memory reads/writes/updates
       – Packet processing/routing
     • The user thinks of transactions as non-interfering processes.
     • The hardware needs to maintain this illusion.
     • Model: state transaction diagram.
     • Analysis: systematically check whether one transaction can interfere with another.
     • Several important bugs were found by manually applying this method.
  40. Parallel Transaction Perspective
     1. Partial formalization of the design:
        – Relevant details:
          • Effect of a transition on self and others
          • Locking mechanism
        – Irrelevant details:
          • Timing and ordering information
          • Buffering issues
          • Deadlock issues
     2. Targets one verification property: same behavior of a single process in a multi-process environment.
     3. Captures why the property is correct: interrupts are conflict free.
  41. Transaction Diagram Verifier
     • Tool developed for verification of the parallel transaction perspective.
     • User input:
       – Invariants
       – Transition guards
       – Transition state changes
     • Invariants are easy to see to be true for a single process.
     • TDV verifies each invariant for a single process, plus:
       – Invariants remain true even if other processes execute at the same time.
  42. TDV
     • User supplies:
       – Blocks (transaction steps): pre-conditions, post-conditions, guards, assignments
       – Links between blocks (control flow)
     • The tool loops through all pairs of blocks:
       – Constructs the verification tasks
       – Verifies the tasks through another tool: the STP decision procedure
     • Not a model checker:
       – Verifies an unbounded number of transactions
       – Uses theorem-proving technology
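A rough sketch of TDV's pairwise check, with all names and blocks invented: for every pair of transaction steps run by different processes, confirm that interleaving one step after the other cannot falsify the invariant. TDV discharges such obligations symbolically with the STP decision procedure over unbounded transactions; this sketch merely enumerates a few concrete states.

```python
from itertools import product

# A block is (guard, action) on a shared state; these toy blocks model a lock.
blocks = {
    "acquire": (lambda s: s["lock"] is None,
                lambda s, p: {**s, "lock": p}),
    "release": (lambda s: True,
                lambda s, p: {**s, "lock": None} if s["lock"] == p else s),
}

def invariant(state):
    """The lock is free or held by a known process."""
    return state["lock"] in (None, "P0", "P1")

def check_pairs(states, procs=("P0", "P1")):
    """For each pair of blocks run by different processes, check that the
    invariant survives the interleaved execution."""
    for (n1, (g1, a1)), (n2, (g2, a2)) in product(blocks.items(), repeat=2):
        for s in states:
            if not (invariant(s) and g1(s)):
                continue
            s1 = a1(s, procs[0])          # process 0 runs its block
            if g2(s1):
                s2 = a2(s1, procs[1])     # process 1 interleaves its block
                if not invariant(s2):
                    return ("INTERFERENCE", n1, n2, s)
    return ("OK",)

states = [{"lock": v} for v in (None, "P0", "P1")]
print(check_pairs(states))
```

The quadratic loop over block pairs is the key structural idea from the slide: per-pair obligations stay small even when the whole design is too big to verify at once.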
  43. Tradeoffs
     • Sacrifices must be made:
       – Perspectives are necessarily partial.
       – Not easy to link perspectives to RTL.
       – Not easy to link perspectives to each other.
     • ...but, at least, you can verify or find bugs while they're still relevant to the design!
  44. The horrible problems
     • Perspectives omit irrelevant details, including irrelevant environmental constraints.
     • Properties are at the level the designer thinks at, so they are easier to extract.
     • Computational complexity is reduced as well.
  45. Conclusions
     • Practical verification technology must take account of the three horrible problems.
     • Products currently on the market do this in innovative ways:
       – Coverage analysis that is a closer match to actual bug-finding ability (evaluates the existing verification environment).
       – System-level equivalence checking avoids the need to add assertions (the environmental constraint problem is reduced).
     • We need a new perspective on system-level verification.