Introducing LCS to Digital Design Verification
Charalambos Ioannides. "Introducing LCS to Digital Design Verification". IWLCS, 2011
Presentation Transcript

  • 1. Introducing LCS to Digital Design Verification
    Charalambos Ioannides
  • 2. What is a Design?
    • Simply: a collection of code files that define the functionality of a single digital electronics component
    • A piece of code in a hardware description language (Verilog or VHDL)
    • A design goes through various phases: List of Requirements → Code → List of Gates → Silicon
  • 3. What is Design Verification?
    • The process used to ensure a design's functional correctness with respect to requirements and specification prior to manufacture
    • The ultimate goal is to discover as many bugs in a design as possible before shipping the product to the customer
    • Bug discovery is maximized by maximizing coverage (code, functional) of the design by means of testing
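To make "maximize coverage" concrete: coverage is simply the fraction of tracked points (code lines, functional events) that the tests exercise. A minimal sketch in Python; the function name and coverage points are hypothetical, not from the slides:

```python
def coverage(points_hit: set, all_points: set) -> float:
    """Fraction of tracked coverage points (code lines, functional events) hit."""
    return len(points_hit & all_points) / len(all_points)

# Hypothetical functional-coverage points, and the subset hit by two tests
all_points = {"reset", "overflow", "underflow", "parity_err"}
hit = {"reset", "overflow"}
print(coverage(hit, all_points))  # 0.5
```

A verification run aims to drive this ratio toward 1.0 while running as few tests as possible.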
  • 4. Simulation-based DV
    [Diagram: coverage-directed test-generation loop — test-generation techniques (directed, biased random) drive the test generator (TG); its tests run on the simulator (SIM) against the design under verification (DUV); the resulting coverage feeds a machine-learning (ML) component that learns biases for the TG.]
  • 5. Why is DV Hard?
    • Increasing digital design complexity
      • Greater automation of the design process than of the verification process
      • Increasing miniaturization of silicon chips
      • Competition: increasing feature demands by customers
      • Power management
    • Increasing verification effort
      • Many hundreds of tests; 7-month projects
  • 6. Why is DV Hard?
    [Diagram: pressures on verification — design complexity, technology process, reputation, limited resources (engineers, limited budget) and product-to-market pressure: "Get it right! Make it fast!"]
  • 7. Previous attempts (CDG via ML)
    • BN — Pros: good results; approximates the CDG process well. Cons: domain knowledge required; difficult to interpret the learnt knowledge.
    • GA — Pros: mature platform; decent results. Cons: non-universal environment; finds one/few solutions for the entire search space.
    • Markov Models — Pros: excellent bug discovery; approximates the CDG process well. Cons: effort to set up the environment; difficult to interpret the learnt knowledge.
    • GP — Pros: good results; no domain knowledge. Cons: code diversity; finds one/few solutions for the entire search space.
    For details: http://www.cs.bris.ac.uk/Publications/pub_master.jsp?id=2001405
  • 8. Why LCS (XCS) on DV?
    • Adaptive learning systems
      • The problem can be formulated in a range of different ways
      • Designs change over time, and coverage requirements change during a simulation run
    • Develops a complete, accurate and minimal representation of a problem
      • Coverage can be achieved in more than one way, and balanced
    • The rules developed are easy to understand, analyse, combine and alter; no domain knowledge is required
  • 9. Why LCS (XCS) on DV?
    [Chart: XCS generalization on MUX — generalization ratio and time (hours) per experiment on a logarithmic scale, plotted against MUX sizes 6, 11, 20, 37 and 70.]
  • 10. Why not LCS (XCS) on DV?
    • XCS has issues with Boolean problems that require overlapping rules
    • The problem itself is too big for XCS, but it can be scaled down
    • Any future attempt needs to be made noise-proof
    FUTURE RESEARCH WILL TELL!
  • 11. First XCS attempt on DV
    • Single-step problem
    • Learn the relationship between the biases for a test generator (condition) and the coverage (action) they achieve
    • Noiseless environment, as a single randomization seed is used by the TG
    • Both conditions and actions are bit strings; the ternary alphabet {0,1,#} expresses the learnt relationships
    • Standard XCS parameters as in the 2002 XCS algorithmic description
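The ternary matching used here can be sketched in a few lines of Python (the function name is hypothetical): a condition covers a bias vector when every non-# position agrees with the corresponding input bit.

```python
def matches(condition: str, bias_vector: str) -> bool:
    """Check whether a ternary classifier condition covers a bit-string input.

    Each position must be '#' (don't care) or equal the corresponding bit.
    """
    return all(c == '#' or c == b for c, b in zip(condition, bias_vector))

# Example condition over 7 TG bias bits
cond = "0###1##"
print(matches(cond, "0101100"))  # True: bits 0 and 4 agree, the rest are don't-cares
print(matches(cond, "1101100"))  # False: bit 0 is 1, but the condition requires 0
```

During learning, XCS generalizes positions to '#' wherever the bit's value does not affect the predicted coverage, which is what makes the resulting rules compact and readable.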
  • 12. Proposed Solution
  • 13. DV1 Function
  • 14. XCS (orig.) on DV1
  • 15. XCS (impr.) on DV1
  • 16. Proposed Solution
  • 17. DV3 Function
  • 18. XCS (impr.) on DV3
  • 19. Learnt Rules (DV3)

    Classifiers:

    ID  Cond.   : Action  R     E  F     AS     EXP    NUM
    1   0###1## : 0000    1000  0  1.00  26.98  22805  21
    2   ###10## : 0000    0     0  0.48  31.76  390    6
    3   0#110## : 1110    1000  0  0.63  23.31  803    6
    4   01#10## : 1110    1000  0  0.58  26.06  392    2
    5   1#01100 : 0001    1000  0  0.42  9.34   102    3
    6   0#100## : 0010    1000  0  0.50  13.29  1534   2
    7   01#00## : 0010    1000  0  0.82  17.97  3520   8
    8   1###0#0 : 1100    1000  0  0.62  20.85  403    4
    9   ####### : 0011    0     0  1.00  36.16  6285   36
    10  ####### : 0110    0     0  1.00  44.35  6157   30

    • ID1 – which 32 biases to avoid
    • ID2 – wrong, but tells us that the 32 bias vectors will cover at least one signal
    • ID3 & 5 or ID4 & 5 achieve max coverage; longer alternatives are ID 5, 6 & 8 or 5, 7 & 8
    • ID9 & 10 – tell us what cannot be achieved
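The "32 biases" reading of rules ID1 and ID2 follows from counting: a 7-bit condition with five # positions matches 2^5 = 32 bias vectors. A quick check in Python (a sketch, not from the slides):

```python
from itertools import product

def matches(condition: str, bits: str) -> bool:
    """Ternary match: '#' is a don't-care, other positions must agree."""
    return all(c == '#' or c == b for c, b in zip(condition, bits))

# Enumerate all 2^7 bias vectors and count those covered by rule ID1
count = sum(matches("0###1##", "".join(v)) for v in product("01", repeat=7))
print(count)  # 32
```

The fully general conditions of ID9 and ID10 (all #) match every vector by the same logic, which is why they express properties of the whole bias space rather than of particular biases.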
  • 20. Why deal with DV?
    • DV is a hard real-world problem
      • Designs have complex interactions and are becoming more complex
      • Maximize coverage while minimizing the resources spent on it
      • Wicked fitness landscape resembling needle-in-a-haystack or deceptive problems
      • The 80/20 rule applies
    • A chance to compete with other EA and probabilistic ML techniques
    • The problem can be formulated as either multi-step or single-step, using a variety of representations (binary, integers, real numbers, graphs etc.)
  • 21. Fame and Fortune!!!
  • 22. THANK YOU Any Questions?
  • 23. Previous attempts 1
    • Genetic Algorithms: test structure or bias for maximising coverage
      • Pros:
        • Decent results in both code and functional coverage (>70%)
        • Easy to understand the evolved knowledge
        • Mature platforms
      • Cons:
        • Some techniques required domain knowledge (setting the fitness function or tweaking other parameters)
        • Non-universal verification environment
        • Searches for a single solution for the entire search space – not very helpful for DV problems
  • 24. Previous attempts 2
    • Genetic Programming: test structure for maximising coverage by learning DAGs
      • Pros:
        • Good results in code coverage (>90%)
        • The only point of user involvement is the instruction library
        • Mature platform
      • Cons:
        • Earlier versions had problems with code diversity
        • Verification environment mostly for microprocessors
        • Searches for a single solution for the entire search space – not very helpful for DV problems
  • 25. Previous attempts 3
    • Bayesian Networks: a probabilistic network model answering MPE (most probable explanation) questions about the coverage to be achieved
      • Pros:
        • Good results in functional coverage (~90-100%)
        • Approximates the CDG process well
      • Cons:
        • Domain knowledge needed to construct the initial network (though automation techniques have been tried)
        • Verification environment mostly for sub-systems of microprocessors (i.e. doesn't scale to larger systems)
        • Difficult to understand what has been learnt, and difficult to manually improve it later
  • 26. Previous attempts 4
    • Markov Models: a probabilistic network model (FSM) generating stimuli to achieve maximum coverage
      • Pros:
        • Excellent results in bug coverage (100%)
        • Approximates the CDG process very well
      • Cons:
        • Effort in constructing the template files (TG) and activity monitors
        • Difficult to understand what has been learnt, and difficult to manually improve it later