
Verification challenges and methodologies - SoC and ASICs


Verification Challenges and Methodologies
UVM Introduction


  1. Shivananda (Shivoo) R Koteshwar, Director, MediaTek
     shivoo.koteshwar@gmail.com / Facebook: shivoo.koteshwar
     BLOG: http://shivookoteshwar.wordpress.com
     SLIDESHARE: www.slideshare.net/shivoo.koteshwar
     Mentor Graphics, Bangalore, Jul 2016
  2. Agenda:
     1. Basics
     2. Verification Challenges
     3. Verification Technologies
     4. Verification Strategies
     5. Verification Methodologies
     6. Skills needed for today's corporate job
     7. Q&A
  3. - Design synthesis: given an I/O function, develop a procedure to manufacture a device using known materials and processes.
     - Verification: predictive analysis to ensure that the synthesized design, when manufactured, will perform the given I/O function.
     - Test: a manufacturing step that ensures that the physical device, manufactured from the synthesized design, has no manufacturing defect.
  4. [Image-only slide]
  5. - Goal: validate a model of the design.
     - The testbench wraps around the design under test (DUT).
     - Inputs provide (deterministic or random) stimulus:
       - Reference signals: clock(s), reset, etc.
       - Data: bits, bit words
       - Protocols: PCI, SPI, AMBA, USB, etc.
     - Outputs capture responses and make checks:
       - Data: bits, bit words
       - Protocols: PCI, SPI, AMBA, USB, etc.
     [Figure: testbench driving the DUT's inputs and observing its outputs]
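To make the structure concrete, here is a minimal SystemVerilog sketch, assuming a hypothetical combinational 8-bit adder DUT named "adder" (the clock and reset reference signals from the slide are omitted to keep it short):

```systemverilog
// Minimal testbench sketch: "adder" is an assumed combinational DUT.
module tb;
  logic [7:0] a, b;
  logic [8:0] sum;

  adder dut (.a(a), .b(b), .sum(sum));   // DUT wrapped by the testbench

  initial begin
    repeat (10) begin
      a = $urandom_range(255);           // random stimulus on the inputs
      b = $urandom_range(255);
      #1;                                // let the outputs settle
      assert (sum == a + b)              // capture the response and check it
        else $error("%0d + %0d != %0d", a, b, sum);
    end
    $finish;
  end
endmodule
```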
  6. Verification is the process of verifying that the transformation steps in the design flow are executed correctly.
     [Figure: design flow from Idea through Algorithm, Architecture/Spec, RTL, Gate, and GDSII to the ASIC and end product, with the transformations checked by Spec Acceptance Review, C-Model Simulation/Code Review, Formal Functional/Timing Verification, ATE, Sign-Off Review, and Product Acceptance Test]
  7. Ensure full conformance with the specification; in particular, avoid false positives, where untested functionality lets a bad design pass.

                          Simulation result
        RTL code      Pass                    Fail
        Good          Tape out!               Debug testbench
        Bad (bug)     ??? (false positive)    Debug RTL code

     A false positive results in shipping a bad design. How do we achieve this goal?
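One practical way to guard against false positives is to make the testbench check actively rather than passively, so a bad design fails loudly instead of passing silently. A hedged sketch, using an illustrative req/gnt handshake and 4-cycle bound that are not from the slides:

```systemverilog
// Always-on protocol check: a broken DUT cannot pass without tripping it.
module req_gnt_checker (input logic clk, rst_n, req, gnt);
  assert property (@(posedge clk) disable iff (!rst_n)
                   req |-> ##[1:4] gnt)   // every request granted in 1..4 cycles
    else $error("req not granted within 4 cycles");
endmodule
```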
  8. - Simulators are the most common and familiar verification tools. They are named simulators because their role is limited to approximating reality.
     - A simulation is never the final goal of a project. The goal of all hardware design projects is to create real physical designs that can be sold and generate profit.
     - Simulators attempt to create an artificial universe that mimics the future real design. This lets designers interact with the design before it is manufactured and correct flaws and problems earlier.
     - Simulators are only approximations of reality.
       - Many physical characteristics are simplified, or even ignored, to ease the simulation task. For example, a digital simulator assumes that the only possible values for a signal are '0', '1', X, and Z. In the physical, analog world, the value of a signal is continuous: there is an infinite number of possible values. In a discrete simulator, events that happen deterministically 5 ns apart may be asynchronous in the real world and may occur randomly.
     - Simulators are at the mercy of the descriptions being simulated.
       - The description is limited to a well-defined language with precise semantics. If that description does not accurately reflect the reality it is trying to model, there is no way for you to know that you are simulating something different from the design that will ultimately be manufactured. Functional correctness and accuracy of models is a big problem, as errors cannot be proven not to exist.
  9. - Simulation requires stimulus.
       - Simulators are not static tools. A static verification tool performs its task on the design without any additional information or action required by the user; linting tools, for example, are static tools. Simulators, on the other hand, require that you provide a facsimile of the environment in which the design will find itself. This facsimile is often called a testbench, or stimulus.
       - The testbench needs to provide a representation of the inputs observed by the design, so the simulator can emulate the design's responses based on its description.
     - The simulation outputs are validated externally, against design intent.
       - The other thing you must not forget is that simulators have no knowledge of your intentions. They cannot determine whether a design being simulated is correct. Correctness is a value judgment on the outcome of a simulation that must be made by you, the designer.
       - Once the design is submitted to an approximation of the inputs from its environment, your primary responsibility is to examine the outputs produced by the simulation of the design's description and determine whether that response is appropriate.
  10. - Simulators are never fast enough.
        - They are attempting to emulate a physical world where electricity travels at the speed of light and transistors switch over one billion times in a second. Simulators are implemented using general-purpose computers that can execute, under ideal conditions, up to 100 million instructions per second.
        - The speed advantage is unfairly and forever tipped in favor of the physical world.
      - Outputs change only when an input changes.
        - One way to optimize the performance of a simulator is to avoid simulating something that does not need to be simulated.
        - The figure shows a 2-input XOR gate. In the physical world, if the inputs do not change (a), the output does not change, even though voltage is constantly applied. Only if one of the inputs changes (b) does the output change.
      - Changes in values, called events, drive the simulation process.
        - The simulator could choose to continuously execute this model, producing the same output value as long as the input values did not change.
        - An opportunity to improve the simulator's performance becomes obvious: do not execute the model while the inputs are constant. Phrased another way: only execute a model when an input changes. The simulation is therefore driven by changes in inputs. If you define an input change as an event, you now have an event-driven simulator.
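As a sketch of what "event-driven" means at the model level, the 2-input XOR from the figure can be written so that its body executes only when an event occurs on an input:

```systemverilog
// Event-driven view of the 2-input XOR: the process body runs only on a
// value change (event) on a or b, never while the inputs are constant.
module xor2 (input logic a, b, output logic y);
  always @(a or b)   // sensitivity list = the events that trigger evaluation
    y = a ^ b;
endmodule
```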
  11. - Cycle-based simulations have no timing information.
        - This great improvement in simulation performance comes at a cost: all timing and delay information is lost. Cycle-based simulators assume that the entire design meets the set-up and hold requirements of all the flip-flops.
        - When using a cycle-based simulator, timing is usually verified with a static timing analyzer.
      - Cycle-based simulators can only handle synchronous circuits.
        - Cycle-based simulators further assume that the active clock edge is the only significant event in changing the state of the design. All other inputs are assumed to be perfectly synchronous with the active clock edge. Therefore, cycle-based simulators can only simulate perfectly synchronous designs.
        - Anything containing asynchronous inputs, latches, or multiple clock domains cannot be simulated accurately. The same restrictions apply to static timing analysis. Thus, circuits that are suitable for cycle-based simulation to verify functionality are also suitable for static timing verification to verify timing.
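For illustration, a sketch of the kind of circuit a cycle-based simulator can handle: all state changes happen on the active clock edge, with no latches or asynchronous inputs (the counter itself is hypothetical):

```systemverilog
// Perfectly synchronous design: the only significant event is posedge clk,
// and even reset is sampled synchronously at that edge.
module sync_counter (input logic clk, rst_n, output logic [3:0] q);
  always_ff @(posedge clk)
    if (!rst_n) q <= '0;
    else        q <= q + 1;
endmodule
```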
  12. - To handle the portions of a design that do not meet the requirements for cycle-based simulation, most cycle-based simulators are integrated with an event-driven simulator.
      - As shown in the figure, the synchronous portion of the design is simulated using the cycle-based algorithm, while the remainder of the design is simulated using a conventional event-driven simulator.
      - Both simulators (event-driven and cycle-based) run together, cooperating to simulate the entire design.
      - Other popular co-simulation environments provide VHDL and Verilog, HDL and C, or digital and analog co-simulation.
  13. Design Errors - Simulation: Practical Problem
  14. - Coverage
        - Code coverage
          - Statement or block coverage
          - Path coverage
          - Expression coverage
        - Functional coverage
      - Verification languages can raise the level of abstraction.
      - The best way to increase productivity is to raise the level of abstraction used to perform a task.
      - VHDL and Verilog are simulation languages, not verification languages.
  15. - VHDL simulation tools can automatically calculate a metric called code coverage (assuming you have licenses for this feature).
      - Code coverage tracks which lines of code or expressions in the code have been exercised.
      - Code coverage cannot detect conditions that are not in the code.
      - Code coverage on a partially implemented design can reach 100%. It cannot detect missing features and many boundary conditions (in particular those that span more than one block).
      - Code coverage is an optimistic metric. In combinational logic code in an HDL, a process may be executed many times during a given clock cycle due to delta-cycle changes on input signals. This can result in several different branches of code being executed. However, only the last branch of code executed before the clock edge has truly been covered.
      - Hence, code coverage cannot be used exclusively to indicate we are done testing.
  16. - Functional coverage is code that observes execution of a test plan. As such, it is code you write to track whether important values, sets of values, or sequences of values that correspond to design or interface requirements, features, or boundary conditions have been exercised.
      - Specifically, 100% functional coverage indicates that all items in the test plan have been tested. Combine this with 100% code coverage and it indicates that testing is done.
      - Functional coverage that examines the values within a single object is called either point coverage or item coverage.
        - One relationship we might look at is different transfer sizes across a packet-based bus. For example, the test plan may require that transfer sizes with the following size or range of sizes be observed: 1, 2, 3, 4 to 127, 128 to 252, 253, 254, or 255.
      - Functional coverage that examines the relationships between different objects is called cross coverage. An example would be examining whether an ALU has done all of its supported operations with every different input pair of registers.
      - VHDL's Open Source VHDL Verification Methodology (OSVVM) provides a package, CoveragePkg, with a protected type that facilitates capturing the data structure and writing functional coverage.
      - Functional coverage provides additional supporting data that the design is tested. It is a supplement to primitive testing methods: directed, algorithmic, file-based, or constrained-random.
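The slide's CoveragePkg is VHDL; as a hedged sketch (keeping one language for the examples here), the same point-coverage and cross-coverage ideas look like this in SystemVerilog covergroup syntax, with illustrative signal names:

```systemverilog
// Point coverage of the transfer-size bins from the test plan, plus a
// cross of ALU operation against both source registers. All signal
// names and widths are assumptions for the sketch.
module coverage_sketch (input logic clk,
                        input logic [7:0] xfer_size,
                        input logic [3:0] alu_op,
                        input logic [4:0] reg_a, reg_b);
  covergroup bus_cg @(posedge clk);
    coverpoint xfer_size {
      bins one    = {1};
      bins two    = {2};
      bins three  = {3};
      bins small  = {[4:127]};
      bins large  = {[128:252]};
      bins top[]  = {253, 254, 255};  // one bin per boundary value
    }
  endgroup

  covergroup alu_cg @(posedge clk);
    op : coverpoint alu_op;
    ra : coverpoint reg_a;
    rb : coverpoint reg_b;
    cross op, ra, rb;                 // every op with every register pair
  endgroup

  bus_cg bcg = new();
  alu_cg acg = new();
endmodule
```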
  17. - Completeness does not imply correctness.
        - Code coverage indicates how thoroughly your verification suite exercises the source code. It does not provide any indication of the correctness of the verification suite.
        - Code coverage should be used to help identify corner cases that were not exercised by the verification suite, or implementation-dependent features that were introduced during implementation.
        - Code coverage is an additional indicator of the completeness of the verification job. It can help increase your confidence that the verification job is complete, but it should not be your only indicator.
      - Code coverage lets you know if you are not done: low coverage numbers indicate that the verification task is not complete. A high coverage number is by no means an indication that the job is over.
      - Some tools can help you reach 100% coverage: there are testbench generation tools that automatically generate stimulus to exercise the uncovered code sections of a design.
      - Code coverage tools can be used as profilers: when developing models for simulation only, where performance is an important criterion, code coverage tools can be used for profiling. The aim of profiling is the opposite of code coverage: to identify the lines of code that are executed most often. These lines become the primary candidates for performance optimization efforts.
  18. - It is quite possible to achieve 100% code coverage but only 50% functional coverage.
        - Here the design is half complete.
      - Equally, it is possible to have 50% code coverage but 100% functional coverage.
        - This indicates that the functional coverage model is missing some key features of the design.
        - It also indicates that the design contains untested code that is not part of the test plan.
        - This can come from an incomplete test plan, extra undocumented features in the design, or "others" branches of case statements that are not exercised in normal hardware operation.
        - Untested features need to either be tested or removed.
        - As a result, even with 100% functional coverage it is still a good idea to use code coverage as a fail-safe for the test plan.
      - Code coverage is quantitative coverage; functional coverage is qualitative coverage.
      - The two coverage approaches are complementary, and high-quality verification will benefit from both.

      Test Done = Test Plan Executed and All Code Executed

      REF: https://www.doulos.com/knowhow/sysverilog/uvm/easier_uvm_guidelines/coverage-driven
  19. [Image-only slide]
  20. IP / Module-Level Verification flow:
      1. Study the DUT and the related specification.
      2. Gather requirements for the features to be verified and set priorities.
      3. Review the requirements with the IP architect/designer (requirements should cover all parameters of the module).
      4. Design the test infrastructure on paper / in a document (includes re-usable verification components).
      5. Review the TB architecture with the verification team.
      6. Build the test infrastructure (includes re-usable verification components).
      7. Code testcases per the test-bench plan; also code functional coverpoints / assertions.
      8. Complete verification such that functional coverage is 100%, and log the code coverage numbers.
      9. Review the code coverage numbers with the designer to eliminate dead-code possibilities.
      10. Sign off module-level verification by checking in the files holding the relevant data, such as logs.
  21. SoC-Level Verification flow:
      1. Study the SoC and the related specification.
      2. Gather requirements for the critical data paths and set priorities.
      3. Review the requirements with the IP architect/designer (requirements should cover all parameters of the module).
      4. Design the test infrastructure on paper / in a document, and identify testcases that can be re-used (includes re-usable verification components).
      5. Review the TB architecture with the verification team.
      6. Build the test infrastructure (includes re-usable verification components).
      7. Code testcases per the test-bench plan; also code functional coverpoints / assertions.
      8. Complete verification such that functional coverage is 100%, and log the code coverage numbers.
      9. Review the code coverage numbers with the designer to eliminate dead-code possibilities.
      10. Sign off SoC verification by checking in the files holding the relevant data, such as logs.
  22. TESTBENCH ENVIRONMENT / ARCHITECTURE
  23. - Accellera Systems Initiative is an independent, not-for-profit organization dedicated to creating, supporting, promoting, and advancing system-level design, modeling, and verification standards for use by the worldwide electronics industry.
      - www.accellera.org
  24. - Verification languages can raise the level of abstraction.
        - The best way to increase productivity is to raise the level of abstraction used to perform a task.
      - VHDL and Verilog are simulation languages, not verification languages.
        - Verilog was designed with a focus on describing low-level hardware structures. It does not provide support for high-level data structures or object-oriented features.
        - VHDL was designed for very large design teams. It strongly encapsulates all information and communicates strictly through well-defined interfaces.
        - Very often, these limitations get in the way of an efficient implementation of a verification strategy. Neither integrates easily with C models.
        - This creates an opportunity for verification languages designed to overcome the shortcomings of Verilog and VHDL. However, using a verification language requires additional training and tool costs.
      - Proprietary verification languages exist: e/Specman from Verisity, VERA from Synopsys, Rave from Chronology, etc.
  25. - Provides a reusable, standard infrastructure in the form of pre-defined base classes. These can be extended and enhanced per user needs.
      - Defines rules for creating behavioral models, also known as verification components (OVC/UVC).
      - Defines standards for modeling input stimulus at a higher level using Transaction-Level Modeling (TLM).
      - Defines rules for a layered structure of testbenches.
      - In summary: Methodology = standardization of the way complex testbenches are created, with constrained-random test vectors.
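A minimal sketch of "extend the pre-defined base classes", using UVM's uvm_driver and its TLM pull port; the transaction type my_txn and the driving details are hypothetical:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical transaction for the sketch.
class my_txn extends uvm_sequence_item;
  rand bit [7:0] data;
  `uvm_object_utils(my_txn)
  function new(string name = "my_txn"); super.new(name); endfunction
endclass

// A verification component built by extending the uvm_driver base class.
class my_driver extends uvm_driver #(my_txn);
  `uvm_component_utils(my_driver)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);  // TLM pull from the sequencer
      // ... drive req.data onto the DUT interface here ...
      seq_item_port.item_done();
    end
  endtask
endclass
```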
  26. - OVM
        - Open Verification Methodology
        - Derived mainly from the URM (Universal Reuse Methodology), which was, to a large part, based on the eRM (e Reuse Methodology) for the e verification language developed by Verisity Design in 2001.
        - The OVM also brings in concepts from the Advanced Verification Methodology (AVM).
        - SystemVerilog
      - RVM
        - Reference Verification Methodology
        - A complete set of metrics and methods for performing functional verification of complex designs.
        - The SystemVerilog implementation of the RVM is known as the VMM (Verification Methodology Manual).
      - OVL
        - Open Verification Library
        - The OVL library of assertion checkers is intended to be used by design, integration, and verification engineers to check for good/bad behavior in simulation, emulation, and formal verification.
        - Accellera - http://www.accellera.org/downloads/standards/ovl/
      - UVM
        - Standard Universal Verification Methodology
        - Accellera - http://www.accellera.org/downloads/standards/uvm
        - SystemVerilog
      - OSVVM
        - Open Source VHDL Verification Methodology
        - VHDL
        - Accellera

      OVC: OVM Verification Component; UVC: Universal Verification Component
  27. - C-type data types: int, typedef, struct, union, enum
      - Dynamic data types: classes, dynamic queues, dynamic arrays
      - New operators and built-in methods
      - Enhanced flow control: foreach, return, break, continue
      - Inter-process synchronization: semaphores, mailboxes, event extensions
      - Assertions and coverage
      - Clocking blocks
      - Direct Programming Interface (DPI), VPI
      - Hardware-specific procedures

      REF: http://www.eetimes.com/document.asp?doc_id=1277143
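A short sketch exercising a few of the listed features together: a class with rand fields, an enum, a constraint, and randomize(). All names are illustrative:

```systemverilog
// Constrained-random class using enum, rand, and a constraint block.
typedef enum {READ, WRITE} op_e;

class packet;
  rand op_e        op;
  rand int unsigned len;
  constraint legal_len { len inside {[1:255]}; }  // keep lengths legal
endclass

module feature_demo;
  initial begin
    packet p = new();
    repeat (4) begin
      if (!p.randomize()) $error("randomize failed");
      $display("op=%s len=%0d", p.op.name(), p.len);
    end
  end
endmodule
```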
  28. - UVM (Universal Verification Methodology) is a SystemVerilog-based verification methodology.
      - UVM consists of a defined methodology for architecting modular testbenches for design verification.
      - UVM has a library of classes that helps in designing and implementing modular testbench components and stimulus. This enables re-use of testbench components and stimulus within and across projects, development of verification IP, easier migration from simulation to emulation, etc.
      - It relies on strong, proven industry foundations. The core of its success is adherence to a standard (i.e. architecture, stimulus creation, automation, and factory usage standards).
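As a hedged sketch of how the class library's phasing and factory drive a test, a minimal UVM test skeleton (test name and message are illustrative):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Smallest possible UVM test: registered with the factory, run by phasing.
class smoke_test extends uvm_test;
  `uvm_component_utils(smoke_test)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
  task run_phase(uvm_phase phase);
    phase.raise_objection(this);               // keep simulation alive
    `uvm_info("SMOKE", "hello from UVM", UVM_LOW)
    phase.drop_objection(this);
  endtask
endclass

module top;
  initial run_test();   // test selected at runtime via +UVM_TESTNAME=smoke_test
endmodule
```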
  29. The following can be automated using UVM:
      - Coverage-Driven Verification (CDV) environments
      - Automated stimulus generation
      - Independent checking
      - Coverage collection
  30. [Figure: SV testbench architecture vs. UVM testbench architecture]
  31. [Figure: three layers]
      - SystemVerilog language: syntax, RTL, OOP, classes, interfaces
      - Verification concepts: constrained random, coverage driven, transaction level, sequences, scoreboards
      - Methodology: base classes, use cases, configuration-db, phases
  32. - SystemVerilog language syntax & semantics are a pre-requisite.
      - All SystemVerilog experience is directly relevant for UVM (design/RTL, AVM, VMM, etc.).
      - But be aware that the verification part of the language is much bigger than the part used for design!
        - Design: RTL, blocks, modules, vectors, assignments, arrays, etc.
        - Verification: signals, interfaces, clocking blocks, scheduling, functions, tasks, OOP, classes, random constraints, coverage, queues, etc.
      - All verification experience is directly transferable to UVM.
  33. [Image-only slide]
  34. - Modularity and reusability: the methodology is designed as modular components (driver, sequencer, agent, env, etc.) to enable re-use at different levels of verification and across projects.
      - Separating tests from testbenches: tests, in terms of stimulus/sequences, are kept separate from the actual testbench hierarchy, so stimulus can be re-used across different units or across projects.
      - Simulator independence: the base class library and the methodology are supported by all simulators, so there is no dependence on any specific simulator.
      - Sequence-based stimulus generation: sequences can be developed in several ways, including randomization, layered sequences, and virtual sequences, which provides good control and rich stimulus-generation capability.
      - Configuration mechanisms simplify the configuration of objects deep in the hierarchy. The configuration mechanism (the UVM config database) helps in easily configuring testbench components based on the verification environment using them, without worrying about how deep a component is in the testbench hierarchy.
      - Factory mechanisms simplify the modification of components. Creating each component via the factory enables it to be overridden in different tests or environments without changing the underlying code base.
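A hedged sketch of the last two mechanisms together, building on the my_driver sketch from the earlier slide; bus_if, error_driver, and the path strings are all illustrative:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

interface bus_if;   // hypothetical DUT interface, empty for the sketch
endinterface

// Hypothetical error-injecting variant of the earlier my_driver sketch.
class error_driver extends my_driver;
  `uvm_component_utils(error_driver)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
endclass

class err_test extends uvm_test;
  `uvm_component_utils(err_test)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Factory override: every my_driver created in this test becomes an
    // error_driver, with no change to the underlying environment code.
    my_driver::type_id::set_type_override(error_driver::get_type());
  endfunction
endclass

module top;
  bus_if bus ();
  initial begin
    // Configuration mechanism: the virtual interface reaches any matching
    // component, however deep it sits in the testbench hierarchy.
    uvm_config_db #(virtual bus_if)::set(null, "uvm_test_top.*", "vif", bus);
    run_test("err_test");
  end
endmodule
```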
  35. - Steep learning curve: for anyone new to the methodology, the learning curve to understand all the details and the library is very steep.
      - Still developing and not perfect/stable: the methodology is still developing and has a lot of overhead that can sometimes make simulation appear slow, and it may still have some bugs.
  36. [Figure: methodology lineage - Reference Verification Methodology, e Reuse Methodology, Universal Reuse Methodology, Advanced Verification Methodology, Verification Methodology Manual, Open Verification Methodology]
  37. - Run the most important tests first when you get a new build.
      - Do not start your test pass over every time you receive a new build.
      - Regression tests that have already been run many times are unlikely to reveal new bugs. If your testcases are fully automated, by all means run all of them for each build.
      - Prioritize tests into "must-pass" types with a more focused list of tests, which can reduce regression time. Major builds will warrant running all testcases.
      - Automate whenever it makes sense to do so.
  38. - Automation is a means of reducing the manual effort of running repetitive tasks such as regressions.
      - Automation can also be applied to creating testbenches, so that a standard infrastructure is maintained across the team.
      - This can be done using Perl scripts.
      - Why use Perl?
        - Free, and works with most UNIX and Linux versions.
        - Easy to work with, with a smaller learning curve.
        - Advanced Perl with OOP support makes scripting easier.
  39. - Scripting: Perl, Python, C++
      - Languages and methodologies: Verilog, VHDL, SystemVerilog, UVM
      - Problem-solving and debugging skills
      - Diligent and methodical
      - Documentation skills
      - Reading skills!
      - Be up to date on standards and adjacent technologies
      - Don't be a generalist... be a specialist!
      - Assess yourself: http://www.slideshare.net/RamdasMozhikunnath/exercises-on-advances-in-verification-methodologies
  40. Visit my slideshare to view all these presentations.
  41. Shivananda (Shivoo) R Koteshwar, Director, MediaTek
      shivoo.koteshwar@gmail.com / Facebook: shivoo.koteshwar
      BLOG: http://shivookoteshwar.wordpress.com
      SLIDESHARE: www.slideshare.net/shivoo.koteshwar
