Software Testing - Verification

1. Software Testing: Verification and Validation
   - Verification: “Are we building the product right?”
   - Validation: “Are we building the right product?”
   - Barry Boehm, 1979
2. Verification and Validation Techniques
   - Static techniques
     - Software inspections (against source code)
   - Dynamic techniques
     - Software testing (requires an executable program)
3. Verification and Validation: Static Techniques
   - Software inspections
     - of requirements documents
     - of design documents (design reviews)
     - of source code (code reviews)
     - automated static analysis
4. Verification and Validation: Dynamic Techniques
   - Software testing
     - specification vs. implementation
       - Defect testing [Ch. 20]
     - verifying non-functional requirements (e.g., performance, reliability)
       - Statistical testing [Ch. 21]
     - automated dynamic analysis (where applicable)
5. Verification and Validation: Goals
   - Establish that the software is fit for purpose, not “bug-free”
   - “Good enough” depends on:
     - software function (critical nature?)
     - user expectations
     - the market (competition, price)
6. Testing vs. Debugging
   - Verification and validation
     - detecting and categorizing system defects [example] [bug list]
     - asks “What?”
   - Debugging
     - locating and correcting those defects
     - asks “Why?”
7. Regression Testing
   - Canned test runs that verify no new defects were introduced during a “debugging” session (a minimal sketch follows below)
   - Not exhaustive
   - Targeted to a particular interface
     - components, sub-systems, the integrated system
   - Different levels (lengths) of regression tests
     - targeted regressions
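A minimal sketch in C of what such a canned run might look like. The component under test, `add_sat()`, is a hypothetical example invented for illustration (it does not come from the slides); the point is that the suite re-runs the same fixed checks after every change, including a case that once exposed a defect.

```c
#include <assert.h>
#include <limits.h>
#include <stdio.h>

/* Component under test: saturating addition (hypothetical example). */
static int add_sat(int a, int b) {
    if (a > 0 && b > INT_MAX - a) return INT_MAX;  /* would overflow  */
    if (a < 0 && b < INT_MIN - a) return INT_MIN;  /* would underflow */
    return a + b;
}

/* Canned regression run: the same fixed checks are re-executed after
 * every "debugging" session to confirm no new defects crept in. */
int main(void) {
    assert(add_sat(2, 3) == 5);               /* ordinary case             */
    assert(add_sat(INT_MAX, 1) == INT_MAX);   /* a previously fixed defect */
    assert(add_sat(INT_MIN, -1) == INT_MIN);  /* boundary case             */
    puts("regression suite passed");
    return 0;
}
```

Re-running a suite like this after each change catches regressions at one component interface; longer suites of the same form exercise sub-systems and the integrated system.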
8. The Test Plan
   - Planning should begin right after requirements specification
     - acceptance tests can be written then
   - System and sub-system tests can be written against designs
9. The Test Plan (diagram slide; figure not captured in the transcript)
10. Software Inspections (Code Reviews)
   - >60% of program errors can be detected by code review [Fagan86]
   - >90% if a more formal approach is used (e.g., the “Cleanroom” process) [Mills87]
     - (We’ll talk about Cleanroom later)
11. Software Inspections (Code Reviews)
   - Why are reviews more effective for finding defects in systems/sub-systems (i.e., before acceptance testing)?
     - bugs are often masked by other bugs
     - they leverage domain/programming knowledge
       - inspectors are skilled programmers
   - Common practice: code reviews followed by acceptance testing
     - reviews can also help with the development of tests
12. Software Inspections (Code Reviews)
   - Sample procedure:
     - announce the review meeting in advance (a week?)
     - provide the design document, an implementation overview, and a pointer to the code
     - reviewers read the code (and make notes) before the meeting
     - during the meeting, directives are recorded by a scribe
     - testers/documenters attend too
13. Automated Static Analysis
   - CASE tools that catch program curiosities that are usually indicative of bugs (an illustrative fragment follows below):
     - unreachable code
     - uninitialized variables
     - unreferenced variables
   - Programming-language dependent
     - e.g., LINT (for C)
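As an illustration, here is a contrived C fragment exhibiting all three curiosities. The function is invented for this example; a static checker such as lint, or a modern compiler with warnings enabled (e.g., `gcc -Wall -Wextra`), would typically flag each of them without running the program.

```c
/* Contrived fragment: three "curiosities" a static analyzer flags. */
int suspicious(int n) {
    int unused;            /* unreferenced variable                 */
    int total;             /* uninitialized: read before any write  */
    if (n > 0)
        return total + n;  /* uses the uninitialized value          */
    return 0;
    n = -n;                /* unreachable: follows an unconditional return */
}
```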
14. Automated Dynamic Analysis
   - Tools that do bookkeeping while the program is run/tested (example below)
   - Can find some dynamic problems that the compiler cannot catch (depends on the language…)
   - C/C++ tools: Purify, BoundsChecker
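For contrast with the static case, the defects in the contrived fragment below are perfectly legal to the compiler and only surface at run time. A dynamic-analysis tool such as Purify (or, on Unix-like systems, valgrind) instruments the execution and reports the out-of-bounds write and the memory leak as the program runs.

```c
#include <stdlib.h>

int main(void) {
    int *buf = malloc(10 * sizeof *buf);  /* room for buf[0]..buf[9]    */
    if (buf == NULL) return 1;
    for (int i = 0; i <= 10; i++)         /* off-by-one: writes buf[10] */
        buf[i] = i;
    return 0;                             /* buf never freed: leak      */
}
```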
