Execution-Based Verification and Validation: Introduction



    Slide 1: Part III: Execution-Based Verification and Validation
      Katerina Goseva-Popstojanova
      Lane Department of Computer Science and Electrical Engineering
      West Virginia University, Morgantown, WV
      [email_address] edu
      www.csee.wvu.edu/~katerina
    Slide 2: Outline
      - Introduction
        - Definitions, objectives, and limitations
        - Testing principles
        - Testing criteria
      - Testing techniques
        - Black box testing
        - White box testing
        - Fault based testing
          - Mutation testing
          - Fault injection
    Slide 3: Outline
      - Testing levels
        - Unit testing
        - Integration testing
          - Top-down
          - Bottom-up
      - Regression testing
      - Validation testing
    Slide 4: Outline
      - Non-functional testing
        - Configuration testing
        - Recovery testing
        - Safety testing
        - Security testing
        - Stress testing
        - Performance testing
    Slide 5: Software Quality Assurance
      - Quality of software is the extent to which the software satisfies its specifications; it is not "excellence"
      - Independence between the development team and the SQA group
        - Neither manager should be able to overrule the other
      - Does an independent SQA group add considerably to the cost of software development?
    Slide 6: Software Verification & Validation
      - Testing is an integral part of the software process and must be carried out throughout the life cycle
        - Verification
          - Determine whether each phase was completed correctly
          - "Are we building the product right?"
        - Validation
          - Determine whether the product as a whole satisfies its requirements
          - "Are we building the right product?"
    Slide 7: V&V and the software life cycle
      - Requirements specification: determine test strategy; test the requirements specification; generate functional test data
      - Design: check consistency between design and requirements specification; evaluate the software architecture; test the design; generate structural and functional test data
      - Implementation: check consistency between design and implementation; test the implementation; generate structural and functional test data; execute tests
      - Maintenance: repeat the above tests in accordance with the degree of redevelopment
    Slide 8: Cost of software life cycle phases
      - Requirements analysis: 3%
      - Specification: 3%
      - Design: 5%
      - Coding: 7%
      - Testing: 15%
      - Maintenance: 67%
    Slide 9: Cost of finding and fixing faults
      [Figure: the cost of finding and fixing a fault rises steeply from the requirements phase through implementation to deployment]
    Slide 10: Cost of finding and fixing faults
      - Changing a requirements document during its first review is inexpensive; changing requirements after code has been written costs much more, because the code must be rewritten
      - Fixing faults is much cheaper when programmers find their own errors: there is no communication cost, no need to explain the error to anyone else or enter it into a fault tracking database, testers and managers do not have to review the fault status, and the fault does not block or corrupt anyone else's work
      - Fixing a fault before releasing a program is much cheaper than fixing it during maintenance, or than bearing the consequences of a failure in the field
    Slide 11: Fault & Failure
      - Definitions
        - Failure: a departure from the specified behavior
        - Fault: a defect in the software that, when executed under particular conditions, causes a failure
      - What we observe during testing are failures
        - A failure may be caused by more than one fault
        - A fault may cause different failures
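To make the fault/failure distinction concrete, here is a small sketch (the function and its defect are invented for illustration, not taken from the slides): the fault is present in every execution, but a failure is observed only when particular inputs exercise the defective logic.

```python
def days_in_month(month, year):
    # Fault: leap years are ignored. The defect is in the code on
    # every call, but it produces a wrong output (a failure) only
    # under particular conditions: February of a leap year.
    lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return lengths[month - 1]

assert days_in_month(1, 2024) == 31  # correct in spite of the fault
assert days_in_month(2, 2023) == 28  # correct in spite of the fault
assert days_in_month(2, 2024) == 28  # failure: the specified value is 29
```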
    Slide 12: Execution-based testing
      - Two types of testing
        - Non-execution based (walkthroughs and inspections)
        - Execution based
      - Execution-based testing is the process of inferring certain behavioral properties of a product based, in part, on the results of executing the product in a known environment with selected inputs
        - Inferring: like looking for a black cat in a dark room
        - Known environment, selected inputs: e.g., real-time software tested in a simulator
    Slide 13: Test data and test cases
      - Test data: inputs which have been devised to test the system
      - Test cases: inputs to test the system together with the outputs predicted from these inputs if the system operates according to its specification
      - All test cases must be
        - Planned beforehand, including the expected output
        - Retained afterwards
    Slide 14: Structure of the software test plan
      - Testing process: description of the major phases of the testing process
      - Requirements traceability: testing should be planned so that all requirements are individually tested
      - Tested items: the products which are to be tested should be specified
      - Testing schedule: overall testing schedule and resource allocation for this schedule
    Slide 15: Structure of the software test plan
      - Test recording procedures: the results of the tests must be systematically recorded
      - Hardware and software requirements: software tools required and estimated hardware utilization
      - Constraints: constraints affecting the testing process, such as staff shortages
    Slide 16: Testing process
    Slide 17: Testing workbenches
      - Testing is expensive
      - Testing workbenches provide a range of tools to reduce the time required and the total testing cost
    Slide 18: Debugging
      - When software testing detects a failure, debugging is the process of locating and fixing the fault
      [Figure: the debugging cycle: test cases drive the execution of tests, which produces results; debugging of failed results yields suspected causes, additional tests lead to identified causes, and the resulting corrections are checked by regression tests]
    Slide 19: Simple non-software example
      - A lamp in my house does not work
      - If nothing in the house works, the cause must be in the main circuit breaker or outside
      - I look around to see whether the neighborhood is blacked out
      - I plug the suspect lamp into a different socket and a working appliance into the suspect circuit
    Slide 20: Debugging approaches
      - Brute force: the most common and least efficient approach
      - Backtracking: effective for small programs; beginning from the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found
      - Cause elimination: a list of all possible causes is developed and tests are conducted to eliminate each; if initial tests indicate that a particular cause shows promise, the data are refined in an attempt to isolate the bug
    Slide 21: What should be tested
      - Utility
      - Correctness
      - Robustness
      - Non-functional testing
        - Configuration testing
        - Recovery testing
        - Safety testing
        - Security testing
        - Stress testing
        - Performance testing
    Slide 22: Utility
      - Utility: the extent to which a user's needs are met when a correct product is used under conditions permitted by its specifications
      - Does the product meet the user's needs?
        - Functionality
        - Ease of use
        - Cost-effectiveness
    Slide 23: Correctness
      - A software product is correct if it satisfies its specification when operated under permitted conditions
      - What if the specifications themselves are incorrect?
    Slide 24: Correctness of specifications
      - Specification for a sort
      - A function trickSort which satisfies this specification
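The slide's code is shown as images that did not survive extraction; the following is a plausible sketch of the trickSort idea (the function body and the wording of the specification are assumptions). If the specification only requires that the output be in nondecreasing order, a function can satisfy it without sorting at all:

```python
# Assumed (incomplete) specification: "after the call, the array is
# in nondecreasing order". Nothing relates the output to the input.
def trick_sort(a):
    # Satisfies the specification above without doing any sorting:
    # an all-zero array is trivially in nondecreasing order.
    for i in range(len(a)):
        a[i] = 0
    return a

out = trick_sort([3, 1, 2])
assert all(out[i] <= out[i + 1] for i in range(len(out) - 1))  # "sorted"
```

The original values are gone, yet the (flawed) specification is satisfied.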
    Slide 25: Correctness of specifications
      - Incorrect specification for a sort
      - Corrected specification for the sort
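The corrected specification can be written as an executable check (a sketch; the predicate name is invented): a correct sort must produce output that is both ordered and a permutation of the input.

```python
from collections import Counter

def satisfies_corrected_spec(inp, out):
    # Corrected specification: the output must be in nondecreasing
    # order AND be a permutation of the input.
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    is_permutation = Counter(out) == Counter(inp)
    return ordered and is_permutation

assert satisfies_corrected_spec([3, 1, 2], [1, 2, 3])      # a real sort passes
assert not satisfies_corrected_spec([3, 1, 2], [0, 0, 0])  # an all-zero output fails
```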
    Slide 26: Correctness
      - Correctness of a product is meaningless if its specification is incorrect
      - Correctness is NOT sufficient
    Slide 27: Robustness
      - A software product is robust if it behaves satisfactorily on invalid inputs
        - Deliberately test the product on invalid inputs (error-based testing)
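A minimal sketch of error-based testing (the function and the invalid inputs are invented for illustration): a robust product rejects invalid inputs with a clean, specified error instead of crashing or returning nonsense.

```python
def parse_age(text):
    # Robust behavior: every invalid input is rejected with a
    # well-defined ValueError rather than an arbitrary crash.
    try:
        age = int(text)
    except (TypeError, ValueError):
        raise ValueError(f"not an integer: {text!r}")
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

# Error-based testing: deliberately feed invalid inputs and check
# that the product fails in the specified, controlled way.
for bad in ["abc", None, "-3", "999"]:
    try:
        parse_age(bad)
        raised = False
    except ValueError:
        raised = True
    assert raised
assert parse_age("42") == 42
```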
    Slide 28: Testing principles
      - All tests should be traceable to the requirements definition and specification
      - Tests should be planned long before testing begins
      - The Pareto principle applies to software testing: 80% of all faults detected during testing will likely be traceable to 20% of the program modules
      - Exhaustive testing is not possible
        - The number of possible input data is exceptionally large
        - The number of path permutations for even a moderately sized program is exceptionally large
    Slide 29: Example
      - A simple program like
          for i from 1 to 100 do
            print( if a[i] = true then 1 else 0 endif );
        has 2^100 different outcomes; exhaustively testing this program would take 3 x 10^14 years
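The arithmetic behind the slide can be checked in a few lines (the testing rate of 10^8 cases per second is an assumption chosen to match the slide's order of magnitude; it is not stated on the slide):

```python
outcomes = 2 ** 100                 # one outcome per truth assignment of a[1..100]
rate = 10 ** 8                      # assumed test executions per second
seconds_per_year = 3600 * 24 * 365
years = outcomes / (rate * seconds_per_year)
print(f"{outcomes:.2e} outcomes, roughly {years:.0e} years to test them all")
```

At this rate the figure comes out on the order of 10^14 years, consistent with the slide.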
    Slide 30: Limitations of testing
      - Dijkstra, 1972: "Testing can be a very effective way to show the presence of faults, but it is hopelessly inadequate for showing their absence"
      - What if no faults are detected during testing?
        - Good news?
        - Bad news?
    Slide 31: Test objectives
      - Provoking failures and detecting faults
      - Increasing the confidence in failure-free behavior
    Slide 32: Test adequacy criteria
      - A test adequacy criterion can be used as
        - A stopping rule: when has sufficient testing been done?
          - Statement coverage criterion: stop testing once all statements have been executed
        - A measurement: a mapping from the test set to the interval [0, 1]
          - What percentage of statements has been executed?
        - A test case generator
          - If 100% statement coverage has not yet been achieved, select an additional test case that covers one or more statements not yet tested
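Statement coverage as a measurement can be sketched in a few lines (the tracing helper is an illustration, not a production coverage tool such as coverage.py):

```python
import sys

def traced_lines(func, *args):
    # Collect the line numbers executed inside func: a tiny sketch
    # of how statement coverage can be measured.
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(x):  # the unit under test: three executable statements
    if x < 0:
        return "negative"
    return "non-negative"

partial = traced_lines(classify, 5)          # one test case covers 2 of 3 statements
full = partial | traced_lines(classify, -5)  # the added case reaches the third
assert len(full) - len(partial) == 1
```

The coverage measure here is `len(full)` divided by the number of executable statements; the second test case is exactly the kind of addition a test case generator would select.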
    Slide 33: Test adequacy criteria
      - Test adequacy criteria are closely linked to test techniques
        - A coverage-based adequacy criterion (e.g., statement coverage) does not help us assess whether all error-prone points have been tested
    Slide 34: Strategic issues
      - Specify product requirements in a quantifiable manner long before testing commences
      - State testing objectives explicitly (e.g., coverage, mean time to failure, the cost to find and fix faults)
      - Understand the users of the software and develop a profile for each user category
      - Build robust software that is designed to test itself
    Slide 35: Strategic issues
      - Use reviews (walkthroughs and inspections) as a filter prior to execution-based testing
      - Develop a continuous-improvement approach for the testing process
    Slide 36: Testing techniques
      - Black box testing, also called functional or specification-based testing: test cases are derived from the specification; the internal structure of the software is not considered
      - White box testing, also called structural or glass box testing: the internal structure of the software is considered when deriving test cases; test adequacy criteria are specified in terms of coverage (statements, branches, paths, etc.)
    Slide 37: Testing techniques
      - Fault-based techniques
        - Fault injection: artificially seed a number of faults in the software
        - Mutation testing: a (large) number of variants (mutants) of the software is generated; each variant differs slightly from the original version
      - Experiments indicate that there is no single "best" testing technique
        - Different techniques tend to reveal different types of faults
        - Using multiple testing techniques results in the discovery of more faults
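A minimal sketch of the mutation-testing idea (the functions are invented for illustration): a mutant differs from the original by one small change, and a test set is judged stronger if some test "kills" the mutant by distinguishing its output from the original's.

```python
def add(a, b):           # original unit under test
    return a + b

def add_mutant(a, b):    # mutant: the "+" operator replaced by "-"
    return a - b

# A weak test set fails to kill the mutant: on these inputs the
# mutant and the original are indistinguishable.
weak = [(0, 0), (5, 0)]
assert all(add(a, b) == add_mutant(a, b) for a, b in weak)

# Adding a test case that distinguishes them kills the mutant,
# which is evidence the augmented test set exercises "+" properly.
assert add(2, 3) != add_mutant(2, 3)
```

In practice a mutation tool generates many such variants automatically and reports the fraction of mutants killed as an adequacy score.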