Software Testing 2IW30 - lecture 1.1

  • The Humphreys quote comes from: http://www.e-magazineonline.com/FEA/FEA011304.htm

    1. IPA Lentedagen 2006
       Testing for Dummies
       Judi Romijn, [email_address], OAS, TU/e
    2. Outline
       • Terminology: what is an error/bug/fault/failure? What is testing?
       • Overview of the testing process
         • concept map
         • dimensions
         • topics of the Lentedagen presentations
    3. What is...
       • error/fault/bug: something wrong in the software
       • failure:
         • a manifestation of an error (observable in the software's behaviour)
         • something wrong in the software's behaviour (it deviates from the requirements)

       Example:
         requirements: for input i, give output 2*i^3 (so 6 yields 432)
         software: i=input(STDIN); i=double(i); i=power(i,3); output(STDOUT,i);
         output (verbose):
           input: 6
           doubling input..
           computing power..
           output: 1728
       The misplaced doubling (before cubing instead of after) is the error;
       the output 1728 instead of the required 432 is the failure.
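The slide's example can be made concrete in a short runnable sketch (Python here for illustration; the slide's own pseudocode is language-neutral):

```python
# The requirement: for input i, give output 2 * i**3 (so 6 yields 432).
def required(i):
    return 2 * i**3

# The software on the slide: it doubles the input *before* cubing.
# The misplaced doubling is the error.
def software(i):
    i = 2 * i        # "doubling input.."
    i = i**3         # "computing power.."
    return i

print(software(6))   # 1728, not the required 432: this is the failure
```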
    4. What is...
       • testing: by experiment,
         • find errors in software (Myers, 1979)
         • establish the quality of software (Hetzel, 1988)
       • a successful test:
         • finds at least one error (test-to-fail)
         • passes, i.e. the software works correctly (test-to-pass)
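The two notions of a successful test can be sketched as two kinds of test code (Python; the function under test and the chosen inputs are invented for this illustration):

```python
# Hypothetical unit under test, implementing the requirement 2 * i**3.
def f(i):
    return 2 * i**3

# Test-to-pass: confirm the software works correctly on a typical input.
assert f(6) == 432

# Test-to-fail: deliberately probe unusual inputs (zero, negative, large),
# hoping to expose an error; such a test "succeeds" when it finds one.
for i in (0, -1, 10**6):
    assert f(i) == 2 * i**3
```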
    5. What's been said?
       • Dijkstra: testing can show the presence of bugs, but not their absence.
       • Beizer's 1st law (the pesticide paradox): every method you use to prevent
         or find bugs leaves a residue of subtler bugs, for which other methods
         are needed.
       • Beizer's 2nd law: software complexity grows to the limits of our ability
         to manage it.
       • Beizer: testers are not better at test design than programmers are at
         code design.
       • Humphreys: coders introduce bugs at the rate of 4.2 defects per hour of
         programming. If you crack the whip and force people to move more
         quickly, things get even worse.
       • ...
       • Developing software and testing it are truly difficult jobs!
       • Let's see what goes on in the testing process.
    6. Concept map of the testing process
       [figure: concept map]
    7. Dimensions of software testing
       • What is the surrounding software development process?
         (V-model/agile, unit/system/user level, planning, documentation, ...)
       • What is tested?
         • software characteristics (design/code/binary, embedded?, language, ...)
         • requirements (functional/performance/reliability/...,
           behaviour- or data-oriented, precision)
       • Which tests?
         • purpose (kind of coding errors, missing/additional requirements,
           development/regression)
         • technique (adequacy criterion: how to generate how many tests)
         • assumptions (limitations, simplifications, heuristics)
       • How to test? (manual/automated, platform, reproducible)
       • How are the results evaluated? (quality model, priorities, risks)
       • Who performs which task? (programmer, tester, user, third party)
         • test generation, implementation, execution, evaluation
    8. Dimensions + concept map
       [figure: concept map annotated with dimension numbers 1-6]
    9. 1: Test process in software development
       • V-model: each development phase pairs with a test level:
           requirements    <-> acceptance test
           specification   <-> system test
           detailed design <-> integration test
           code            <-> unit test
           implementation (at the bottom of the V)
    10. 1: Test process in software development
        • Agile/spiral model: [figure]
    11. 1: Test process in software development
        Topics in the Lentedagen presentations:
        • integration of testing in the entire development process with TTCN-3
          • standardized language
          • different representation formats
          • architecture allowing for tool plugins
        • test process management for manufacturing systems (ASML)
          • integration approach
          • test strategy
    12. 2: Software
        • (phase) unit vs. integrated system
        • (language) imperative / object-oriented / hardware design / binary / ...
        • (interface) data-oriented / interactive / embedded / distributed / ...
    13. 2: Requirements
        • functional:
          • the behaviour of the system should be correct
          • requirements can be precise, but often are not
        • non-functional:
          • performance, reliability, compatibility, robustness
            (stress/volume/recovery), usability, ...
          • requirements are possibly quantifiable, and always vague
    14. 2: Requirements
        Topics in the Lentedagen presentations:
        • models: process algebra, automata, labelled transition systems, Spec#
        • coverage:
          • semantic:
            • by formal argument (see test generation)
            • by estimating potential errors and assigning weights
          • syntactic
          • risk-based (likelihood/impact)
    15. 3: Test generation: purpose
        What errors to find? Related to the software development phase:
        • unit phase: typical typos, functional mistakes
        • integration: interface errors
        • system/acceptance: errors w.r.t. the requirements
          • unimplemented required features:
            'the software does not do all it should do'
          • implemented non-required features:
            'the software does things it should not do'
    16. 3: Test generation: technique
        Dimensions:
        • black box (we do not have access to the software under test):
          data-based techniques
        • white box (we have access to the software under test):
          structure-based techniques
        • error seeding, typical errors, efficiency, ...
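A minimal illustration of the two ends of the spectrum (the `clamp` function and its test inputs are invented for this sketch):

```python
# Hypothetical unit under test: clamp x into the interval [lo, hi].
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Black-box (data-based): tests derived from the requirement alone,
# one per input equivalence class (below, inside, above the interval).
assert clamp(-5, 0, 10) == 0
assert clamp(3, 0, 10) == 3
assert clamp(99, 0, 10) == 10

# White-box (structure-based): tests derived from the code itself,
# aiming e.g. at branch coverage; here the same three inputs already
# execute every branch of clamp.
```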
    17. 3: Test generation
        Assumptions and limitations:
        • single/multiple fault: clustering/dependency of errors
        • perfect repair
        • heuristics:
          • knowledge about usual programming mistakes
          • the history of the software
          • the pesticide paradox
        • ...
    18. 3: Test generation
        Topics in the Lentedagen presentations:
        • mostly black box, based on behavioural requirements:
          process algebra, automata, labelled transition systems, Spec#
        • techniques:
          • assume a model of the software is possible
          • scientific basis: a formal relation between the requirements
            and the model of the software
        • data values: constraint solving
        • synchronous vs. asynchronous communication
        • timing/hybrid aspects
        • on-the-fly generation
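A toy sketch of what black-box generation from a behavioural model can look like: random walks over a labelled transition system yield abstract test sequences, in the spirit of on-the-fly generation. The vending-machine model below is invented for illustration.

```python
import random

# Invented labelled transition system: state -> list of (action, next state).
lts = {
    "idle": [("coin", "paid")],
    "paid": [("coffee", "idle"), ("refund", "idle")],
}

def generate_test(start="idle", length=4, seed=0):
    """Walk the model at random, collecting the actions as a test sequence."""
    rng = random.Random(seed)
    state, trace = start, []
    for _ in range(length):
        action, state = rng.choice(lts[state])
        trace.append(action)
    return trace

print(generate_test())
```

Executing such a sequence against the real implementation and comparing the observed behaviour with the model is then the job of the test execution and evaluation steps.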
    19. 4: Test implementation & execution
        • implementation:
          • platform
          • batch?
          • inputs, outputs, coordination, ...
        • execution:
          • actual duration
          • manual/interactive or automated
          • in parallel on several systems
          • reproducible?
    20. 4: Test implementation & execution
        Topics in the Lentedagen presentations:
        • intermediate language: TTCN-3
        • timing coordination
        • from abstract tests to concrete executable tests:
          • automatic refinement
          • constraint solving for data parameters
        • on-the-fly: automated, iterative
    21. 5: Who performs which task
        • software producer: programmer, testing department
        • software consumer: end user, management
        • third party: externally hired testers, certification organization
    22. 6: Result evaluation
        • per test:
          • pass/fail result
          • diagnostic output
          • which requirement was (not) met
        • statistical information:
          • coverage (program code, requirements, input domain, output domain)
          • progress of testing (#errors found per unit of test time: decreasing?)
        • decide to:
          • stop (satisfied)
          • create/run more tests (not yet enough confidence)
          • adjust the software and/or requirements, then create/run more tests
            (errors to be repaired)
    23. 6: Result evaluation
        Topics in the Lentedagen presentations:
        • translate output back to abstract requirements (possibly on-the-fly)
        • statistical information:
          • cumulative times at which failures were observed
          • fit a statistical curve
          • quality judgement: X% of errors found
          • predict how many errors are left and how long to continue
          • assumptions: total #errors, perfect repair, single fault
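One classic way to "predict how many errors are left" is the error-seeding estimate (Mills), which relies on exactly the kind of assumptions the slide lists (independent faults, perfect repair). The numbers below are made up for illustration:

```python
# Error seeding (hypothetical numbers): inject known faults, run the
# tests, and assume real and seeded faults are equally likely to be found.
seeded = 20          # faults deliberately injected
seeded_found = 15    # injected faults rediscovered by the tests
real_found = 30      # genuine faults found by the same tests

# If the tests found 15/20 of the seeded faults, assume they also found
# about 3/4 of the genuine ones.
estimated_total = real_found * seeded // seeded_found   # 30 * 20 / 15 = 40
estimated_left = estimated_total - real_found
print(estimated_left)   # 10 genuine faults estimated to remain
```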
    24. Dimensions + concept map
        [figure: concept map annotated with dimension numbers 1-6]
    25. Hope this helps...
        Enjoy the Lentedagen!
