Software Testing 2IW30 - lecture 1.1
  • The Humphreys quote comes from: http://www.e-magazineonline.com/FEA/FEA011304.htm
  • Transcript

    • 1. IPA Lentedagen 2006: Testing for Dummies. Judi Romijn [email_address], OAS, TU/e
    • 2. Outline
      • Terminology: What is... error/bug/fault/failure/testing?
      • Overview of the testing process
        • concept map
        • dimensions
        • topics of the Lentedagen presentations
    • 3. What is...
      • error/fault/bug: something wrong in software
      • failure:
        • manifestation of an error
          • (observable in software behaviour)
        • something wrong in software behaviour
          • (deviates from requirements)
      requirements: for input i, give output 2*i³ (so 6 yields 432)
      software:
        i = input(STDIN);
        i = double(i);     ← error (doubles before cubing)
        i = power(i, 3);
        output(STDOUT, i);
      output (verbose):
        input: 6
        doubling input..
        computing power..
        output: 1728       ← failure
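The slide's example can be rendered as runnable Python (a sketch; the `double`/`power` pseudocode becomes plain expressions):

```python
# Requirement: for input i, give output 2*i**3 (so 6 should yield 432).
def required(i):
    return 2 * i ** 3

# The software as written: it doubles *before* cubing -- the error.
def software(i):
    i = 2 * i          # double(i) applied too early
    return i ** 3      # computes (2*i)**3 instead of 2*i**3

print(required(6))   # 432
print(software(6))   # 1728 -- the failure, observable in the output
```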
    • 4. What is...
      • testing:
        • by experiment,
        • find errors in software (Myers, 1979)
        • establish quality of software (Hetzel, 1988)
      • a successful test:
        • finds at least one error (test-to-fail)
        • passes, i.e. the software works correctly (test-to-pass)
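As a minimal sketch (Python, reusing the buggy program from the previous slide), the two notions of a successful test look like this:

```python
def program(i):
    # buggy pipeline from the previous slide: computes (2*i)**3 instead of 2*i**3
    return (2 * i) ** 3

def run_test(i, expected):
    """Return True if the observed output matches the requirement."""
    return program(i) == expected

# test-to-pass: i = 0 happens to mask the bug, so the test passes
print(run_test(0, 2 * 0 ** 3))   # True
# test-to-fail: i = 6 exposes the failure -- 'successful' in Myers' sense
print(run_test(6, 2 * 6 ** 3))   # False
```

Note how the passing test gives no correctness guarantee, exactly Dijkstra's point on the next slide.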
    • 5. What’s been said?
      • Dijkstra:
        • Testing can show the presence of bugs, but not the absence
      • Beizer:
        • 1st law (Pesticide paradox): Every method you use to prevent or find bugs leaves a residue of subtler bugs, for which other methods are needed
        • 2nd law: Software complexity grows to the limits of our ability to manage it
      • Beizer:
        • Testers are not better at test design than programmers are at code design
      • Humphreys:
        • Coders introduce bugs at the rate of 4.2 defects per hour of programming. If you crack the whip and force people to move more quickly, things get even worse.
      • ...
      • Developing software & testing are truly difficult jobs!
      • Let’s see what goes on in the testing process
    • 6. Concept map of the testing process  
    • 7. Dimensions of software testing
      • What is the surrounding software development process? (v-model/agile, unit/system/user level, planning, documentation, ...)
      • What is tested?
        • Software characteristics (design/code/binary, embedded?, language, ...)
        • Requirements (functional/performance/reliability/..., behaviour/data oriented, precision)
      • Which tests?
        • Purpose (kind of coding errors, missing/additional requirements, development/regression)
        • Technique (adequacy criterion: how to generate how many tests)
        • Assumptions (limitations, simplifications, heuristics)
      • How to test? (manual/automated, platform, reproducible)
      • How are the results evaluated? (quality model, priorities, risks)
      • Who performs which task? (programmer, tester, user, third party)
        • Test generation, implementation, execution, evaluation
    • 8. Dimensions + concept map (diagram: the concept map annotated with dimension numbers 1-6)
    • 9. 1: Test process in software development
      • V-model:
      requirements        ←→ acceptance test
      specification       ←→ system test
      detailed design     ←→ integration test
      implementation code ←→ unit test
    • 10. 1: Test process in software development
      • Agile/spiral model:
    • 11. 1: Test process in software development
      • Topics in the Lentedagen presentations:
      • Integration of testing in entire development process with TTCN3
        • standardized language
        • different representation formats
        • architecture allowing for tool plugins
      • Test process management for manufacturing systems (ASML)
        • integration approach
        • test strategy
    • 12. 2: Software
      • (phase) Unit vs. integrated system
      • (language) imperative/object-oriented/hardware design/binary/…
      • (interface) data-oriented/interactive/ embedded/distributed/…
    • 13. 2: Requirements
      • functional:
        • the behaviour of the system should be correct
        • requirements can be precise, but often are not
      • non-functional:
        • performance, reliability, compatibility, robustness (stress/volume/recovery), usability, ...
        • requirements are possibly quantifiable, and always vague
    • 14. 2: Requirements
      • Topics in the Lentedagen presentations:
      • models:
        • process algebra, automaton, labelled transition system, Spec#
      • coverage:
        • semantical:
          • by formal argument (see test generation)
          • by estimating potential errors, assigning weights
        • syntactical
        • risk-based (likelihood/impact)
    • 15. 3: Test generation: purpose
      • What errors to find?
      • Related to software development phase:
      • unit phase
        • typical typos, functional mistakes
      • integration
        • interface errors
      • system/acceptance: errors w.r.t. requirements
        • unimplemented required features ‘software does not do all it should do’
        • implemented non-required features ‘software does things it should not do’
    • 16. 3: Test generation: technique
      • Dimensions:
      • black box: we don’t have access to the internals of the software to be tested
      • white box: we have access to the internals of the software to be tested
      • further dimensions: data-based vs. structure-based, error seeding, typical errors, efficiency, ...
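A tiny illustration of the distinction (hypothetical function under test; only the way the test inputs are chosen differs):

```python
def classify(x):
    # hypothetical function under test
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"

# Black-box cases: derived from the specification alone (input/output behaviour)
black_box_cases = [(-5, "negative"), (0, "zero"), (7, "positive")]

# White-box cases: chosen by reading the code, so that every branch is taken
white_box_cases = [(-1, "negative"), (0, "zero"), (1, "positive")]

for cases in (black_box_cases, white_box_cases):
    print(all(classify(x) == want for x, want in cases))   # True, True
```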
    • 17. 3: Test generation
      • Assumptions, limitations
      • single/multiple fault:
        • clustering/dependency of errors
      • perfect repair
      • heuristics:
        • knowledge about usual programming mistakes
        • history of the software
        • pesticide paradox
      • ...
    • 18. 3: Test generation
      • Topics in the Lentedagen presentations:
      • Mostly black box, based on behavioural requirements:
        • process algebra, automaton, labelled transition system, Spec#
      • Techniques:
        • assume model of software is possible
        • scientific basis: formal relation between requirements and model of software
      • Data values: constraint solving
      • Synchronous vs. asynchronous communication
      • Timing/hybrid aspects
      • On-the-fly generation
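For the data-value point, a naive stand-in for constraint solving is exhaustive search over a small domain (a sketch only, not the actual tooling of the presentations):

```python
# Find test data satisfying a constraint by brute force -- a naive
# substitute for a real constraint solver.
def solve(constraint, domain):
    return [i for i in domain if constraint(i)]

# e.g. inputs whose required output 2*i**3 crosses the boundary 1000
solutions = solve(lambda i: 2 * i ** 3 > 1000, range(20))
print(solutions[0])   # 8 -- the smallest input crossing the boundary
```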
    • 19. 4: Test implementation & execution
      • Implementation
        • platform
        • batch?
        • inputs, outputs, coordination, ...
      • Execution
        • actual duration
        • manual/interactive or automated
        • in parallel on several systems
        • reproducible?
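Reproducibility of automated runs is often achieved by seeding the random input generator; a minimal sketch (hypothetical program under test):

```python
import random

def run_suite(seed, trials=5):
    """One automated batch run: same seed -> same inputs -> same log."""
    rng = random.Random(seed)
    log = []
    for _ in range(trials):
        i = rng.randint(-10, 10)
        log.append((i, (2 * i) ** 3))   # record input and observed output
    return log

# reproducible: re-running with the same seed yields an identical log
print(run_suite(42) == run_suite(42))   # True
```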
    • 20. 4: Test implementation & execution
      • Topics in the Lentedagen presentations:
      • Intermediate language: TTCN3
      • Timing coordination
      • From abstract tests to concrete executable tests:
        • Automatic refinement
        • Data parameter constraint solving
      • On-the-fly:
        • automated, iterative
    • 21. 5: Who performs which task
      • Software producer
        • programmer
        • testing department
      • Software consumer
        • end user
        • management
      • Third party
        • testers hired externally
        • certification organization
    • 22. 6: Result evaluation
      • Per test:
        • pass/fail result
        • diagnostic output
        • which requirement was (not) met
      • Statistical information:
        • coverage (program code, requirements, input domain, output domain)
        • progress of testing (#errors found per test-time unit: decreasing?)
      • Decide to:
        • stop (satisfied)
        • create/run more tests (not yet enough confidence)
        • adjust software and/or requirements, create/run more tests (errors to be repaired)
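The "progress of testing" check above can be sketched as follows (hypothetical find-rate log; the threshold for stopping is an assumption):

```python
# hypothetical errors found per unit of test time
errors_per_unit = [9, 6, 4, 2, 1]

# a decreasing find rate suggests growing confidence in the software
decreasing = all(a >= b for a, b in zip(errors_per_unit, errors_per_unit[1:]))

# toy stopping rule: rate is decreasing and has dropped to at most 1
if decreasing and errors_per_unit[-1] <= 1:
    decision = "stop (satisfied)"
else:
    decision = "create/run more tests"
print(decision)   # stop (satisfied)
```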
    • 23. 6: Result evaluation
      • Topics in the Lentedagen presentations:
      • Translate output back to abstract requirements
        • possibly on-the-fly
      • Statistical information:
        • cumulative times at which failures were observed
        • fit statistical curve
        • quality judgement: X % of errors found
        • predict how many errors left, how long to continue
        • assumptions: total #errors, perfect repair, single fault
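One simple way to estimate the remaining errors is error seeding (mentioned on the test-generation slide; a cruder technique than the curve fitting above, shown here with hypothetical numbers):

```python
# Error-seeding estimate: seed S artificial errors; testing finds s of them
# plus n genuine errors. If seeded and genuine errors are found at the same
# rate, the total number of genuine errors is roughly n * S / s.
S, s, n = 20, 15, 30          # hypothetical counts
N_est = n * S / s
print(N_est)                  # 40.0 estimated genuine errors in total
print(N_est - n)              # 10.0 estimated errors still to find
```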
    • 24. Dimensions + concept map (diagram: the concept map annotated with dimension numbers 1-6)
    • 25.
      • Hope this helps...
      • Enjoy the Lentedagen!