VV09

  1. Dimensions of Testing #5
     CFICSE, October 2002
     Diane Kelly, Terry Shepard
  2. Dimensions of Testing, Part 5
  3. Test Automation
     • “Simply throwing a tool at a testing problem will not make it go away.”
       - Dorothy Graham, The CAST Report
  4. Test Automation Outline
     • Some automation facts
       - Manual Testing vs. Automation
     • Tool support for life-cycle testing: types of tools
     • The promise of test automation
     • Common problems of test automation
       - Process Issues
       - Avoiding the Pitfalls of Automation
     • Building Maintainable Tests
     • Evaluating Testing Tools
     • Choosing a tool to automate testing
     • Testing Tool Market
     • References: [21], [22], [23]
  5. Some automation facts
     • Automating one test typically costs about 5 times as much as a single manual execution of that test
       - the range is roughly 2 to 10, and can be as much as 30
     • Savings can eventually reach 80%
     • Testing is an interactive cognitive process
       - automation is best applied to a narrow spectrum of testing, not to the majority of the test process
     • All testing needs human interaction
       - tools have no imagination
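Under the 5x rule of thumb above, a quick back-of-the-envelope calculation shows roughly when automation pays for itself. A minimal sketch; the per-run cost of an automated test is an assumed figure, not from the slides:

```python
# Break-even estimate for automating one test, using the slides' ~5x
# rule of thumb. The automated per-run cost (0.2) is an assumption.
manual_cost = 1.0         # one manual execution, normalized to 1
automation_cost = 5.0     # one-time cost to automate (slides: 2x-10x, up to 30x)
automated_run_cost = 0.2  # per-run cost once automated (assumed)

# Break-even at n runs: automation_cost + n * automated_run_cost = n * manual_cost
n = automation_cost / (manual_cost - automated_run_cost)
print(f"break-even after about {n:.1f} runs")  # ~6.3 runs under these assumptions
```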
  6. Manual Testing vs. Automation (1)
     • Testing and test automation are different skills
       - good testers have a nose for defects
       - good automators are skilled at developing test scripts
     • Tool support is most effective after test design is done
       - there is more payback in automating test execution and comparison of results than in automating test case generation, coverage measurement, and other test design activities
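A minimal sketch of the execution-and-comparison payback the slide points to: run the program under test and diff its output against a stored “golden” file. The command and file names are placeholders, not from the slides:

```python
# Run one test case and compare its stdout to an expected "golden" file.
import subprocess
from pathlib import Path

def run_and_compare(cmd: list[str], golden: Path) -> bool:
    """Execute one test case and compare stdout to the expected output."""
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    expected = golden.read_text()
    if result.stdout == expected:
        return True
    # Keep the actual output next to the golden file to help debugging.
    golden.with_suffix(".actual").write_text(result.stdout)
    return False

if __name__ == "__main__":
    # "./app" and the data files are hypothetical stand-ins.
    ok = run_and_compare(["./app", "--input", "case1.txt"], Path("case1.golden"))
    print("PASS" if ok else "FAIL")
```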
  7. Manual Testing vs. Automation (2)
     • Don’t evaluate manual testing against automated testing for cost justification
       - manual testing and automated testing are two different processes
       - treat test automation as one part of a multifaceted test strategy
     • Don’t decide on automation simply to save money
       - testers typically don’t end up with less work to do
  8. Tool support for life-cycle testing: types of tools
     • test case generators and test data generators
       - e.g. derive test input from a specification
       - e.g. extract random records from a database
     • test management: plans, tracking, tracing, ...
     • static analysis
     • coverage
     • configuration managers
     • complexity and size measurers
     • dynamic analysis
       - performance analyzers
       - capture-replay
       - debugging tools used as testing tools
       - network analyzers
     • simulators
     • capacity testing
     • test execution and comparison
     • compilers
     • ...
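As an illustration of one tool type from the list, here is a sketch of a test data generator that extracts random records from a database, using the standard-library sqlite3 module; the database path and table name are invented examples:

```python
# Test data generator: pull n randomly chosen rows to use as test input.
import sqlite3

def sample_records(db_path: str, table: str, n: int) -> list[tuple]:
    """Return n random rows from the given table."""
    conn = sqlite3.connect(db_path)
    try:
        # NOTE: table names cannot be bound as SQL parameters; string
        # interpolation is acceptable in a trusted test script only.
        cur = conn.execute(f"SELECT * FROM {table} ORDER BY RANDOM() LIMIT ?", (n,))
        return cur.fetchall()
    finally:
        conn.close()
```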
  9. The promise of test automation: what are the potential benefits? (1)
     • run more tests more often
     • run tests that are difficult or impossible to run manually
       - e.g. simulate 200 users
       - e.g. check for events with no visible output in GUIs
     • better use of resources
       - testers are less bored, make fewer mistakes, and have more time
         - to design more test cases
         - to run those tests that must be run manually
       - use CPU cycles more effectively
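The “simulate 200 users” example is exactly the kind of test that is impractical by hand. A hedged sketch; the endpoint URL is a placeholder for whatever system is under test:

```python
# Simulate 200 concurrent users hitting one endpoint.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/login"  # placeholder endpoint

def one_user(user_id: int) -> int:
    """One simulated user session; returns the HTTP status code."""
    with urlopen(URL, timeout=10) as resp:
        return resp.status

with ThreadPoolExecutor(max_workers=200) as pool:
    statuses = list(pool.map(one_user, range(200)))
print(f"{statuses.count(200)} of 200 simulated users got HTTP 200")
```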
  10. The promise of test automation: what are the potential benefits? (2)
     • consistency and repeatability of tests
     • increased reuse of tests, which leads to better test design and better documentation
     • meeting quality targets in less time
     • better estimation of confidence, quality, and reliability
     • reduced regression testing costs
  11. Common problems of test automation
     • unrealistic expectations
     • poor testing practice
       - automating chaos just gives faster chaos
     • the expectation that automation will increase defect finding
     • a false sense of security
     • maintenance of automated tests: fragility issues
     • technical problems
     • organizational issues
       - test automation is an infrastructure issue, not a project issue
  12. Process Issues
     • Which tests to automate first?
       - do not automate too much too fast
     • Selecting which tests to run when
       - subsets of test suites
     • Order of test execution
     • Test status
       - pass or fail
     • Designing software for automated testing
     • Synchronization
     • Monitoring the progress of automated tests
     • Processing possibly large amounts of test output
     • Tailoring your regime around test tools
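One common answer to “selecting which tests to run when” is to tag each automated test and pick a subset per cycle. A hedged sketch; the test names and tags are invented for illustration:

```python
# Tag-based test selection: run only the subset that fits the cycle.
SUITE = {
    "login_basic":   {"smoke", "auth"},
    "report_export": {"nightly", "reports"},
    "auth_lockout":  {"nightly", "auth"},
}

def select(tag: str) -> list[str]:
    """Return the names of tests carrying the given tag, in run order."""
    return sorted(name for name, tags in SUITE.items() if tag in tags)

print(select("smoke"))    # quick cycle   -> ['login_basic']
print(select("nightly"))  # full cycle    -> ['auth_lockout', 'report_export']
```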
  13. Avoiding the Pitfalls of Automation (1)
     • Get your test strategy clear before contemplating automation
     • Tests have to be debugged
       - test automation is software development and must be done with the same care
     • Test automation can encourage a proliferation of useless test cases
       - evaluate your test suites and clean them up
     • The hardest part of automation is interpreting results
       - human effort is required here
  14. Avoiding the Pitfalls of Automation (2)
     • Test automation (especially test case generation) can lead to
       - a set of weak, shallow tests
       - tests that ignore interesting bugs
       - the tester spending a lot of time on extraneous activities related to the tools being used
     • Is it useful to repeat the same tests over and over?
       - a study at Borland found that over 80% of bugs were found manually
  15. Building Maintainable Tests (1)
     • Don’t let the test suite become “too big”
       - before adding any new test, ask what it contributes to...
         - defect-finding capability
         - likely maintenance cost
     • Ensure that test designers and test builders limit their use of disc space
       - large amounts of test data have an adverse impact on test failure analysis and debugging effort
     • Keep functional test cases as short in time and as focused as possible
  16. Building Maintainable Tests (2)
     • Design tests with debugging in mind
       - What would I like to know when this test fails?
     • Start cautiously when designing tests that chain together
       - if possible, use “snapshots” to restart a chain of test cases after one fails
     • Adopt a naming convention for test elements
     • Document test cases
       - an overview of test items
       - annotations in scripts
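One way to apply “design tests with debugging in mind” is to write every assertion so the failure message answers the slide’s question. A minimal sketch; the price function is a stand-in invented for the example:

```python
# Make the assertion message answer "what would I like to know when
# this test fails?" rather than just reporting a mismatch.
def price(items: int, gold: bool) -> float:
    """Stand-in for the real function under test."""
    subtotal = items * 10.0
    return subtotal * (0.9 if gold else 1.0)

def test_gold_discount():
    total = price(items=3, gold=True)
    expected = 27.0
    assert total == expected, (
        f"gold customer, 3 items: expected {expected}, got {total} "
        f"(unit price or discount rate may have changed)"
    )

test_gold_discount()
```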
  17. Building Maintainable Tests (3)
     • Limit the number of complex test cases
       - they are difficult to understand, even for minor changes
       - the effort needed to automate and maintain them may wipe out any savings
     • Use flexible and portable formats for test data
       - the time taken to convert data is often more acceptable than the cost of maintaining large amounts of data in a specialized format
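A sketch of keeping test data in a flexible, portable format (plain CSV here) rather than a tool-specific one; the data and the square function are invented for illustration:

```python
# Data-driven test cases stored as portable CSV text.
import csv
import io

CASES = """input,expected
2,4
-3,9
0,0
"""

def square(x: int) -> int:
    """Stand-in for the function under test."""
    return x * x

for row in csv.DictReader(io.StringIO(CASES)):
    x, want = int(row["input"]), int(row["expected"])
    got = square(x)
    assert got == want, f"square({x}): expected {want}, got {got}"
print("all data-driven cases passed")
```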
  18. Evaluating Testing Tools (1)
     • Capability
       - having all the critical features needed
     • Reliability
       - working a long time without failures
     • Capacity
       - handling industrial environments
     • Learnability
       - having a reasonable learning curve or support for learning
  19. Evaluating Testing Tools (2)
     • Operability
       - offering an easy-to-use interface
     • Performance
       - advantages in turnaround time versus manual testing
     • Compatibility
       - ease of integration with the application environment
     • Non-intrusiveness
       - not altering the behaviour of the software under test
  20. Choosing a tool to automate testing
     • Introduction to Chapters 10 and 11 of [23]
       - Where to start in selecting tools:
         - your requirements
         - NOT the tool market
       - The tool selection project
       - The tool selection team
       - Identifying your requirements
       - Identifying your constraints
       - Build or buy?
       - Identifying what is available on the market
       - Evaluating the short-listed candidate tools
       - Making the decision
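One common way to make the final decision is a weighted scoring matrix over the eight criteria from the two “Evaluating Testing Tools” slides. A hedged sketch; the weights and per-tool scores are illustrative, and in practice both should come from your own requirements, not the tool market:

```python
# Weighted scoring matrix over the evaluation criteria from slides 18-19.
WEIGHTS = {
    "capability": 3, "reliability": 3, "capacity": 2, "learnability": 1,
    "operability": 1, "performance": 2, "compatibility": 3, "non-intrusiveness": 2,
}

def weighted_total(scores: dict[str, int]) -> int:
    """Weighted sum of 0-5 scores over all criteria."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

tool_a = dict.fromkeys(WEIGHTS, 3)  # solid all round
tool_b = {**dict.fromkeys(WEIGHTS, 2), "capability": 5, "compatibility": 5}
print("tool A:", weighted_total(tool_a), " tool B:", weighted_total(tool_b))
```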
  21. Testing Tool Market (Ovum - www.ovum.com)
     • In 1999 the market was $450M, growing at 30% per year
     • The dominant players, with 60% of the total market, were:
       - Mercury Interactive
       - Rational
       - Compuware
       - Segue
  22. References 1
     • [1] C.A.R. Hoare, “How did software get so reliable without proof?”, Formal Methods Europe ’96, keynote speech
     • [2] D. Hamlet and R. Taylor, “Partition Testing does not Inspire Confidence”, IEEE Transactions on Software Engineering, Dec. 1990, pp. 1402-1411
     • [3] Edward Kit, Software Testing in the Real World: Improving the Process, Addison-Wesley, 1995
     • [4] Brian Marick, The Craft of Software Testing, Prentice Hall, 1995
  23. References 2
     • [5] Boris Beizer, Software Testing Techniques, 2nd ed., Van Nostrand Reinhold, 1990
     • [6] T.J. Ostrand and M.J. Balcer, “The Category-Partition Method for Specifying and Generating Functional Tests”, Communications of the ACM, Vol. 31, No. 6, June 1988, pp. 676-686
     • [7] Robert Binder, Object Magazine, 1995, http://www.rbsc.com/pages/myths.html
     • [8] Robert Poston, Specification-Based Software Testing, IEEE Computer Society, 1996
  24. References 3
     • [9] James Bach, General Functionality and Stability Test Procedure, http://www.testingcraft.com/bach-exploratory-procedure.pdf
     • [10] Bill Hetzel, The Complete Guide to Software Testing, 2nd ed., Wiley & Sons, 1988
     • [11] Cem Kaner, Jack Falk, Hung Quoc Nguyen, Testing Computer Software, 2nd ed., Van Nostrand Reinhold, 1993
     • [12] Andrew Rae, Phillippe Robert, Hans-Ludwig Hausen, Software Evaluation for Certification, McGraw-Hill, 1995
  25. References 4
     • [13] William Perry, Effective Methods for Software Testing, John Wiley & Sons, 1995
     • [14] John McGregor and David Sykes, A Practical Guide to Testing Object-Oriented Software, Addison-Wesley, 2001, ISBN 0-201-32564-0, 393 pp.
     • [15] G.J. Myers, The Art of Software Testing, Wiley, 1979
     • [16] Cem Kaner, Jack Falk, and Hung Quoc Nguyen, Testing Computer Software, 2nd ed., Van Nostrand Reinhold, 1993
     • [17] DO-178B, Software Considerations in Airborne Systems and Equipment Certification
  26. References 5
     • [18] IEEE 829-1998, Standard for Software Test Documentation
     • [19] Mark Fewster and Dorothy Graham, Software Test Automation: Effective Use of Test Execution Tools, ACM Press, 1999
     • [20] www.sei.cmu.edu/cmm/cmms/cmms.html
     • [21] James Bach, “Test Automation Snake Oil”, first published in Windows Tech Journal, 10/96 (see articles at http://www.satisfice.com/)
     • [22] Robert Poston, “A Guided Tour of Software Testing Tools”, http://www.soft.com/News/TTN-Online/ttnjan98.html
  27. References 6
     • [23] Mark Fewster and Dorothy Graham, Software Test Automation: Effective Use of Test Execution Tools, ACM Press/Addison-Wesley, 1999
     • [24] W.J. Gutjahr, “Partition Testing vs. Random Testing: The Influence of Uncertainty”, IEEE Transactions on Software Engineering, Vol. 25, No. 5, Sep./Oct. 1999, pp. 661-674
     • [25] Robert V. Binder, Testing Object-Oriented Systems: Models, Patterns, and Tools, Addison-Wesley, 2000 (see http://www.rbsc.com)
     • [26] Capers Jones, Software Quality: Analysis and Guidelines for Success, International Thomson Computer Press, 1997
  28. References 7
     • [27] Andrew Rae, Phillippe Robert, Hans-Ludwig Hausen, Software Evaluation for Certification, McGraw-Hill, 1995
     • [28] Shari Lawrence Pfleeger, Software Engineering: Theory and Practice, 2nd ed., Prentice Hall, 2001
     • [29] Hong Zhu, Patrick Hall, John May, “Software Unit Test Coverage and Adequacy”, ACM Computing Surveys, Vol. 29, No. 4, Dec. 1997
     • [30] IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology
     • [31] James Whittaker, “What is Software Testing? And Why is it so Hard?”, IEEE Software, Jan./Feb. 2000, pp. 70-79
