11 Automated Testing


  1. 1. Automated Testing
  2. 2. Introduction
     • Software is seldom simple, and applications are inherently complex.
     • At some level, flaws are always hiding, waiting to be exposed.
     • Testing must be integrated into every phase of the computing life cycle.
  3. 3. Testing tools landscape
     • Automated testing software falls into one of several categories:
       - development test tools,
       - GUI and event-driven test tools,
       - load testing tools, and
       - bug tracking/reporting tools.
     • Error-detection products identify specific kinds of bugs that slip past compilers and debuggers.
  4. 4.
     • Problems typically caught with this type of testing:
       - memory leaks,
       - out-of-bounds arrays,
       - pointer misuse,
       - type checking,
       - object defects, and
       - bad parameters.
     • Catching the problem early saves a lot of time in later phases of development.
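To make these defect classes concrete, here is a small hypothetical C fragment (the copy_name function and its buffer size are invented for this sketch). It compiles cleanly, yet an error-detection tool of the kind described above would typically report both the out-of-bounds write and the memory leak.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical fragment: compiles without complaint, but contains
     * two defects an error-detection tool would typically report. */
    char *copy_name(const char *name)
    {
        char *buf = malloc(8);      /* fixed-size buffer                    */
        strcpy(buf, name);          /* out-of-bounds write when             */
                                    /* strlen(name) >= 8                    */
        return buf;
    }

    int main(void)
    {
        char *s = copy_name("automated testing");  /* overruns the buffer  */
        (void)s;                    /* never freed: memory leak             */
        return 0;
    }

A compiler accepts this code without a warning; closing exactly that gap is what these products are for.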
  5. 5.
     • Graphical User Interface (GUI) testing tools automatically exercise elements of application screens.
     • Test scripts can usually be defined manually or by capturing user activity and then simulating it.
     • This kind of regression testing often simulates hours of user activity at the keyboard and mouse.
     • Since the testing is based on scripts, it can be saved and repeated.
     • This is crucial for enabling developers to validate an application interface after even minor changes have been made.
  6. 6.
     • GUI testing evolved into client/server testing as the feasibility of testing more features in a distributed environment seemed within reach.
     • The dividing line between GUI, client/server, and load testing tools is one of degree.
     • Load testing tools permit complex applications to be run under simulated conditions.
     • This addresses not only quality, but performance and scalability as well.
  7. 7.
     • Such stress tests exercise the network, client software, server software, database software, and the server.
     • By simultaneously emulating multiple users, load testing determines if an application can support its audience.
     • A capture program similar to those in GUI testing tools helps automate building scripts.
     • Those scripts can be varied and replayed to simulate not only many users, but varied tasks as well.
     • Load testing charts the time a user must wait for screen responses, finds bottlenecks, and gives developers the chance to correct them.
  8. 8.
     • Hardware, software, database, and middleware components are stress tested as a unit, providing more accurate performance numbers.
     • Again, because testing is controlled via scripts, tests are repeatable.
     • If you add an index to a database and rerun the test, you can quantify the specific performance impact of that change.
     • Load testing can help predict how a system will perform as usage increases.
  9. 9.
     • Tools permit user loads to be incremented and tracked so that performance degradation can be isolated.
     • When applications must support a greater number of users, load testing quickly determines the outcome regarding quality and response time.
     • Developers can re-use scripts to alter the user levels, transaction mixes and rates, and the complexity of the application.
     • Load testing is the only way to verify the scalability of components as they work together.
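The sketch below suggests how a simple load driver might ramp the number of simulated users and report mean response times. It is not any particular vendor's tool: run_transaction is a hypothetical stand-in for a recorded user script, and the thread counts and sleep duration are invented for illustration.

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical stand-in for one recorded user transaction. */
    static void run_transaction(void)
    {
        usleep(10000);                      /* pretend the server took 10 ms */
    }

    /* Each simulated user runs the transaction repeatedly and records
     * the total time spent waiting for responses. */
    static void *simulated_user(void *arg)
    {
        double *elapsed = arg;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 20; i++)
            run_transaction();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        *elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        return NULL;
    }

    int main(void)
    {
        /* Ramp the simulated user count and report mean response time
         * per transaction so degradation can be isolated. */
        for (int users = 10; users <= 40; users += 10) {
            pthread_t tid[40];
            double elapsed[40];
            for (int u = 0; u < users; u++)
                pthread_create(&tid[u], NULL, simulated_user, &elapsed[u]);
            double total = 0.0;
            for (int u = 0; u < users; u++) {
                pthread_join(tid[u], NULL);
                total += elapsed[u];
            }
            printf("%2d users: %.1f ms mean per transaction\n",
                   users, 1000.0 * total / (users * 20));
        }
        return 0;
    }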
  10. 10. Regression testing
     • Regression testing is selective retesting of software to detect faults introduced during modifications of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets specified requirements.
     • Regression answers the question: "Does everything still work after my fix?"
     • Developers should run regression testing prior to submitting changes into the system environment.
     • The group should also run regression testing after each major build or delta.
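As a minimal sketch of the idea (not TestWorks itself), the following C program keeps a saved regression suite of recorded inputs and expected outputs and reruns it after every change, printing a PASS/FAIL line per case. The word_count function and the test data are invented for the example.

    #include <stdio.h>

    /* Hypothetical function under test: suppose a fix just touched it. */
    static int word_count(const char *s)
    {
        int count = 0, in_word = 0;
        for (; *s; s++) {
            if (*s == ' ')
                in_word = 0;
            else if (!in_word) {
                in_word = 1;
                count++;
            }
        }
        return count;
    }

    /* A recorded regression suite: saved inputs with expected outputs.
     * Rerunning it after each change answers "does everything still work?" */
    int main(void)
    {
        const struct { const char *input; int expected; } suite[] = {
            { "automated testing", 2 },
            { "",                  0 },
            { "  leading spaces",  2 },
            { "one",               1 },
        };
        int failures = 0;
        for (size_t i = 0; i < sizeof suite / sizeof suite[0]; i++) {
            int actual = word_count(suite[i].input);
            int ok = (actual == suite[i].expected);
            printf("%-4s \"%s\" -> %d (expected %d)\n",
                   ok ? "PASS" : "FAIL", suite[i].input,
                   actual, suite[i].expected);
            if (!ok)
                failures++;
        }
        return failures ? 1 : 0;
    }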
  11. 11. Example: BENEFITS of TestWorks/Regression
     • Automated capture/replay of realistic user sessions.
     • Tree-oriented test suite management and PASS/FAIL reporting.
     • Early detection of latent defects due to unexpected changes in application behavior and appearance.
     • Early detection of errors for reduced error content in released products.
     • Easy interface to a full coverage + regression quality process architecture.
  12. 12. APPLICATIONS of TestWorks/Regression:
     • Test suite applications which are very large (1000's of tests).
     • Test suites that need to extract ASCII information from screens.
     • Integration with the companion TestWorks/Coverage product for C/C++ applications.
  13. 13. Making a Point
     • Even as testing tools are catching up with development technologies, IT managers are learning that quality and performance are not ensured simply by selecting good testing tools.
     • Proper testing processes and strategies must be ingrained into the corporate culture.
     • RAD without quality accomplishes nothing.
     • IT managers need to stop fixating on what testing tools to use and focus on how to get the job done well.
     • In client/server, many applications depend upon several computers, various application modules, and the network to function well.
  14. 14.
     • Even if all the pieces work well independently, it does not mean they will perform well as a unit.
     • Automated testing tools not only freed up a great deal of manpower, they also provided greater control.
     • The use of quality assurance testing tools in the development process will not suffice, however.
     • An application that works well when deployed will not necessarily continue to function problem-free over time.
     • Many complex applications scale well within certain parameters, but then everything can fall apart.
  15. 15. A Tour of Testing Tools
     • Capture-replay tools are among the most widely known software testing tools.
     • Capture-replay tools take care of only one small part of software testing: the running, or executing, of test cases.
     • In order to automate specification-based testing fully, we need tools that create, execute, and evaluate test cases.
     • A basic testing toolset contains an execution tool and an evaluation tool.
     • Many other helpful testing tools are available.
  16. 16. Tools for Requirements Phase
     • Cost-effective software testing starts when requirements are recorded.
     • All testing depends on having a reference to test against.
     • Software should be tested against the requirements.
     • If the requirements contain all the information needed in a usable form, the requirements are test-ready.
     • Test-ready requirements minimize the effort and cost of testing.
     • If requirements are not test-ready, testers must search for missing information.
  17. 17. Requirements Recorder
     • To capture requirements, practitioners may use requirements recorders.
     • Some teams write their requirements in a natural language such as English and record them with a text editor.
     • Other teams write requirements in a formal language such as LOTOS or Z and record them with a syntax-directed editor.
     • Others use requirements modeling tools to record information graphically.
  18. 18.
     • Requirements modeling tools were used mostly by analysts or developers.
     • These tools seldom were used by testers.
     • Currently, requirements modeling tools have been evolving in ways that help testers as well as analysts and developers.
     • First, a method for modeling requirements called use cases was incorporated into some of these tools.
     • Then the use cases were expanded to include test-ready information.
  19. 19. Requirements Verifiers
     • Use cases are test-ready when data definitions are added.
     • With such definitions, a tool will have enough information from which to create test cases.
     • Requirements verifiers are relatively new tools.
     • To be testable, requirements information must be unambiguous, consistent, and complete.
     • A term or word in a software requirements specification is unambiguous if it has one, and only one, definition.
  20. 20.
     • Every action statement must have a defined input, function, and output.
     • The tester also needs to know that all statements are present.
     • Requirements verifiers quickly and reliably check for ambiguity, inconsistency, and statement completeness.
     • An automated verifier has no way, however, to determine whether all the requirement statements are present.
     • Requirements verifiers are usually embedded in other tools.
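Here is a tiny sketch of the statement-completeness check a verifier can automate, assuming requirements have been recorded with explicit input/function/output fields. The record layout and the REQ examples are invented; ambiguity and consistency checks are not shown.

    #include <stdio.h>

    /* Hypothetical requirement records: an action statement is test-ready
     * only when its input, function, and output are all defined. */
    struct requirement {
        const char *id;
        const char *input;
        const char *function;
        const char *output;
    };

    int main(void)
    {
        const struct requirement reqs[] = {
            { "REQ-1", "account number", "look up balance", "balance in cents" },
            { "REQ-2", "account number", "close account",   NULL },  /* incomplete */
        };
        for (size_t i = 0; i < sizeof reqs / sizeof reqs[0]; i++) {
            const struct requirement *r = &reqs[i];
            if (r->input && r->function && r->output)
                printf("%s: complete\n", r->id);
            else
                printf("%s: NOT test-ready (missing %s)\n", r->id,
                       !r->input ? "input" :
                       !r->function ? "function" : "output");
        }
        return 0;
    }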
  21. 21. Spec.-Based Test Case Generators
     • The recorder captures requirements information, which is then processed by the generator to produce test cases.
     • A test case generator creates test cases by statistical, algorithmic, or heuristic means.
       - Statistical test case generation chooses input values to form a statistically random distribution or a distribution that matches the usage profile of the software under test.
       - Algorithmic test case generation follows a set of rules or procedures, commonly called test design strategies or techniques.
       - Often, test case generators employ action-, data-, logic-, event-, and state-driven strategies. Each of these strategies probes for a different kind of software defect, as shown next.
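The following sketch illustrates the statistical strategy only: it draws operations so that a generated suite roughly matches a usage profile. The profile table and operation names are hypothetical, and a real generator would also produce input data and expected results.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical usage profile: operations of the software under test
     * with their relative frequency in the field. */
    static const struct { const char *operation; int weight; } profile[] = {
        { "open account",   5 },
        { "deposit",       40 },
        { "withdraw",      35 },
        { "query balance", 20 },
    };

    /* Statistical generation: draw operations so the generated suite
     * matches the usage profile rather than a uniform distribution. */
    static const char *draw_operation(void)
    {
        int total = 0;
        for (size_t i = 0; i < sizeof profile / sizeof profile[0]; i++)
            total += profile[i].weight;
        int r = rand() % total;
        for (size_t i = 0; i < sizeof profile / sizeof profile[0]; i++) {
            if (r < profile[i].weight)
                return profile[i].operation;
            r -= profile[i].weight;
        }
        return profile[0].operation;    /* not reached */
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        for (int i = 0; i < 10; i++)
            printf("test case %2d: %s\n", i + 1, draw_operation());
        return 0;
    }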
  22. 23. Requirements to Test Tracers
     • In the old days, coming up with test cases was a slow, expensive, and labor-intensive process.
     • With modern test case generators, test case creation and revision time is reduced to a matter of seconds.
     • Requirements-to-test tracers record the testing behavior to determine how every specified function was tested.
     • Test tracers can take over most of the work that once consumed much human time.
     • Tracers exist as individual tools, or are included in testing tools such as specification-based test case generators.
  23. 24. Tools for the Design Phase
     • In the requirements phase, the recorder, verifier, test case generator, and tracer are used at the system level.
     • In the design phase, the same tools may be used again to test small systems.
     • Designers may record their designs as either object or structured models, depending on which methodology is used.
     • Then they can use a validator-like tool to generate test cases from the designs.
  24. 25. Tools for the Programming Phase
     • In efficient code development, programmers must write comments in their code to describe what their code will do.
     • They must also create the algorithms that the code will implement.
     • Finally, they write the code itself.
     • The comments, algorithms, and code will be inputs to testing tools used during the programming phase.
     • Requirements tools may be used once again.
     • The metrics reporter, code checker, and instrumentor also can be used for testing during the programming phase.
     • These tools are classified as static analysis tools.
  25. 26. Metrics Reporter
     • The metrics reporter reads source code and displays metrics information, often in graphical formats.
     • It reports complexity metrics in terms of data flow, data structure, and control flow, and code size in terms of modules, operands, operators, and lines of code.
     • This tool helps the programmer correct and groom code and helps the tester decide which parts of the code need the most testing.
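As a rough illustration (not a real metrics product), the sketch below reads a C source file and reports physical lines plus a naive count of decision keywords. The keyword matching is deliberately crude and would miscount keywords inside strings or comments.

    #include <stdio.h>
    #include <string.h>

    /* Minimal sketch of a metrics reporter: reads a C source file and
     * reports physical lines of code plus a rough count of decision
     * points (a crude stand-in for control-flow complexity). */
    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file.c\n", argv[0]);
            return 1;
        }
        FILE *fp = fopen(argv[1], "r");
        if (!fp) {
            perror(argv[1]);
            return 1;
        }
        char line[1024];
        int loc = 0, decisions = 0;
        while (fgets(line, sizeof line, fp)) {
            loc++;
            /* Count keywords that open a branch or loop. */
            for (char *p = line; (p = strstr(p, "if (")); p += 4)
                decisions++;
            for (char *p = line; (p = strstr(p, "while (")); p += 7)
                decisions++;
            for (char *p = line; (p = strstr(p, "for (")); p += 5)
                decisions++;
        }
        fclose(fp);
        printf("%s: %d lines, %d decision points\n", argv[1], loc, decisions);
        return 0;
    }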
  26. 27. Code Checker
     • The earliest code checker most people remember is LINT, offered as part of Unix.
     • Other code checkers are available for other systems.
     • LINT was aptly named - it goes through code and picks out all the fuzz that makes programs messy and error-prone.
     • The checker looks for misplaced pointers, uninitialized variables, deviations from standards, etc.
     • Development teams that use software inspections as part of static testing can save many staff hours by letting a code checker identify nitpicky problems before inspection time.
  27. 28. (sample lint output for pre.c)
        name defined but never used
            bufferCount    pre.c(6)
        value type used inconsistently
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(69) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(79) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(138) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(142) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(256) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(264) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(283) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(330) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(332) int ()
            strlen    llib-lc:string.h(64) unsigned int () :: pre.c(392) int ()
  28. 29.
        function argument ( number ) used inconsistently
            strncmp (arg 3)    llib-lc:string.h(47) unsigned int :: pre.c(146) int
            strncmp (arg 3)    llib-lc:string.h(47) unsigned int :: pre.c(266) int
            strncpy (arg 3)    llib-lc:string.h(39) unsigned int :: pre.c(271) int
            strncmp (arg 3)    llib-lc:string.h(47) unsigned int :: pre.c(285) int
            strncpy (arg 3)    llib-lc:string.h(39) unsigned int :: pre.c(290) int
        function returns value which is always ignored
            getInput    getMatchingRunBuffer    fprintf    sprintf
            sscanf      strcpy                  strncpy
        declared global, could be static
            runbufferNumber    pre.c(7)
            Ready              pre.c(10)
            ReadyString        pre.c(11)
            End                pre.c(12)
            Receive            pre.c(13)
  29. 30.
            outFlush                  pre.c(14)
            funcend                   pre.c(17)
            on701                     pre.c(18)
            on702                     pre.c(19)
            getString                 pre.c(112)
            getBufferNumberandSize    pre.c(124)
            FindString                pre.c(132)
            CountChars                pre.c(153)
            getInput                  pre.c(164)
            putOutput                 pre.c(181)
            ReplaceXuser701           pre.c(192)
            ReplaceXuser702           pre.c(227)
            ReplaceStrings            pre.c(242)
            getMatchingRunBuffer      pre.c(304)
            ExtractSocketNumberSt     pre.c(413)
            ExtractSocketNumberEn     pre.c(429)
            FlushRun                  pre.c(446)
  30. 31. Code Instrumentor
     • The code instrumentor helps programmers and testers measure structural coverage by reading source code.
       - For example, the instrumentor might choose to make a measurement after a variable is defined or a branch is taken.
     • The tool inserts a new line of code, a test probe, that will record information such as the number and duration of test executions.
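Here is a hand-written sketch of what intrusively instrumented code might look like after the tool has inserted probes. The clamp function, probe numbering, and counters are invented for this example; a real instrumentor could also record timing.

    #include <stdio.h>

    /* Sketch of what an intrusive instrumentor produces: a probe call is
     * inserted after each branch, recording how often it was taken. */
    #define PROBE_COUNT 3
    static unsigned long probe_hits[PROBE_COUNT];

    static void test_probe(int id)
    {
        probe_hits[id]++;               /* inserted by the instrumentor */
    }

    /* Instrumented version of:  int clamp(int v) { if (v < 0) return 0;
     *                                              if (v > 9) return 9;
     *                                              return v; }           */
    static int clamp(int v)
    {
        if (v < 0) { test_probe(0); return 0; }
        if (v > 9) { test_probe(1); return 9; }
        test_probe(2);
        return v;
    }

    int main(void)
    {
        /* Exercise the instrumented code with a few test cases. */
        clamp(-5); clamp(3); clamp(7); clamp(42);
        for (int i = 0; i < PROBE_COUNT; i++)
            printf("probe %d: %lu hit(s)\n", i, probe_hits[i]);
        return 0;
    }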
  31. 32. Tools for the Testing Phase
     • All the tools discussed so far are used before developers get to the testing phase.
     • The tools discussed next are dynamic analyzers that must have test cases to run.
  32. 33. Capture-Replay Tool
     • The capture-replay tool works like a VCR or a tape recorder.
     • When the tool is in capture mode, it records all information that flows past it.
     • The recording is called a script: a procedure that contains instructions to execute one or more test cases.
     • When the tool is in replay mode, it stops capturing incoming information and plays back a recorded script.
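A conceptual sketch of what a recorded script might contain, assuming the tool stores user activity as a list of events. The event structure, widget names, and replay-by-printing are invented here; a real tool injects the events back into the application under test.

    #include <stdio.h>

    /* Hypothetical recorded script: a capture-replay tool stores the
     * user's keyboard and mouse activity as a list of events that can
     * be played back against the application under test. */
    enum event_type { KEY, CLICK };

    struct event {
        enum event_type type;
        const char *target;             /* widget or field name     */
        const char *keys;               /* text typed, if KEY event */
    };

    static const struct event script[] = {
        { CLICK, "LoginButton",   NULL     },
        { KEY,   "UserNameField", "tester" },
        { KEY,   "PasswordField", "secret" },
        { CLICK, "OkButton",      NULL     },
    };

    int main(void)
    {
        /* Replay mode: feed each recorded event back to the application.
         * Here we only print the events for illustration. */
        for (size_t i = 0; i < sizeof script / sizeof script[0]; i++) {
            if (script[i].type == KEY)
                printf("type \"%s\" into %s\n", script[i].keys, script[i].target);
            else
                printf("click %s\n", script[i].target);
        }
        return 0;
    }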
  33. 34.
     • Two features exist in some commercial capture-replay tools.
       - First: an object-level record-playback feature that enables capture-replay tools to record information at the object, control, or widget level.
       - Second: a load simulator, a facility that lets the tester simulate hundreds or even thousands of users simultaneously working on the software under test.
     • Companies are confronted with a "build or buy" decision.
  34. 35.
     • Most such tools are helpful only to people who are testing GUI-driven systems.
     • Many unit testers, integration testers, and embedded system testers do not deal with large amounts of software that interact with graphical user interfaces (GUIs).
     • Therefore, most capture-replay tools will not satisfy these testers' needs.
     • Capture-replay tools may be packaged with other tools such as test managers.
     • A tool called a test manager helps testers control large numbers of scripts and report on the progress of testing.
  35. 36. Test Harness
     • A capture-replay tool connects with the software under test through an interface, usually located at the screen or terminal.
     • But the software under test will probably also have interfaces with an operating system, a database system, and other application systems.
     • Each such interface needs to be tested, too, using a test harness.
     • If some parts of the software being developed are not available at testing time, testers build software packages, called stubs and drivers, to simulate the missing parts.
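A minimal hand-built harness in C, assuming the real database layer is not yet available: db_lookup_balance is a stub with canned answers, can_withdraw stands in for the unit under test, and main acts as the driver. All names and values are invented for the sketch.

    #include <stdio.h>
    #include <string.h>

    /* Stub replacing the missing database interface. */
    static int db_lookup_balance(const char *account)
    {
        /* Canned answers instead of a real database call. */
        if (strcmp(account, "A-100") == 0) return 2500;
        return -1;                      /* unknown account */
    }

    /* Unit under test. */
    static int can_withdraw(const char *account, int amount)
    {
        int balance = db_lookup_balance(account);
        return balance >= 0 && amount <= balance;
    }

    /* Driver: feed test inputs and check results. */
    int main(void)
    {
        printf("%s\n", can_withdraw("A-100", 2000) ? "PASS" : "FAIL");  /* allowed  */
        printf("%s\n", can_withdraw("A-100", 9000) ? "FAIL" : "PASS");  /* too much */
        printf("%s\n", can_withdraw("A-999", 1)    ? "FAIL" : "PASS");  /* unknown  */
        return 0;
    }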
  36. 37.
     • Test harnesses have been custom-built per application for years.
     • Most harnesses did not become off-the-shelf products.
     • Recently, interface standards and standard ways of describing application interfaces through modern software development tools have enabled commercial test harness generators.
  37. 38. Comparator
     • The comparator compares actual outputs to expected outputs and flags differences.
     • Software passes a test case when actual and expected output values are within allowed tolerances.
     • When the complexity and volume of outputs are low, the "diff" function will provide all the comparison information testers need.
     • Sometimes, however, diff cannot compare data precisely enough to satisfy testers.
     • Then testers may turn to comparators.
     • Most of today's capture-replay tools include a comparator.
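A bare-bones comparator sketch: it passes a case when the actual value is within an allowed tolerance of the expected value, which a plain textual diff cannot express. The sample outputs and the tolerance are invented.

    #include <math.h>
    #include <stdio.h>

    /* Minimal comparator sketch: a test case passes when every actual
     * output is within an allowed tolerance of the expected output. */
    static int within_tolerance(double actual, double expected, double tol)
    {
        return fabs(actual - expected) <= tol;
    }

    int main(void)
    {
        const struct { double actual, expected; } outputs[] = {
            { 3.1417, 3.1416 },         /* close enough */
            { 2.6900, 2.7183 },         /* too far off  */
        };
        const double tol = 0.001;
        for (size_t i = 0; i < sizeof outputs / sizeof outputs[0]; i++)
            printf("case %zu: %s\n", i + 1,
                   within_tolerance(outputs[i].actual, outputs[i].expected, tol)
                       ? "PASS" : "FAIL");
        return 0;
    }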
  38. 39. Structure Coverage Analyzer
     • The structure coverage analyzer tells the tester which statements, branches, and paths in the code have been exercised.
     • Structure coverage analyzers fall into two categories:
       - intrusive
       - nonintrusive
     • Intrusive analyzers use a code instrumentor to insert test probes into the code.
     • The code with the probes is compiled and exercised.
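Continuing the probe idea from the instrumentor sketch above, this fragment shows the reporting step an intrusive analyzer performs: summarizing which branches were exercised and the resulting coverage percentage. The hit counts are invented sample data.

    #include <stdio.h>

    /* Reporting half of an intrusive coverage analyzer: the instrumented
     * program has filled hit counters; this step summarizes which
     * branches were exercised and the overall coverage percentage. */
    #define BRANCHES 4
    static const unsigned long hits[BRANCHES] = { 12, 0, 7, 3 };  /* sample data */

    int main(void)
    {
        int covered = 0;
        for (int i = 0; i < BRANCHES; i++) {
            printf("branch %d: %s (%lu hits)\n",
                   i, hits[i] ? "exercised" : "NOT exercised", hits[i]);
            if (hits[i])
                covered++;
        }
        printf("branch coverage: %.0f%%\n", 100.0 * covered / BRANCHES);
        return 0;
    }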
  39. 40.
     • Nonintrusive analyzers gather information in a separate hardware processor that runs in parallel with the processor being used for the software under test.
     • If sold commercially, nonintrusive analyzers usually come with the parallel processor(s) included as part of the tool package.
     • A special category of coverage analyzers, called memory leak detectors, finds reads of uninitialized memory as well as reads and writes beyond the legal boundary of a program.
     • Because these tools isolate defects, they may also be classified as debuggers.
