Test Plan 2008 V2 00 01


Automated regression and functional workflow testing plan - Phase I and Phase II



Test Environment Design Goals

Primary: Collaborative testing of end-to-end workflows that provides broad coverage in areas vital to Users. Conceive, design, plan, and execute exploratory task-lists that discover bugs that may be missed by other forms of testing.

Secondary: Automate UI testing using minimum person-hours, increase machine-hours, lower the burden of support, and increase the visibility of test outcomes.

Salient Points (to be incorporated into the test plan)

1) Generate and drive the test loads (manual testing OR automated tools)
   - Quick-and-dirty regression test of algorithms
   - Comprehensive regression test of algorithms
   - Performance test of algorithms
   - Performance tests of the Viewer using the Marathon UI testing utility
   - Functional test of algorithms
   - Stress test of algorithms
   - Functional test of the Viewer using the Marathon UI testing utility
   - User-level tests by Users (simulated initially as end-to-end workflow testing)
2) Standardize test materials and test facilities
   Specifically, determine the environment (test equipment and facilities) needed.
   2.1) Determine the number of test cycles
      - Iterate | Find bottlenecks (re-testing, set-up, re-run) | Standardize the test facility
      - Model completion of the deadline estimate
      - Determine the main influences on test duration and number of cycles
      - Determine approximately how many test cycles are needed for confidence in the results output
   2.2) Establish a working test facility
      - Ensure that test equipment is installed and connected correctly
      - Debug test scripts
      - Explore and build a sense of how the system works and how the work load flows
      - Uncover and resolve bottlenecks
      - Build confidence that the tested system is ready to go live
3) Review the Performance Test Plan (checklist below)
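The "quick-and-dirty regression test of algorithms" above is commonly implemented as a golden-file comparison: run the algorithm on a fixed input and diff its output against a stored, approved result. A minimal sketch in Python; the function name, the dict-shaped output, and the sample numbers are illustrative assumptions, not part of the plan:

```python
def regression_mismatches(current: dict, golden: dict) -> list[str]:
    """Compare an algorithm's current output against a stored golden output.

    Returns one human-readable line per mismatching key, so a quick
    regression run can simply assert that the list is empty.
    """
    mismatches = []
    for key in sorted(set(current) | set(golden)):
        if current.get(key) != golden.get(key):
            mismatches.append(
                f"{key}: golden={golden.get(key)!r}, current={current.get(key)!r}"
            )
    return mismatches

# Illustrative run: one value drifted, and one key is missing entirely.
golden = {"proteins": 142, "peptides": 910, "anova_rows": 37}
current = {"proteins": 142, "peptides": 911}
report = regression_mismatches(current, golden)
```

In practice the golden output would be checked into version control alongside the test data, so a failing comparison points directly at what changed between builds.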
Performance Test Plan Review Checklist

Main Sections:

1) Status: Readiness of the Plan for Review
   a) Review status
      i) Test plan review criteria: clear, readable, and understandable by the people who are reviewing it and also by the people who will be executing the test
      ii) Is the copy of the plan being reviewed the current and approved version, rather than some earlier, obsolete one?
2) Experience of the Test Planners
   a) Lead contributors: Name1, Name2
   b) Peripheral involvement/advisors: Name3, Name4, Name5
   c) Level of comfort and adequate in-depth understanding to deliver a valid plan of
      i) Background subject matter
      ii) System requirements
      iii) Technical environment
      iv) Testing & QA techniques
      v) Other reviewers? Reservations? Basis?
3) Target Audience for the Test Plan
   a) Plan fit to intended goals
      i) For current 'Testers'
      ii) For current Users
         (1) Target User identified?
      iii) Is the understanding of the User reasonable?
4) Commitment to the Test Project from Key Members
   a) Committing personnel: Name1, Name2
      i) Does the project need Commitment Personnel support? Yes / No
      ii) Does a strategy to obtain support exist?
   b) Peripheral support personnel: Name3, Name4 (Tech Support, Sales, Marketing)
   c) Are adequate levels of involvement of (a) and (b) known?
   d) Are the needs and expectations of Peripheral Support adequately addressed?
   e) Willingness of Peripheral Support and Commitment Personnel to sign off on the Test Plan
5) Test Objectives and Scope
   a) Identify test success and acceptance criteria (overall development and/or maintenance)
   b) Identify constraints on the overall project
      i) Schedule
      ii) Test practices
      iii) Budget
      iv) Development Team time with Test Plan Developers and Testers
   c) Identify success factors | Identify constraints
   d) Define clear test objectives (what the test project intends to accomplish; goals)
   e) Are the objectives adequate? Achievable? Measurable?
   f) Define the test scope: what is measured, what is not tested, what the assumptions are
   g) Determine scope coverage
      i) Types of testing needed: application functionality, performance, usability, cross-platform, Batch/GUI, stress, small load, normal load, high load, constrained resources, etc.
   h) Identify and list test items: specific features, specific instances (Launch | Load | Sample View | … … … | a specific protein with the same ANOVA value on Samples View and Quantitation View, etc.)
   i) Determine scope overlap
6) Fit of the Test Strategy to the Current Need(s)
   a) Is the approach adequate to achieve the test objective?
   b) Is the test plan cohesive and practical?
   c) Is the test plan applicable to current circumstances?
   d) Identify trade-offs, assumptions, and prerequisites
   e) Q: What is the evidence that the proposed approach is the best alternative?
7) Test Focus and Priorities
   a) Are the focus and priorities clear?
   b) Define the context of the test project; determine whether the focus is correct
   c) Identify risks: likely vulnerability to data type, processing, hardware constraints
   d) Other considerations:
      i) Is the risk assessment acceptable?
      ii) Are major risks overlooked (or downgraded)?
      iii) Are minor risks overstated?
   e) Re-iterate feedback from the risk assessment (mitigate, formulate corrective actions, identify and implement suspend/reject criteria)
8) Feasibility of the Test Plan
   a) Determine the certainty that the proposed approach will meet the test objectives
   b) Implementation of the test plan
      i) Specific enough for implementation?
      ii) Efficient enough for any 'tester' to proceed?
   c) Limitations of the plan (major points)
      i) Overly ambitious
      ii) Test expertise
      iii) Test facilities
      iv) Deadlines
   d) Assessment (and acceptance) of risk
   e) Are the major risks limiting success identified?
   f) Unresolved issues/situations
      i) Plan for reaching resolution
9) Determine Validity of Assumptions
   a) Document major assumptions and prerequisites
      i) Stated clearly?
      ii) Accepted as reasonable in scope and implementation by key personnel?
   b) Skills and expertise
      i) Resource demand (Development Team time, contract work, out-sourcing, hardware limitations, etc.)
         (1) Resource availability
         (2) Resource assumptions
      ii) Assumptions about the Test Facility & Tools (workstations, network, etc.)
      iii) Testability and readiness of the test system (identify data types, generate tailored test data, etc.)
      iv) Degree and quality of testing in prior phases (Systematic? Documented? etc.)
   c) Are the assumptions reasonable and verifiable?
10) System Testability
   a) Determine the testability of
      i) Batch2
         (1) Current descriptors | output to Protxml, Pepxml, other spreadsheet solutions
         (2) Other UI functionalities migrated as Batch2 descriptors to generate UI test files
      ii) UI
         (1) Identify [other] sources of UI testing
      iii) Comparer
         (1) Improve the diff function
   b) How can the test system be designed for testability?
      i) Tester role
         (1) Determine the tester's role in test-system design
   c) Does the designed plan have touch (or probe) points?
      i) What?
      ii) Where?
   d) How are the touch points utilized?
11) Test Coverage
   a) Determine the coverage of the test(s)
      i) Measurable?
      ii) How?
   b) Determine coverage goals
      i) Are test-coverage goals set? Yes / No
         (1) Determine assumptions and prerequisites
   c) Rationale for the goals
      i) Reasonable?
   d) Determine the breadth of functional coverage planned in the test
      i) Determine the overall contribution
      ii) Justify features/functions that are not tested or not exhaustively tested
   e) Determine the scope of re-testing of existing functionalities on change to the current system
      i) Is comprehensive regression required? Yes / No
      ii) Adequate for a specific file set type (or dataset type or version)? Yes / No
   f) Determine the depth and/or thoroughness per function or feature
      i) Matches the risk associated with the test
      ii) Matches the percentage of detailed path coverage in the test
12) Test Entry Criteria
   a) Determine the prerequisites for the start of testing
      i) Are these clear, agreed on, and reasonable?
         (1) Is there an acceptance criterion ensuring the system is ready to test?
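Item 10(a)(iii) calls for improving the Comparer's diff function. One hedged sketch of such a diff: compare result rows field by field, treating float fields (such as an ANOVA value reported on both Samples View and Quantitation View) with a relative tolerance so floating-point noise is not flagged as a difference. The row layout and field names here are assumptions for illustration only:

```python
import math

def rows_differ(a: dict, b: dict, rel_tol: float = 1e-6) -> list[str]:
    """Return the field names where two result rows actually differ.

    Float fields are compared with a relative tolerance so harmless
    floating-point noise is not reported; all other fields are compared
    exactly.
    """
    diffs = []
    for field in sorted(set(a) | set(b)):
        va, vb = a.get(field), b.get(field)
        if isinstance(va, float) and isinstance(vb, float):
            if not math.isclose(va, vb, rel_tol=rel_tol, abs_tol=1e-12):
                diffs.append(field)
        elif va != vb:
            diffs.append(field)
    return diffs

# The same (hypothetical) protein exported from two views: the ANOVA value
# differs only in the last digit, so only the real mismatch is reported.
samples_view = {"protein": "P12345", "anova": 0.0123456789, "peptides": 7}
quant_view = {"protein": "P12345", "anova": 0.0123456788, "peptides": 6}
```

The tolerance would need tuning against the precision the Viewer actually emits; that is a calibration question for the test facility, not something the sketch decides.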
13) Test Completion Criteria
   a) Determine the criteria for completion
      i) Are these clear, agreed on, and reasonable?
      ii) Is the system ready for release with high confidence if the criteria are met? Yes / No
   b) Identify and determine compromises prior to release
      i) What pressures are encountered?
         (1) No bug-fixes for 2 weeks, no new releases, no new features, previous bugs still pending, etc.
         (2) How will the effects of these pressures be neutralized?
            (a) Expectation minimization, new product, diversification, more people, out-sourcing, sell-off, etc.
      ii) Will Tech Support be ready to handle released-product issues?
      iii) Are Users (licensees and evaluators) willing to hold until their data is debugged?
14) Test Automation Approach
   a) Identify and determine appropriate test tools
      i) Available to testers? Yes / No
      ii) Reliable? Yes / No
   b) Can existing automated test cases be re-used?
      i) How?
   c) Determine the degree of coverage of manual versus automated testing
      i) Is the division appropriate?
      ii) Identify sections of the test plan that could be made more efficient (manual to automated, or vice versa)
   d) Division of manual and automated testing
      i) Is this clear from the plan?
      ii) Determine the rationale for selecting either manual or automated testing
15) Project Work Plan and Schedule
   a) Is there a test work plan and schedule?
      i) Identify test tasks
      ii) Identify the inter-dependencies
      iii) Specific goals (deliverables)
      iv) Resource needs
      v) Milestone dates
   b) Test work plan flow
      i) Logical
      ii) Reasonable
      iii) Unfolds intuitively
   c) Sequence of test activities
      i) Correct, or approximately correct?
   d) Identify bottlenecks and constraints
      i) If not yet addressed, is there a plan to address these bottlenecks and constraints? Yes / No
      ii) Rationale
   e) Back-up / contingency plans
      i) Are alternate plans in place if success cannot be achieved? Yes / No
   f) Determine testing activity proportions
      i) Are the amounts of development and maintenance effort reasonable for the test activity?
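For item 14(b), re-use of existing automated test cases, a common table-driven pattern writes the check logic once and drives it over a table of cases, so covering a new file set means adding a row rather than a new script. Everything below (labels, data, expected counts) is illustrative, not drawn from the plan:

```python
# Each case row re-uses the same automated check body.
# (label, input rows, expected row count) -- illustrative data only.
CASES = [
    ("small_load", [4, 8, 15], 3),
    ("empty_input", [], 0),
    ("normal_load", list(range(100)), 100),
]

def check_row_count(rows, expected):
    """The shared test-case body: did processing keep the expected row count?"""
    return len(rows) == expected

def run_all(cases):
    """Drive the shared check over every case; return the labels that failed."""
    return [label for label, rows, expected in cases
            if not check_row_count(rows, expected)]
```

The same shape works in a proper test runner (e.g. parametrized tests), where each row becomes an individually reported test.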
      ii) Ditto for the degree of risk inherent in the test system?
      iii) Are time and cost estimates reasonably determined?
         (1) Criteria?
16) Test Plan Flexibility and Maintainability
   a) Determine contingency plans in case events do not unfold as anticipated during test implementation
   b) Identify mechanism(s) to learn and re-iterate from experience
      i) Revise the plan so work moves ahead seamlessly with the modified version during test execution
   c) Identify the individual(s) authorized to approve changes to system functionality and scope
17) Test Project Staffing
   a) State clearly the skills, knowledge, and expertise needed of the people involved or required
   b) Are these people available according to the plan schedule?
   c) Is out-sourcing needed?
      i) Are internal resources lacking?
      ii) Are consultants hirable?
   d) Identify possible resource conflicts
      i) Are projects competing for the same people? Yes / No
      ii) Is equipment shared between projects? Yes / No
      iii) Does the test plan address such conflicts? Yes / No
         (1) Is budget available to mitigate resource conflicts? Yes / No
            (a) Determine ways to increase the budget
               (i) Extramural grants
               (ii) Increase unit sales
               (iii) Increase price
      iv) Determine a strategy to resolve resource conflicts
18) Roles and Responsibilities
   a) Outline (and define) the respective roles and responsibilities of key personnel (teams and individuals)
      i) Have the people who will follow and execute the test plan reviewed what the expectations are? Yes / No
   b) Review expectations of the test objectives with those people (after reviewing with them)
      i) Ascertain their understanding of specific tasks
      ii) Ascertain their time demands, working conditions, deadlines, and reporting relationships (including unusual workload such as weekend work)
19) Usability of the Test Plan
   a) Identify the people who
      i) Will follow the plan
      ii) Will execute the test
   b) Identify the characteristics of these key people
      i) Available?
      ii) Have the necessary skills?
   c) Identify test success criteria
      i) What key people need to know to test successfully
         (1) Is this covered in the test plan? Yes / No
         (a) If not, what is the approach?
   d) Determine how the plan fits into the actual work of key personnel (how it is used)
      i) Why
      ii) When
      iii) How
   e) Review of the plan
      i) Have the people reviewed the test plan? Yes / No
      ii) What is their opinion?
         (1) Does it fill their needs, from their perspective?
         (2) Is it usable, from their perspective?
   f) Specificity of the plan
      i) Is the plan adequately specific to be implemented?
      ii) Will current and future people be able to proceed effectively?
20) Test Environment Preparation
   a) Test platforms: Linux/Unix, Mac OS, 32-bit and 64-bit Windows
   b) Representative of User hardware?
   c) Representative of User activity?
   d) Adjustments needed?
21) Project Tracking
   a) Test project status monitoring
   b) Assess on-track status | corrective actions | remedies
   c) Is test-effort progress reported manually or automatically?
   d) {Does the Test Plan cover the above points?}
22) Project Fit
   a) Determine whether the test project fits with overall system development and maintenance
   b) Tailor the project to suit the product(s) and work environment
23) Project Coordination
   a) Determine how test execution efforts are coordinated with
      i) Other testing activities
         (1) Problem reporting
         (2) Follow-up
         (3) Re-testing
      ii) Version control
      iii) Change management
      iv) Release management processes
24) Test Case Design and Organization
   a) Identify the test case design techniques used
      i) Verify the appropriateness of the selected techniques
   b) Test case traceability
      i) How are test cases traceable to
         (1) Features
         (2) Specifications
         (3) Functionalities
      ii) Cross-referenced
         (1) Within test cases
         (2) Within features
         (3) Within functionalities
   c) Are the test cases practical and feasible?
      i) Are a representative dataset and testGroup ready?
   d) Test compliance
      i) Complies with the standard format? Yes / No
   e) Cross-references
      i) To system requirements
      ii) To system performance
      iii) To system features
      iv) To system versions
      v) To data type
25) Test Execution
   a) Planned sequence or flow of test case execution
      i) From most critical to least critical
   b) Description of the test support processes used
      i) Problem reporting and resolution (these processes must be appropriate for the project)
26) Compliance with Standards
   a) Internal
   b) External
   c) Is the Test Plan under version control?
   d) Has the test plan been filed?

Testware Re-Use
   e) Is the majority of the test plan reusable? Yes / No
      i) Relevant sections
      ii) Designed for re-use
      iii) Flexible for re-use
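The traceability questions in 24(b) can be made concrete with a small coverage map from test-case IDs to the features each case exercises; features with no covering case then surface automatically. The feature names below come from the plan's own examples (Launch, Load, Samples View, Quantitation View), while the TC IDs are hypothetical:

```python
# Forward traceability: test case -> features it covers (IDs are hypothetical).
TRACEABILITY = {
    "TC-001": ["Launch", "Load"],
    "TC-002": ["Samples View"],
}

# The feature inventory the plan wants covered.
FEATURES = ["Launch", "Load", "Samples View", "Quantitation View"]

def uncovered(features, traceability):
    """Return the features not referenced by any test case."""
    covered = {f for feats in traceability.values() for f in feats}
    return [f for f in features if f not in covered]
```

The same mapping inverted (feature to test cases) answers the cross-reference questions in 24(b)(ii) without maintaining a second document.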