Risk Driven Testing

Software organizations that want to maximize the yield of Software Testing find that choosing the right testing strategy is hard, and most testing managers are ill-prepared for this. The organization has to learn how to plan testing efforts based on the characteristics of each project and the many ways the software product is to be used. This tutorial is intended for Software professionals who are likely to be responsible for defining the strategy and planning of the testing effort and managing it through its life cycle. These roles are usually Testing Managers or Project Managers.

Speaker Notes
  • The purpose of this webinar is to discuss issues that impact the effectiveness of IT organizations. Our discussion will be limited to IT Service Delivery (problem resolution, consultation requests, enhancements and projects). We will not be addressing Infrastructure or Operations Management issues.
  • Discuss these versus the class expectations, going over the notes from the introduction slide.
  • There are many more problems… see what students can add to the list. Other things that are often missing are the quality characteristics - what are the reliability requirements, the availability requirements, maintainability, portability, etc. What platforms are needed? What’s the key problem with today’s system that has to be addressed by this new one? What can go wrong if we don’t plan for these things in testing?
  • A project is a microcosm within a larger organization. Effective risk management must take into account the business environment in which the project operates. Many, if not most, projects fail not for technology or project management reasons, but because of larger organizational pressures that are typically ignored. These pressures come in many forms, such as competitive pressures, financial health, and organizational culture. Here is a sample list of risk sources and possible consequences. It is interesting to note that the elements of significant risk are not the same across all types of projects. Different types of projects face different kinds of risks and must therefore pursue entirely different forms of risk control. When you have only a limited amount of time for risk identification, you can use this list of categories to guide brainstorming of the risks to the project. For example, if you are working on a small project that will receive only minimal risk and review focus, you may spend just a few minutes considering the risks. Use the list of categories here to guide that time in a top-down approach to identifying the risks.
  • When faced with what to test, the crunch between the scarcity of resources and the need to provide comprehensive coverage forces the testing manager into a compromise. The best way between the horns of this dilemma is to find those aspects of the product that have the most impact on the business, a concept sometimes identified with “good enough”. A product might be defect free and not good enough, or defect plagued and good enough for its market. These CSFs are the quintessential element of a good testing plan.
  • What are the business drivers for the change? What will make the product a success or a failure? For example, if the business need is headcount reduction based on the quality of your interface, how can you test that the reduction could be achieved (not will be, because that is outside your scope)? In the above slide, discuss what features might be crucial to the success of the product.
  • You should research who the buyers of the product are. All products are expected to bring in positive changes that will eventually impact the bottom line. For some, this imperative is seen as a short-term goal. Is this your case? If so, how? Consider that sometimes the problem the product is expected to solve is one of administrative control. Does the product have the functionality to provide this? Is this functionality correct? Would the end users also see improvement from the installation of the system? How can you get the kinks out of the system before shipping it to them, so that this is true?
  • What good is a good system if it is not really solving a problem? Would you use eighteen-wheelers for urban transportation of letters and documents? Does that make them bad products? Conversely, would you use motorcycles to send fresh farm produce across the continental United States? Does that make motorcycles unfit for commercial applications? When you are testing, do you only test against requirements? Whose representation are you assuming that makes sense for the business? Remember that your role is not to check that the software runs, nor to prove it correct, but to show all aspects that the users will find objections to!
  • The testing manager has two dimensions to worry about: being effective, that is, detecting as many defects as possible, and being efficient, that is, doing this under a scarcity of resources. The scarcest resource is, of course, time. We have already discussed that testing is, by definition, always on the critical path. Therefore, the wise manager schedules critical tasks (by which we mean the testing tasks related to critical success factors) before others. The purpose of testing is to find defects, but an implied consequence of this is that these defects get fixed. In that sense, reporting is very much a critical skill of a good tester. One way to measure it is the time spent by developers in reproducing the defect when trying to fix it. This, and the other measures shown here, are just examples of goal-setting dimensions.
  • Link this plan to the Project Plan by the schedule constraints. Enter this under the Schedule Constraints sub-section. Describe the model being followed by the project: Simple Waterfall, Parallel Waterfall, Evolutionary, Prototyping, Spiral, etc. Enter this under the Project’s Lifecycle Model sub-section. Define the project’s tasks at a high level of granularity, in order to show the schedule dependencies of the testing tasks on the project’s tasks. Use your Testing Process now to interleave the testing tasks without tailoring them yet. Enter all this under the Project’s Work Breakdown Structure sub-section. You will have the opportunity later to refine or change the testing tasks, even drop some as you see fit. If known, enter the overall design architecture under the Project’s Design Architecture sub-section: whether the architecture is batch, event-driven, one-, two-, or three-tiered, etc. Discuss any shortcomings of the project that can have an impact on the business from the viewpoint of the testing team. Enter this under the Project’s Shortcomings sub-section.
  • There are many more problems… see what students can add to the list. Other things that are often missing are the quality characteristics - what are the reliability requirements, the availability requirements, maintainability, portability, etc. Can we test them? Should we? What platforms are needed? What’s the key problem with today’s system that has to be addressed by this new one?
  • The simile here is that testing, always on the critical path, will not be granted the time required to do a thorough job in all but the most mission-critical projects. However, it still has to do a “good-enough” job. Therefore, a large part of the strategy is to cleverly budget the time allotted to testing. Mind you, this is not a problem of testing resources: even with a very large number of testers you can have too little time to run a very large number of tests. Also, the nature of the process is that before you can run a large number of tests, the programs break down and you send them back to be fixed. This is, in fact, the limiting factor: how many defects can be fixed per unit of time? Since you will find ten times as many defects in the time it takes to correct one, starting early makes perfect sense. If you leave the testing till the end, when all the resources have been committed to delivering massive quantities of unusable functionality, the project is lost.
  • You cannot stress enough that quality cannot be tested into a product. Yes, you can test the kinks out of a product, but quality is a fundamental, quintessential, holistic characteristic. User-friendliness is not a requirement, it is a general statement. The (derived) requirement will have to be testable, as in number of buttons, number of clicks to get the job done, feedback received, time to do the job, etc. User friendliness is, surprisingly, very unfriendly to the tester. It isn’t even a usability statement! It probably, but not always, draws from usability, but performance and fitness of purpose are more important. You might want to have reliability numbers, but you can’t if you don’t have profiled scenarios of the usage, with probabilities attached.
  • It is time to think about pre-scheduling. Will this strategy fly? Mainly: will the people be available? Will there be time to perform the tests (and the fixes)? Will the model accommodate the strategy, or will you have to change the strategy to accommodate the model? For example, you have set a high coverage goal for the unit tests, and the architecture is an OO framework. Will you have to adjust the goals to fit the architecture? Will high scenario coverage suffice?
  • Risk action planning turns risk information into decisions and actions. Planning involves developing actions to address individual risks, prioritizing risk actions, and creating an integrated risk management plan. Here are four key areas to address during risk action planning: Research. Do we know enough about this risk? Do we need to study the risk further to acquire more information and better determine the characteristics of the risk before we can decide what action to take? Accept. Can we live with the consequences if the risk were actually to occur? Can we accept the risk and take no further action? Manage. Is there anything the team can do to mitigate the impact of the risk should the risk occur? Is the effort worth the cost? Avoid. Can we avoid the risk by changing the project approach?
  • A contingency plan provides a fallback option in case all efforts to manage the risk fail. For example, suppose a new release of a particular tool is needed so that software can be placed on some platform, but the arrival of the tool is at risk. We may want to have a plan to use an alternate tool or platform. Simultaneous development may be the only contingency plan that ensures we hit the market window we seek. Deciding when to start the second parallel effort is a matter of watching the trigger value for the contingency plan. To determine when to launch the contingency plan, the team should select measures of risk handling or measures of impact that they can use to determine when their mitigation strategy is out of control. At that point, they need to start the contingency plan.
  • Trigger values for the contingency plan can often be established based on the type of risk or the type of project consequence that will be encountered. Trigger values help the project team determine when they need to spend the time, money, or effort on their contingency plan, since mitigation efforts are not working.
  • The action plan addresses the risk in a way that allows us to apply resources or other assistance to remove the potential problem. The contingency action is our fallback plan, for the possibility that the action does not work. Here we see a case where there is probably no viable option other than the one being developed. If it doesn’t get to us on time, we may need to ship without the feature. The product may have other capabilities for which the customer needs the release on the original date planned, whether or not it has the Web interface.
  • Another way to think about it is to have the universe of test suites divided into mandatory test cases, supplementary test cases, and complementary test cases, and to have the suites ranked into “must run”, “good to run”, and “optional”.
  • Our focus is to help build effective business processes, leveraging the best products in the marketplace, to build solutions to customer problems quickly.

Risk Driven Testing: Presentation Transcript

  • Webinar: Risk Driven Testing May 5th, 2010 11:00 AM CST Please note: The audio portion of this webinar is only accessible through the telephone dial-in number that you received in your registration confirmation email.
  • Jorge Boria Senior VP International Process Improvement Liveware Inc. [email_address] Michael Milutis Director of Marketing Computer Aid, Inc. (CAI) [email_address]
  • About Presenter’s Firm Liveware is a leader among SEI partners, trusted by small, medium, and large organizations around the world to increase their effectiveness and efficiency by improving the quality of their processes. With an average of over 20 years’ experience in software process improvement, we know how to make our customers succeed. We partner with our clients by focusing on their bottom line and their short- and long-term business goals. With over 70 Introduction to CMMI classes delivered and 40 SCAMPI appraisals performed, you will not find a better consultant for your process improvement needs.
    • CAI is a global IT outsourcing firm currently managing active engagements with over 100 Fortune 1,000 companies and government agencies around the world.
    • CAI is a leader in IT Best Practices for legacy support and new development application management.
    • CAI’s focus is directed toward practical implementations that track and measure the right activities in software activity management.
    • CAI consistently promises and delivers double-digit productivity gains in its outsourcing and consulting engagements.
    • CAI makes all of this possible through the use of:
        • Standard processes
        • Management by metrics
        • SLA compliance management
        • Detailed cost, resource, and time tracking
        • Capacity management
        • Standard estimation
        • A unique, metrics based methodology along with a proprietary, real time data repository and management system (TRACER®).
    About Computer Aid, Inc. (CAI)
  • NOW AVAILABLE! ONLINE WEBINAR RECORDINGS, ANYTIME ACCESS! WWW.ITMPI.ORG/LIBRARY
  • Today’s Agenda
      • State testing goals and objectives
      • Identify risks with regard to product and project characteristics
      • State testing activities with acceptance criteria
      • Select a testing lifecycle to match the product's risks and the project's schedule
  • Our Objectives
    • Identify testing project risks and refine the plan to include mitigation and contingency activities
      • State testing goals and objectives
      • Identify risks with regard to product and project characteristics
      • State testing activities with acceptance criteria
      • Select a testing lifecycle to match the product's risks and the project's schedule
  • The Test Categories Map © [slide diagram]: a matrix mapping the test phases over time (UNIT, INTEGRATION, SYSTEM, ALPHA, USER ACCEPTANCE, BETA) against testing goals (construction, functionality, performance, usability, string, volume, integration, stress, error handling, readiness, configuration, memory leaks, regression) and coverage criteria (path, decision, statement, data flow coverage), shifting from crystal box toward black box techniques as time advances. Disclaimer: This is just indicative. Individual projects' needs vary.
  • The V Model Applied [slide diagram]: the V model pairs each development phase with its test counterpart. Acceptance Requirements (SRD) pair with UAT Test Planning and Preparation, leading to UAT Execution (SDS) and its Test Report; Acceptance Specifications (TSD) pair with System Test Planning and Preparation, leading to System Test Execution (SDS) and its Test Report; Coding (SDS) pairs with Unit Test Planning and Preparation, leading to Unit Test Execution (SDS). Hand Off of Developed Components (SDS), Phase End Reviews, Acceptance, and a Post Mortem Project Review complete the model.
  • Common Testing Problems
      • we ship before we have finished testing
      • our customers find most of our defects
      • we don’t have the testers when we need them
      • testing is ad-hoc and results are irreproducible
      • we don’t test for the critical success factors
      • we don’t test against requirements from the customer
      • there are no acceptance criteria for any phase and deciding when the product is ready becomes a point of contention
      • ...
  • Risk Management Preventable Problems
      • We anticipated lateness but accepted sending out an unfinished product
      • We have too many defects to fix too late in the cycle
      • We have too many tests to run in a short period of time
      • We have tested for the simple defects and the customer gets to test for the “killer” bugs
    • Risk Sources
      • Mission and goals
      • Decision drivers
      • Organization management
      • Customer / end user
      • Budget / cost
      • Schedule
      • Project parameters
      • Development process
      • Development environment
      • Personnel and relationships
      • Operational environment
      • New technology
    • Project Consequences (testing concerns)
      • Cost overruns
      • Schedule slips
      • Inadequate functionality
      • Canceled projects
      • Sudden personnel changes
      • Customer dissatisfaction
      • Loss of company image
      • Demoralized staff
      • Poor product performance
      • Legal proceedings
  • Critical Success Factors
    • A product’s critical success factors (CSFs) are related to:
      • business needs for its development
      • coverage of all its intended user constituencies
      • fitness for use in the intended context
      • lack of dissatisfiers
      • competitive advantage
        • price?
        • delighters?
  • CSF: Business Needs (Why?)
    • Typical business reasons for new systems or changes:
      • cost reduction
        • reducing personnel time in the field reduces costs
      • increased efficiency of work or resource
        • a better interface makes one person capable of doing the job of two
      • developing new markets
        • using a “smart card” allows very small business to get into the credit card market
      • improved resource management
        • electronic data management (EDM) systems allow insurance claims to be processed in hundreds of different locations, depending on work load
    • What Is The Customer Going To Get Out Of The Use Of The Product?
  • CSF: User Constituencies (Who?)
    • Answer the questions:
      • given the business needs, what goals will the product help achieve?
      • who will have to use the product (different user constituencies) to achieve those goals?
    • At least three levels of need:
      • need to increase the bottom line
        • typical user constituency: upper management
      • need to gather supervisory data
        • typical user constituency: middle management
      • need for an efficient interface
        • typical user constituency: end user
      • other?
  • CSF: Fitness for Use (How?)
    • “Fitness for use”
      • the product may meet all stated requirements but not support solving a real need or problem for a given user constituency
      • testing features does not cover the workflows
    • Test in the context of the job
    • Test the product as if you would have to perform the job yourself
    • Use your expertise in testing to extend the possibilities of use cases
  • High-Level Testing Goals
    • Effectiveness:
      • Defect detection
        • percentage of total found in some time frame
        • severity found after testing
    • AND
    • Efficiency:
      • Resource usage
        • time
        • people
        • machines
      • Reporting
        • time spent reproducing the defect by the developers
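
A minimal sketch of the two measures above, using hypothetical counts: effectiveness as Defect Detection Percentage (a common name for the share of known defects caught by testing), and reporting efficiency as the mean developer time to reproduce a reported defect. All names and numbers are illustrative assumptions.

    # Hypothetical counts; substitute your project's data.
    found_by_testing = 180     # defects the test team found (assumed)
    found_by_customers = 20    # defects that escaped to the field (assumed)

    # Effectiveness: share of all known defects caught before release.
    ddp = found_by_testing / (found_by_testing + found_by_customers)
    print(f"Defect Detection Percentage: {ddp:.0%}")  # 90%

    # Reporting efficiency: mean developer time to reproduce a defect
    # from the tester's report (lower means better defect reports).
    repro_minutes = [5, 12, 8, 45, 10]                # assumed samples
    print(f"Mean time to reproduce: {sum(repro_minutes) / len(repro_minutes):.1f} min")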
  • Risks from the Project
    • Risks from the project come from:
    • Project Plan Schedule Constraints
      • testing might be cut short if development is late
      • testing might have all its tasks in the critical path
    • Lifecycle being followed by the project
      • Simple Waterfall,
      • Parallel Waterfall,
      • Evolutionary,
      • Prototyping,
      • Spiral
    • Design Architecture
    • Resources
      • missing critical skills
      • budgetary shortcomings
  • Design Architecture
    • Figure out what the Architecture is going to be
      • Ask the designer
      • Request information on the design elements early
    • If not traditional, plan accordingly
      • Use scenarios profusely
      • Try having testers join teams early
      • Use testers' insight into the design to develop test cases
      • Push for updated documentation
      • Push for consistency reviews across documents
        • Requirements traceability
        • Formal reviews
  • Testing Deliverables
    • Planning Assets
      • Test Plan
    • Testing Assets
      • Testing procedures
      • Testing suites
      • Testing templates
    • Reporting Assets
      • Individual tests results
      • Test statistics
      • Acceptance report
  • Strategy Problems
    • Problems that faulty test strategies can cause
      • we spend too much time on just one phase
      • we place all our effort at the end of the project
      • we ignore regressions
      • we test in the wrong configuration
      • we receive work products that are unfit to test
      • we get into endless arguments about product fitness to ship
      • ...
    • Good testing strategies help!
  • Selecting a Strategy
    • Decide on
      • how much testing is required
      • of what type
      • when will it happen
      • who will do it
      • how will it happen, e.g.:
        • Will integration be top-down or bottom-up?
        • Will clear box be manual or automatic?
  • Identifying Test Tasks
    • Review the V-Model
    • Pick and choose the adequate phases to meet the goals
    • You have twenty days to visit all of Europe
    • You may never be able to go back
    • How do you budget your time?
    • Testing is always under-budgeted and over-committed
  • Defining a Strategy
    • Review and rework the business goals
    • Review project variables and rework the Testing Phases
      • Consider cost, time to market, personnel, product life expectancy, users, constraints
    • Emphasize the tasks that tie with the goals
      • try to probe weak areas of the project
    • Points to ponder:
      • regression (when, how much, which)
      • coverage (how much, what type, when)
      • deadlines (slipping deadlines, schedule compression)
      • integration (how, when, by whom)
      • assignment of responsibility (developers, testers, users)
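
One way to keep these points from staying rhetorical is to record each decision explicitly in the plan. A minimal sketch follows; every field value is a hypothetical example, not a recommendation.

    # Strategy decisions recorded as explicit answers to the points above.
    strategy = {
        "regression": {"when": "each build", "how_much": "changed modules", "which": "automated smoke suite"},
        "coverage": {"how_much": "80% statement", "type": "clear box", "when": "unit phase"},
        "deadlines": {"compression_plan": "drop complementary suites first"},
        "integration": {"how": "bottom-up", "when": "after unit acceptance", "by_whom": "developers"},
        "responsibility": {"unit": "developers", "system": "testers", "acceptance": "users"},
    }

    for point, decisions in strategy.items():
        print(point, "->", decisions)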
  • Test Strategies (1)
    • Analyze business case
      • where is the payoff?
        • think in terms of customer satisfaction
        • identify particular functionality or killer faults
    • Probe the quality of the product with regard to that payoff
    Note: Overriding assumption is that you will not have time to do it all
  • Test Strategies (2)
    • Analyze users
      • by frequency of use
        • sporadic, heavy, etc.
      • by organizational level
        • general managers, middle managers, end users, etc.
      • by their knowledge of the software
        • experts, newcomers, etc.
    • Build a profile of system usage
      • sketch scenarios
      • assign probabilities for each scenario
        • e.g., one out of eight times the plan will be run unchanged
    • Use this data to design the test cases to optimize testing coverage of most frequent paths
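
A minimal sketch of this step, with hypothetical scenario names and probabilities: a fixed test-case budget is split across scenarios in proportion to how often each occurs, so the most frequent paths receive the most coverage.

    # Usage profile: probability that a session follows each scenario.
    # Scenario names and probabilities are assumptions for illustration.
    profile = {
        "run_plan_unchanged": 0.125,  # e.g., one out of eight times
        "edit_then_run_plan": 0.500,
        "import_data": 0.250,
        "export_report": 0.125,
    }

    def allocate_tests(profile, budget):
        """Split a test-case budget across scenarios, weighted by probability."""
        assert abs(sum(profile.values()) - 1.0) < 1e-9, "probabilities must sum to 1"
        return {scenario: round(p * budget) for scenario, p in profile.items()}

    for scenario, n in allocate_tests(profile, budget=200).items():
        print(f"{scenario}: {n} test cases")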
  • Test Strategies (3)
    • Analyze time to market
      • are you going to be pressed for time?
        • => focus on existing functionality
        • => test system rather than component
        • => test typical rather than exceptional
    • Decide which tests can be run within the time constraints
      • If some of the fundamental tests cannot be run, move tests forward
  • Test Strategies (4)
    • Analyze cost
      • how many staff/hours can you pay?
    • Figure out if the tests you have so far selected fit within the budget
    • If so, and you have room for more…
    • Analyze constraints
        • performance
        • precision
        • volume
      • find critical quantities
      • analyze or set stress testing
    • … then include tests for them
  • Test Strategies (5)
    • Analyze personnel
      • who can you count on
        • reinforce testing of the products of the project’s weaker links
      • which of your documents are going to be weak for lack of experts
        • requirements or specs <=> functional tests
        • high-level design <=> integration
        • modules <=> module testing
  • Test Strategies (6)
    • Analyze product life expectancy
      • find the payoff of documented procedures, test case suites, results, etc.
        • don't overspend on a product that has a short life expectancy
        • don't underspend on a product that will be around for a long time
  • Example Strategy (1)
    • Business goal
      • Keep and expand customer base
    • Internal translation to project
      • Make sure that the user finds the entire current version’s functionality works as usual
    • Testing strategy
      • Test old before new, in every phase
  • Example Strategy (2)
    • Business goal
      • Exceed customer’s expectations of product quality
    • Internal translation to project
      • Fewer than 10% of total bugs caught by the user
    • Testing strategy
      • Move functional test suites to developers before and during unit code and testing
  • Acceptance Criteria
    • For each test selected, define:
      • The environment the tests must have run on
      • Regression suites covered by the tests
      • Functionality covered by the tests
      • Performance testing goals reached
      • Volume Testing goals achieved
      • Reliability Testing (when applicable)
      • Usability Testing (when applicable)
    • How can we tell that the work product is safe to release to the user?
  • System Acceptance Criteria
    • Probably only main deployment environment
      • but NOT the development environment only
    • Usually all regression suites for the entire test set
      • usually automated
    • Most, if not all, functionality
      • focus on functionality not tested or changed after unit code
    • Performance Testing, Volume Testing
      • goals MUST be met, no excuses
      • deployment environment used in tests
    • Reliability Testing
      • if applicable, statistically controlled
    • Usability Testing
      • if critical, or if it leaves development organization
  • User Acceptance Test
    • Purpose is to detect remaining defects
      • focus might change
    • Different things to different people
      • another level of system acceptance
      • emphatically focused on usability
      • performed by users exclusively
      • performed by user proxies, exclusively
      • performed by the IV&V of the organization
      • other? ...
    • Which is yours?
  • User Acceptance Criteria
    • Some general rules
      • Cover all deployment environments
      • Only do regression if regression suites are different
        • or significant fixes and / or changes have happened after system acceptance
      • Functionality focus should be on usual rather than exceptional
      • Performance Testing should focus on throughput
      • Volume Testing might be skipped
        • unless significant fixes and / or changes have happened after system acceptance
      • Reliability Testing
        • only when thoroughly planned from day one
      • Usability Testing
        • probably the most emphatic effort goes here
  • Limiting the Testing Effort (1)
    • Some things cannot be tested
      • quality
      • user-friendliness
      • timeliness
  • Limiting the Testing Effort (2)
    • Some things you might not want to test
      • regression on a new or relatively small enhancement
      • performance
      • stress
  • Limiting the Testing Effort (3)
    • Some things you might not have the ability to test
      • reliability (e.g. MTTF)
      • availability (e.g. MTTF / (MTTF + MTTR))
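
As a worked illustration of why these numbers are usually out of a test team's reach: computing availability is trivial once MTTF and MTTR are known, but estimating them credibly requires long field exposure. The figures below are assumptions.

    # Assumed field figures; a test lab can rarely observe these directly.
    mttf_hours = 720.0  # mean time to failure (30 days), assumption
    mttr_hours = 4.0    # mean time to repair, assumption

    availability = mttf_hours / (mttf_hours + mttr_hours)
    print(f"Availability: {availability:.4%}")  # about 99.45%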
  • Constraints on the Lifecycle
    • Review the phases against the Project’s constraints
      • Can you accommodate the project plan schedule constraints?
      • Is the model being followed by the project
        • Simple Waterfall,
        • Parallel Waterfall,
        • Evolutionary,
        • Prototyping,
        • Spiral
      • allowing for the testing tasks you’ve set?
        • e.g. might not have integration testing
      • Can the tests selected be adjusted to the design architecture?
        • three tiers might bring a whole set of problems
      • Are the goals compatible with the project’s shortcomings?
  • Risk Action Planning
    • Deal with high exposure risks first
      • Research: Do we know enough about this risk?
      • Accept: Can we live with it and do nothing about it?
      • Manage: Can we take action?
      • Avoid: Should we cancel the project or change the approach?
    • Balance the threat of the risk against the effort to avert it
      • How great is the threat?
      • How much does it cost to avert?
  • Risk Contingency Plans
    • Devise contingency plans for
      • High exposure risks, in case the mitigation strategy fails
      • Any risk for which there is no possible mitigation action
    • Specify risk measures and trigger values
      • Measures of time, resources to handle risk
      • Measures of risk impact
      • Trigger values that tell you it is time to use the contingency approach now
    • Agree with customer and management at project start how contingency plans will be funded and handled
  • Example Contingency Triggers
    • For risks leading to schedule slips
      • Latest date to allow you to use alternative platforms
      • Latest date to select another vendor
    • For risks requiring additional effort or time
      • Latest date to have time to locate the resources
      • Greatest amount of penalty or fine to incur
      • Greatest amount of investment available for overrun
    • Limit for extra cost to the customer
    • Limit for learning time
  • Example of Action and Contingency (1)
    • Risk Statement
      • Since the project team is already behind schedule by two full weeks, we might not have time to cover all the high yield tests before the cutover date and the quality of the product will be seriously imperiled.
    • Risk Action
      • Provide one of our test engineers as a testing consultant to the development team, so they can test and fix before they send the product to the Testing Team.
  • Example of Action and Contingency (2)
    • Contingency Plan
      • If by the first of April, we do not have an integration test version of the product, drop the web testing suites and focus on regression so our customer can use a significantly improved current version for six months without a Web interface.
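
A minimal sketch of watching this trigger, using the first-of-April date from the example; the readiness flag and the check itself are hypothetical.

    from datetime import date

    TRIGGER_DATE = date(2010, 4, 1)  # latest date from the contingency plan

    def contingency_triggered(today, integration_build_ready):
        """True when mitigation is out of control and the fallback must start."""
        return today >= TRIGGER_DATE and not integration_build_ready

    # Hypothetical status: no integration test version by April 2nd.
    if contingency_triggered(date(2010, 4, 2), integration_build_ready=False):
        print("Trigger hit: drop the web testing suites, focus on regression.")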
  • Example Contingency for Lateness
    • Plan your testing activities in passes
      • First Pass (mandatory):
        • test all modules/components
        • use most frequently used scenarios
        • use few test cases
        • use selected error cases
      • Second Pass (supplementary):
        • test only modules with fixes, do first pass on components
        • cover most scenarios
        • use test cases covering all data equivalence classes
        • include test cases for “bad values”
      • Third Pass (complementary):
        • test only modules with fixes from second pass
        • cover all test suite
        • go for 100% clear box coverage
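
A minimal sketch of planning the passes as ranked suites, matching the mandatory / supplementary / complementary split above; suite names and hour estimates are hypothetical. Mandatory suites always run; lower ranks run only while time remains.

    from dataclasses import dataclass

    @dataclass
    class Suite:
        name: str
        rank: str  # "mandatory" | "supplementary" | "complementary"
        est_hours: float

    SUITES = [
        Suite("all_modules_frequent_scenarios", "mandatory", 16),
        Suite("selected_error_cases", "mandatory", 8),
        Suite("equivalence_classes", "supplementary", 24),
        Suite("bad_values", "supplementary", 12),
        Suite("full_suite_clear_box", "complementary", 40),
    ]

    def plan(suites, hours_left):
        """Pick suites in rank order until the time budget runs out."""
        order = {"mandatory": 0, "supplementary": 1, "complementary": 2}
        chosen = []
        for suite in sorted(suites, key=lambda s: order[s.rank]):
            if suite.rank == "mandatory" or suite.est_hours <= hours_left:
                chosen.append(suite)
                hours_left -= suite.est_hours
        return chosen

    print([s.name for s in plan(SUITES, hours_left=50)])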
  • Summary
    • Key Activities in Defining the Testing Strategy
      • identify test tasks and their goals
      • rank test tasks by their goals
      • define the limits of the testing effort
      • detail testing tasks
      • define testing tasks entry criteria
      • define testing tasks acceptance criteria
    • Outputs to the process include
      • individual test task goals
      • phase acceptance criteria
      • detailed testing project tasks
      • all in the updated test plan
  • Questions? Your PDU CODE: S010-ITMPI0xxxx
    • CAI Sponsors The IT Metrics & Productivity Institute:
        • Clearinghouse repository of best practices: WWW.ITMPI.ORG
        • Weekly educational newsletter: WWW.ITMPI.ORG/SUBSCRIBE
        • Weekly webinars hosted by industry leaders: WWW.ITMPI.ORG/WEBINARS
        • ACCESS WEBINAR RECORDINGS ANYTIME AT WWW.ITMPI.ORG/LIBRARY
        • Online Expert Training Through CAI University (CAIU): WWW.CAIU.COMPAID.COM
        • Follow us on TWITTER at WWW.TWITTER.COM/ITMPI
    • Software Best Practices Conferences Around the World
    WWW.ITMPI.ORG/EVENTS
    Spring 2010: Feb. 23 Tampa, FL; Mar. 18 San Antonio, TX; Mar. 23 Philadelphia, PA; Mar. 30 El Segundo, CA; Apr. 15 Philadelphia, PA; Apr. 20 Detroit, MI; Apr. 29 Chicago, IL; May 4 Trenton, NJ; May 11 New York, NY; May 20 Albany, NY; May 25 Toronto, ON
    Fall 2010: Sep. 14 Baltimore, MD; Sep. 21 Sydney, AU; Sep. 28 Detroit, MI; Oct. 7 Tallahassee, FL; Oct. 13 Orlando, FL; Oct. 21 Philadelphia, PA; Nov. 16 Miami, FL
  • Jorge Boria Senior VP International Process Improvement Liveware Inc. [email_address] Michael Milutis Director of Marketing Computer Aid, Inc. (CAI) [email_address]