© Copyright GlobalLogic 2009
Connect. Collaborate. Innovate.
ISTQB
Test Management
By - Portia Gautam
Internal 2011
Objectives
 Importance of independent testing
 Test Plan, Estimate and Strategies
 Test Progress Monitoring and Control
 Configuration Management
 Risk and Testing
 Incident Management
Independent and integrated testing
Independent Tester/Team
 Can see other and different defects than a tester working within a programming team
 Brings a different set of assumptions to testing and to reviews, which helps expose hidden
defects and problems
 Reports issues honestly to senior management
Issues in an Integrated Test Team
 Other project stakeholders might see the independent test team - rightly or wrongly - as a
bottleneck and a source of delay.
 Some programmers abdicate their responsibility for quality, saying, 'Well, we have this test
team now, so why do I need to unit test my code?'
 It's especially important for testers and test managers to understand the mission they serve
and the reasons why the organization wants an independent test team.
 The entire test team must realize that, whether they are part of the project team or
independent, they exist to provide a service to the project team.
There is no one right approach to organizing testing
Things to be considered for an independent test team:
 Project size, the application domain, and the levels of risk, among other factors.
 As the size, complexity, and criticality of the project increases, it is good to have independence in
later levels of testing (like integration test, system test and acceptance test)
 Some testing is often best done by other people such as project managers, quality managers,
developers, business and domain experts or infrastructure or IT operations experts.
Working as a Test Leader
 Plan, monitor and control the testing activities and tasks
 Devise the test objectives, organizational test policies, test strategies and test plans
 Estimate the testing activities and negotiate with management to acquire the necessary resources
 Recognize when test automation is appropriate
 Plan the effort, select the tools, and ensure training of the team
 Consult with other groups - e.g., programmers - to help them with their testing
 Lead, guide and monitor the analysis, design, implementation and execution of the test cases,
test procedures and test suites
 Ensure proper configuration management of the testware produced and traceability of the tests
to the test basis
 Make sure the test environment is put into place before test execution and managed during
test execution
 Schedule the tests for execution and monitor, measure, control and report on the test progress,
the product quality status and the test results, adapting the test plan
 Adjust the testing activities when required
 Finally, write summary reports on test status
Working as a Tester
 Analyze and review requirements specifications and contribute to test plans
 Identify test conditions and create test designs, test cases, test procedure
specifications and test data
 Help in automating the tests
 Set up the test environments or assist system administration and network management
staff in doing so
 Execute and log the tests, evaluate the results and document problems found
 Monitor the testing and the test environment, often using tools
 Gather performance metrics
 Review each other's work, including test specifications, defect reports and test results
Defining the skills test staff need
 Good test teams should have the right mix of skills based on the tasks and activities they
need to carry out
 People outside the test team who are in charge of test tasks need the right skills, too
 People involved in testing need basic professional and social qualifications such as
 Ability to prepare and deliver written and verbal reports
 Ability to communicate effectively, and so on
Three main areas a tester should be aware of:
 Application or business domain: A tester must understand the intended behavior, the
problem the system will solve, the process it will automate
 Technology: A tester must be aware of issues, limitations and capabilities of the chosen
implementation technology
 Testing: A tester must know the testing topics in order to effectively and efficiently carry
out the test tasks assigned
The specific skills in each area and the level of skill required vary by project, organization,
application, and the risks involved
Test Plan, Estimates and Strategies
A test plan is the project plan for the testing work to be done
Reasons for writing a test plan:
 It guides our thinking
 It forces us to confront the challenges that await us
 It focuses us on important topics
 It serves as a vehicle for communicating with other members of the project team, testers, peers,
managers and other stakeholders
 It becomes a record of previous discussions and agreements between the testers and the rest of
the project team
 Its content depends on a number of factors - the level, targets and objectives of the testing we're
setting out to do.
 These activities should happen concurrently and ideally during the planning period for the
overall project
 Helps us manage change
 As the project progresses, plans are revised
 Written test plans give us a baseline against which to measure such revisions and
changes
 Updating the plan at major milestones helps keep testing aligned with project needs
 Adjustments are made to plans based on the end results
What to do While Planning Tests
To understand the purpose of testing, we need answers to questions such as:
 What is in scope and what is out of scope for this testing effort?
 What are the test objectives?
 What are the important project and product risks?
 What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?
 What is most critical for this product and project?
 Which aspects of the product are more (or less) testable?
 What should be the overall test execution schedule and how should we decide the order
in which to run specific tests?
 Decide how to split the testing work into various levels
 Integrate and coordinate between test levels
 Integrate and coordinate all the testing work to be done with the rest of the project
The factors to consider in such decisions are often called 'entry criteria' and 'exit criteria.'
For such criteria, typical factors are:
 Acquisition and supply: availability of staff, tools, systems and others required
 Test items: the state that the items to be tested must be in to start and to finish testing
 Defects: the number known to be present, the arrival rate, the number predicted to
remain, and the number resolved
 Tests: the number run, passed, failed, blocked, skipped, and so forth
 Coverage: the portions of the test basis, the software code or both that have been tested
and which have not
 Quality: the status of the important quality characteristics for the system
 Money: the cost of finding the next defect in the current level of testing compared to the
cost of finding it in the next level of testing (or in production)
 Risk: the undesirable outcomes that could result from shipping too early (such as latent
defects or untested areas) - or too late (such as loss of market share)
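Once the factors above are measured, such criteria can be checked mechanically. The sketch below is a hypothetical illustration - the metric names and threshold values are invented for the example, not prescribed by ISTQB:

```python
# Illustrative sketch: evaluating exit criteria for a test level.
# All thresholds here are hypothetical, not ISTQB-mandated values.

def exit_criteria_met(metrics: dict) -> list:
    """Return the list of unmet exit criteria (an empty list means ready to exit)."""
    runs = max(metrics["tests_run"], 1)  # guard against division by zero
    criteria = {
        "all planned tests run": metrics["tests_run"] >= metrics["tests_planned"],
        "pass rate >= 95%": metrics["tests_passed"] / runs >= 0.95,
        "no open critical defects": metrics["open_critical_defects"] == 0,
        "requirements coverage >= 90%":
            metrics["requirements_covered"] / metrics["requirements_total"] >= 0.90,
    }
    return [name for name, met in criteria.items() if not met]

status = exit_criteria_met({
    "tests_planned": 200, "tests_run": 200, "tests_passed": 192,
    "open_critical_defects": 1,
    "requirements_covered": 46, "requirements_total": 50,
})
print(status)  # -> ['no open critical defects']
```

In practice the test manager reports the unmet criteria rather than mechanically blocking the release; the exit decision remains a stakeholder judgment.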
Estimation Techniques
There are two techniques for estimation covered by the ISTQB Foundation Syllabus.
1. Consulting the people who will do the work and other people with expertise on the tasks to
be done –
 Working with experienced staff members to develop a work-breakdown structure for
the project
 Understand the effort, duration, dependencies, and resource requirements for each
task
 Using a tool such as Microsoft Project or a whiteboard and sticky notes, the testing
end-date can be predicted. This technique is often called 'bottom-up' estimation (start
at the lowest level - the task - and let the duration, effort, dependencies and resources
for each task add up across all the tasks)
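The bottom-up roll-up described above can be sketched in a few lines. The task names, efforts and dependencies here are invented purely for illustration:

```python
# Bottom-up estimation sketch: effort and duration roll up from individual
# tasks in a work-breakdown structure. All task data is invented.

tasks = {
    "design test cases":    {"effort_days": 10, "after": []},
    "build test data":      {"effort_days": 4,  "after": ["design test cases"]},
    "set up environment":   {"effort_days": 3,  "after": []},
    "execute test cycle 1": {"effort_days": 8,  "after": ["build test data", "set up environment"]},
    "execute test cycle 2": {"effort_days": 6,  "after": ["execute test cycle 1"]},
}

def finish_day(name, memo={}):
    """Earliest finish of a task: its effort plus the latest finish of its predecessors."""
    if name not in memo:
        t = tasks[name]
        start = max((finish_day(d) for d in t["after"]), default=0)
        memo[name] = start + t["effort_days"]
    return memo[name]

total_effort = sum(t["effort_days"] for t in tasks.values())   # person-days
duration = max(finish_day(n) for n in tasks)                   # calendar days on the critical path
print(total_effort, duration)  # -> 31 28
```

Note how effort (31 person-days) and duration (28 days) differ: independent tasks such as environment setup can run in parallel with test design.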
2. Analyzing metrics from past projects and from industry data –
 The simplest approach is to ask, 'How many testers do we typically have per developer on
a project?'. It involves:
 Classifying the project in terms of size (small, medium or large)
 Complexity (simple, moderate or complex)
 Seeing on average how long projects of a particular size and complexity combination
have taken in the past
The tester-to-developer ratio is an example of a top-down estimation technique, in that the
entire estimate is derived at the project level
 Another is to look at the average effort per test case in similar past projects and to use the
estimated number of test cases to estimate the total effort. Check the historical or
industry averages for certain key parameters –
 Number of tests run by tester per day
 Number of defects found by tester per day, etc.
The parametric technique is bottom-up when it is used to estimate individual tasks or activities.
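A minimal sketch of the top-down/parametric arithmetic follows. The ratios and averages are invented placeholders; real values would come from your own project history or industry data:

```python
# Top-down / parametric estimation sketch using historical averages.
# Every number below is an invented placeholder for illustration.

testers_per_developer = 1 / 3        # historical tester:developer ratio (top-down)
developers_on_project = 12
testers_needed = developers_on_project * testers_per_developer

estimated_test_cases = 400
avg_effort_hours_per_test_case = 1.5  # design + execution, from past projects (parametric)
total_effort_hours = estimated_test_cases * avg_effort_hours_per_test_case

tests_run_per_tester_per_day = 15     # historical execution rate
execution_days = estimated_test_cases / (testers_needed * tests_run_per_tester_per_day)

print(testers_needed, total_effort_hours, round(execution_days, 1))
```

The first calculation is top-down (the whole estimate derives from one project-level ratio); the last is the parametric technique applied bottom-up to the execution activity.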
Factors Affecting Test Effort
 The test strategies or approaches to be used
 Product factors:
 The importance of non-functional quality characteristics such as usability, reliability,
security, performance, etc. influences the testing effort
 Complexity of the project
 Process factors:
 Availability of test tools, especially those that reduce the effort associated with test
execution, which is on the critical path for release
 The life cycle itself is an influential process factor, i.e., which process model is being used
 Time pressure is another factor to be considered
 The test results themselves influence the total amount of test effort during test execution
Test Approaches or Strategies
Major types of test strategies are:
 Risk-based (analytical): Performing a risk analysis using project documents and stakeholder
input, then planning, estimating, designing, and prioritizing the tests based on risk.
 Requirements-based (analytical): Requirements are analyzed for planning, estimating
and designing tests.
 Model-based: Creation or selection of some formal or informal model for critical system
behaviors, usually during the requirements and design stages of the project.
 For example, use of mathematical models for loading and response for e-commerce
servers, and testing against those models.
 Methodical: A pre-planned, systematized approach that has been developed in-house,
assembled from various concepts developed in-house and gathered from outside
 For example, you might have a checklist that you have put together over the years that
suggests the major areas of testing to run or you might follow an industry-standard for
software quality, such as ISO 9126, for your outline of major test areas.
 Process- or standard-compliant: Use an externally developed approach to testing with little
customization
 For example, adopt the IEEE 829 standard for your testing. Alternatively, you might
adopt one of the agile methodologies such as Extreme Programming.
 Dynamic: Dynamic strategies, such as exploratory testing, concentrate on finding as many
defects as possible during test execution and they typically emphasize the later stages of
testing.
 Consultative or directed: Rely on a group of non-testers to guide or perform the testing
effort; such strategies typically emphasize the later stages of testing, simply due to the lack
of recognition of the value of early testing.
 For example, you might ask the users or developers of the system to tell you what to
test or even rely on them to do the testing.
 Regression-averse: A set of procedures - usually automated - that allow the team to detect
regression defects. Functional tests are automated prior to release of the function.
 For example, you might try to automate all the tests of system functionality so that,
whenever anything changes, you can re-run every test to ensure nothing has broken.
Some of these strategies are more preventive, others more reactive.
 Analytical test strategies involve upfront analysis of the test basis, and tend to identify
problems in the test basis prior to test execution. This allows the early - and cheap - removal of
defects. That is a strength of preventive approaches.
 Dynamic test strategies focus on the test execution period. Such strategies allow the location
of defects and defect clusters that might have been hard to anticipate until you have the actual
system in front of you. That is a strength of reactive approaches.
Strategies should not be applied as an either/or choice; there is no one best way.
How does one identify which strategies to pick or blend for the best chance of success?
There are many factors; a few of the most important are:
 Risks: Testing is about risk management, so consider the risks and the level of risk.
 For a well-established application that is evolving slowly, regression is an
important risk, so regression-averse strategies make sense.
 For a new application, a risk analysis may reveal different risks if you pick a risk-based
analytical strategy.
 Skills: Consider which skills your testers possess and lack. A standard-compliant strategy
is a smart choice when you lack the time and skills in your team to create your own
approach.
 Objectives: If the objective is to find as many defects as possible with a minimal amount
of up-front time and effort invested - then a dynamic strategy makes sense.
 Regulations: Sometimes you must satisfy not only stakeholders, but also regulators.
 In this case, devise a methodical test strategy that satisfies these regulators that
you have met all their requirements.
 Product: Some products such as weapons systems and contract-development software
tend to have well-specified requirements. This leads to synergy with a requirements-
based analytical strategy.
 Business: Business considerations and business continuity are often important. If a
legacy system can be used as a model for a new system, you can use a model-based
strategy.
Test Progress Monitoring and Control
Test monitoring serves various purposes during the project:
 Give feedback on how the testing work is going, allowing opportunities to guide
and improve the testing and the project
 Provide visibility about the test results
 Measure the status of the testing, test coverage and test items against the exit
criteria to determine whether the test work is done
 Gather data for use in estimating future test efforts
Test progress monitoring techniques vary considerably depending on:
 The preferences of the testers and stakeholders
 The needs and goals of the project
 Regulatory requirements, time and money constraints and other factors
Test progress monitoring is about gathering detailed test data
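As a rough illustration of the kind of detailed test data gathered, one might track a daily snapshot such as the one below. The field names and numbers are invented, not a standard schema:

```python
# Minimal sketch of daily test progress data a test manager might track.
# Fields and values are illustrative only.

from dataclasses import dataclass

@dataclass
class DailyStatus:
    date: str
    tests_planned: int
    tests_run: int
    tests_passed: int
    defects_open: int

    @property
    def progress(self) -> float:
        """Fraction of planned tests executed so far."""
        return self.tests_run / self.tests_planned

    @property
    def pass_rate(self) -> float:
        """Fraction of executed tests that passed."""
        return self.tests_passed / self.tests_run if self.tests_run else 0.0

today = DailyStatus("2011-05-10", tests_planned=300, tests_run=180,
                    tests_passed=171, defects_open=14)
print(f"{today.progress:.0%} run, {today.pass_rate:.0%} pass, {today.defects_open} open defects")
```

Plotting such snapshots over time gives the trend charts (tests run vs. planned, defect arrival and closure) commonly used in status reporting.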
Reporting Test Status
 Reporting test status is about effectively communicating our findings to other project
stakeholders
 It is about helping project stakeholders understand the results of a test period, especially as it
relates to key project goals and whether (or when) exit criteria were satisfied
 It involves analyzing the information and metrics available to support conclusions,
recommendations, and decisions about how to guide the project forward or to take other
actions. For example:
 we might estimate the number of defects remaining to be discovered
 present the costs and benefits of delaying a release date to allow for further testing
 assess the remaining product and project risks and
 offer an opinion on the confidence the stakeholders should have in the quality of the
system under test
 On some projects, the test team must create a test summary report. Such a report, created
either at a key milestone or at the end of a test level, describes the results of a given level or
phase of testing
Test Control
It is about taking guiding and corrective actions to achieve the best possible outcome for the project
Consider the following hypothetical examples:
 A portion of the software under test will be delivered late, after the planned test start
date. Market conditions dictate that we cannot change the release date. Test control
might involve re-prioritizing the tests so that we start testing against what is available
now
 For cost reasons, performance testing is normally run on weekday evenings during
off-hours in the production environment. Due to unanticipated high demand for your
products, the company has temporarily adopted an evening shift that keeps the
production environment in use 18 hours a day, five days a week. Test control might
involve rescheduling the performance tests for the weekend
In such cases, the team has to take other actions. For example, suppose that the test
completion date is at risk due to a high number of defect fixes that fail confirmation testing in
the test environment. Here, test control might involve requiring the programmers making the
fixes to thoroughly retest the fixes prior to checking them in to the code repository for
inclusion in a test build
Configuration Management
 It determines what the items are that make up the software or system. These items include:
 Source code
 Test scripts
 Third-party software
 Hardware
 Data and both development and test documentation
 It also ensures that these items are managed carefully, thoroughly and attentively throughout
the entire project and product life cycle
 It supports the build process, which is essential for delivery of a test release into the test
environment
 It allows us to map what is being tested to the underlying files and components that make it
up
 For example, when we report defects, we need to report them against something,
something which is version controlled.
 Ideally, when testers receive an organized, version-controlled test release from a
change-managed source code repository, it is accompanied by a test item transmittal
report or release notes.
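To illustrate reporting defects against something version-controlled, a test release record might look like the sketch below. All identifiers (build id, tags, revisions, component names) are invented for the example:

```python
# Sketch: recording which version-controlled items make up a test release,
# so every defect can be reported against exact versions. All identifiers
# are hypothetical.

test_release = {
    "build_id": "app-2011.05-b47",
    "items": {
        "source":       {"repo_tag": "release/2011.05", "revision": "r4812"},
        "test_scripts": {"repo_tag": "tests/2011.05",   "revision": "r4790"},
        "third_party":  {"payment_lib": "2.3.1"},
        "test_data":    {"snapshot": "td-2011-05-02"},
    },
}

def defect_context(release: dict) -> str:
    """Version context to attach to every defect report from this build."""
    return f"found in {release['build_id']} (source {release['items']['source']['revision']})"

print(defect_context(test_release))  # -> found in app-2011.05-b47 (source r4812)
```

In practice this information would live in the configuration management tool and appear in the test item transmittal report or release notes rather than in ad hoc code.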
Testing & Risks
Risks are classified into:
1. Product risks - factors relating to what is produced by the work, i.e. the thing we are testing
The possibility that the system or software might fail to satisfy some reasonable customer, user,
or stakeholder expectation. Unsatisfactory software might:
 Omit some key function that the customers specified, the users required or the
stakeholders were promised
 Be unreliable and frequently fail to behave normally
 Fail in ways that cause financial or other damage to a user or the company that user
works for
 Have problems related to a particular quality characteristic, like security, reliability,
usability, performance etc
Risk-based testing starts with product risk analysis. Techniques used include:
 Thoroughly understand the requirements and design specifications, user documentation
and other items.
 Brainstorming with many of the project stakeholders.
 Sequence of one-on-one or small-group sessions with the business and technology
experts in the company.
Some people use all these techniques when they can.
Risk-based testing is the idea that we can organize our testing efforts in a way that reduces the
residual level of product risk when the system ships.
 It uses risk to prioritize and emphasize the appropriate tests during test execution
 It starts early in the project, identifying risks to system quality, and uses that analysis to
guide test planning, specification, preparation and execution.
 It includes both mitigation - testing to provide opportunities to reduce the likelihood of
defects, especially high-impact defects - and contingency - testing to identify
workarounds to make the defects that do get past us less painful.
 It involves measuring how well we are doing at finding and removing defects in critical
areas
To understand specific risks, consider an example - an incorrect calculation in a calculator:
 Consider incorrect addition. This is a high-impact kind of defect, as everyone who
uses the calculator will see it. It is unlikely, since addition is not a complex
algorithm.
 Contrast that with an incorrect sine calculation. This is a low-impact kind of
defect, since few people use the sine function on the Windows calculator. It is
more likely to have a defect, though, since sine functions are hard to calculate.
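The likelihood-and-impact reasoning in the calculator example can be turned into a simple prioritization scheme: score each risk item and test the highest scores first. The 1-5 scales and the third risk item below are assumptions for illustration, not an ISTQB requirement:

```python
# Risk-based prioritization sketch: priority = likelihood x impact.
# The 1-5 scales and the specific ratings are illustrative assumptions.

risks = [
    {"item": "incorrect addition",         "likelihood": 1, "impact": 5},
    {"item": "incorrect sine calculation", "likelihood": 4, "impact": 2},
    {"item": "crash on divide by zero",    "likelihood": 3, "impact": 4},  # hypothetical extra item
]

for r in risks:
    r["priority"] = r["likelihood"] * r["impact"]

# Highest-priority risks get the earliest and most thorough testing.
for r in sorted(risks, key=lambda r: r["priority"], reverse=True):
    print(r["item"], r["priority"])
```

Note how the scheme captures the text's point: incorrect addition is high-impact but unlikely, incorrect sine is likely but low-impact, and neither automatically tops the list.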
To discover project risks, ask yourself and other project participants and stakeholders:
 'What could go wrong on the project to delay or invalidate the test plan, the test
strategy and the test estimate?'
 'What are unacceptable outcomes of testing or in testing?'
 'What are the likelihoods and impacts of each of these risks?'
The process is similar to the risk analysis process for products. Checklists and examples can help in
identifying test project risks
For any risk, product or project, there are four typical options:
 Mitigate: Take steps in advance to reduce the likelihood and impact of the risk
 Contingency: Have a plan in place to reduce the impact
 Transfer: Convince some other member of the team or project stakeholder to
reduce the likelihood or accept the impact of the risk
 Ignore: Do nothing about the risk, which is usually a smart option only when
there's little that can be done or when the likelihood and impact are low
2. Project Risks - factors relating to the way the work is carried out, i.e. the test project
Some typical risks along with some options for managing them:
 Logistics or product quality problems that block tests
 Test items that won't install in the test environment
 Excessive change to the product that invalidates test results or requires updates to test
cases, expected results and environments
 Insufficient or unrealistic test environments that yield misleading results
 Organizational issues such as shortages of people, skills or training, problems with
communicating and responding to test results
 Supplier issues such as problems with underlying platforms or hardware
 Technical problems related to ambiguous, conflicting or unprioritized requirements, an
excessively large number of requirements given other project constraints
 High system complexity and quality problems with the design, the code or the tests
Incident Management
An incident is any situation where the system exhibits questionable behavior; we refer to an
incident as a defect only when the root cause is some problem in the item we're testing.
 Other causes of incidents include:
 Failure of the test environment
 Corrupted test data
 Bad tests, invalid expected results
 Tester mistakes
An incident report contains a description of the misbehavior that was observed and classification
of that misbehavior
Incident reports are managed through a life cycle from discovery to resolution. After an
incident is reported, a peer tester or test manager reviews the report.
 If the review passes, the incident report is opened, and the project team decides whether
or not to repair the defect.
 If the defect is to be repaired, a programmer is assigned to repair it
 Once the programmer believes the repairs are complete, the incident report
returns to the tester for confirmation testing
 If the confirmation test fails, the incident report is re-opened and then
reassigned
 Once the tester confirms the fix, the incident report is closed. No further work
remains to be done
 In any state other than rejected, deferred or closed:
 Further work is required on the incident prior to the end of this project.
 The incident report has a clearly identified owner. The owner is responsible for
transitioning the incident into an allowed subsequent state.
Incident Life Cycle
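The life cycle described above can be sketched as a small state machine. The state and event names below follow the text; they are one plausible workflow, not a mandated standard:

```python
# Sketch of the incident report life cycle as a state machine.
# States and transitions follow the text above; names are illustrative.

TRANSITIONS = {
    "new":             {"review passed": "opened", "review failed": "rejected"},
    "opened":          {"assign for repair": "assigned", "defer": "deferred"},
    "assigned":        {"fix complete": "in confirmation"},
    "in confirmation": {"confirmation passed": "closed",
                        "confirmation failed": "reopened"},
    "reopened":        {"reassign": "assigned"},
}
TERMINAL = {"rejected", "deferred", "closed"}  # no further work this project

def advance(state: str, event: str) -> str:
    """Move an incident report to its next state; invalid events raise KeyError."""
    return TRANSITIONS[state][event]

# Walk one incident through a failed confirmation and a successful retry.
state = "new"
for event in ["review passed", "assign for repair", "fix complete",
              "confirmation failed", "reassign", "fix complete",
              "confirmation passed"]:
    state = advance(state, event)
print(state, state in TERMINAL)  # -> closed True
```

Every non-terminal state has a clearly identified owner responsible for triggering one of its allowed events, which matches the ownership rule stated above.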
“Thank You” for your learning contribution!
Please submit Online Feedback to help L&D make continuous improvement… participation
credit will be given only on feedback submission.
For any queries Dial @ Learning:
Noida: 4444, Nagpur: 333, Pune: 5222, Bangalore: 111
Email: learning@globallogic.com
Check the new L&D Reward & Recognition Policy @ Confluence under Global Training
2011

Test management

  • 1.
    © Copyright GlobalLogic2009 1 Connect. Collaborate. Innovate. ISTQB Test Management By - Portia Gautam Internal2011
  • 2.
    © Copyright GlobalLogic2009 2 Connect. Collaborate. Innovate. Switch off your Mobile phone Or Put the Mobile phone on silent mode 2011
  • 3.
    © Copyright GlobalLogic2009 3 Connect. Collaborate. Innovate. 2011 Objective  Importance of independent testing  Test Plan, Estimate and Strategies  Test Progress Monitoring and Control  Configuration Management  Risk and Testing  Incident Management
  • 4.
    © Copyright GlobalLogic2009 4 Connect. Collaborate. Innovate. 2011 Independent and integrated testing Independent Tester/Team  Can see more, other, and different defects than a tester working within a programming team  Brings a different set of assumptions to testing and to reviews, which helps expose hidden defects and problems  Reports the issues honestly to the senior management Issues in Integrated Test Team  Other project stake-holders might see the independent test team - rightly or wrongly - as a bottleneck and a source of delay.  Some programmers abdicate their responsibility for quality, saying, 'Well, we have this test team now, so why do I need to unit test my code?'
  • 5.
    © Copyright GlobalLogic2009 5 Connect. Collaborate. Innovate. 2011  It's especially important for testers and test managers to understand the mission they serve and the reasons why the organization wants an independent test team.  The entire test team must realize that, whether they are part of the project team or independent, they exist to provide a service to the project team. There is no one right approach to organize testing Things to be considered for an independent test team:  Project size, the application domain, and the levels of risk, among other factors.  As the size, complexity, and criticality of the project increases, it is good to have independence in later levels of testing (like integration test, system test and acceptance test)  Some testing is often best done by other people such as project managers, quality managers, developers, business and domain experts or infrastructure or IT operations experts. Contd….
  • 6.
    © Copyright GlobalLogic2009 6 Connect. Collaborate. Innovate. 2011 Working as a Test Leader  Involve in the planning, monitoring, and control of the testing activities and tasks  Devise the test objectives, organizational test policies, test strategies and test plans  Estimate the testing activities and negotiate with management to acquire the necessary resources  Recognize when test automation is appropriate  Plan the effort, select the tools, and ensure training of the team  Consult with other groups - e.g., programmers - to help them with their testing  Lead, guide and monitor the analysis, design, implementation and execution of the test cases, test procedures and test suites  Ensure proper configuration management of the testware produced and traceability of the tests to the test basis  Make sure the test environment is put into place before test execution and managed during test execution
  • 7.
    © Copyright GlobalLogic2009 7 Connect. Collaborate. Innovate. Contd….  Schedule the tests for execution and monitor, measure, control and report on the test progress, the product quality status and the test results, adapting the test plan  Do adjustment in the testing activities as when required  In last they write summary reports on test status
  • 8.
    © Copyright GlobalLogic2009 8 Connect. Collaborate. Innovate. Working as a Tester  They analyze and review requirements specifications and contribute to test plans  Involve in identifying test conditions and creating test designs, test cases, test procedure specifications and test data  Help in automating the tests  Set up the test environments or assist system administration and network management staff in doing so  Execute and log the tests, evaluate the results and document problems found  Monitor the testing and the test environment, often using tools  Gather performance metrics  Review each other's work, including test specifications, defect reports and test results
  • 9.
    © Copyright GlobalLogic2009 9 Connect. Collaborate. Innovate. 2011 Defining the skills test staff need  Good test teams should have the right mix of skills based on the tasks and activities they need to carry out  People outside the test team who are in charge of test tasks need the right skills, too  People involved in testing need basic professional and social qualifications such as  Ability to prepare and deliver written and verbal reports  Ability to communicate effectively, and so on Three main areas which a tester should be aware of:  Application or business domain: A tester must understand the intended behavior, the problem the system will solve, the process it will automate  Technology: A tester must be aware of issues, limitations and capabilities of the chosen implementation technology  Testing: A tester must know the testing topics in order to effectively and efficiently carry out the test tasks assigned The specific skills in each area and the level of skill required vary by project, organization, application, and the risks involved
  • 10.
    © Copyright GlobalLogic2009 10 Connect. Collaborate. Innovate. Test Plan, Estimates and Strategies A test plan is the project plan for the testing work to be done Reasons for writing a test plan:  It guides our thinking  Forces us to confront the challenges that await us  Focus us on important topics  Serve as vehicles for communicating with other members of the project team, testers, peers, managers and other stakeholders  Becomes a record of previous discussions and agreements between the testers and the rest of the project team  Depends on a number of factors - Level, targets and objectives of the testing we're setting out to do.  These activities should happen concurrently and ideally during the planning period for the overall project
A test plan also helps us manage change:
- As the project progresses, plans are revised
- Written test plans give us a baseline against which to measure such revisions and changes
- Updating the plan at major milestones helps keep testing aligned with project needs
- Adjustments are made to plans based on the end results
What to Do While Planning Tests

To understand the purpose of testing, answer questions such as:
- What is in scope and what is out of scope for this testing effort?
- What are the test objectives?
- What are the important project and product risks?
- What constraints affect testing (e.g., budget limitations, hard deadlines)?
- What is most critical for this product and project?
- Which aspects of the product are more (or less) testable?
- What should the overall test execution schedule be, and how should we decide the order in which to run specific tests?

Then:
- Decide how to split the testing work into various levels
- Integrate and coordinate between test levels
- Integrate and coordinate all the testing work with the rest of the project
Contd….

The factors to consider in such decisions are often called 'entry criteria' and 'exit criteria'. Typical factors are:
- Acquisition and supply: the availability of the staff, tools, systems and other items required
- Test items: the state that the items to be tested must be in to start and to finish testing
- Defects: the number known to be present, the arrival rate, the number predicted to remain, and the number resolved
- Tests: the number run, passed, failed, blocked, skipped, and so forth
- Coverage: the portions of the test basis, the software code or both that have been tested, and which have not
- Quality: the status of the important quality characteristics for the system
- Money: the cost of finding the next defect in the current level of testing, compared to the cost of finding it in the next level of testing (or in production)
- Risk: the undesirable outcomes that could result from shipping too early (such as latent defects or untested areas) or too late (such as loss of market share)
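Several of these factors can be checked mechanically at the end of a test cycle. A minimal sketch of such an exit-criteria gate follows; the metric names and thresholds are illustrative assumptions, not values mandated by the ISTQB syllabus:

```python
# Hypothetical exit-criteria gate built from the factors above.
# Metric names and thresholds are illustrative assumptions.
def exit_criteria_met(metrics):
    """Return (met, reasons) for a dict of end-of-cycle test metrics."""
    reasons = []
    if metrics["tests_passed"] / metrics["tests_run"] < 0.95:
        reasons.append("pass rate below 95%")
    if metrics["requirements_covered"] < metrics["requirements_total"]:
        reasons.append("requirements coverage incomplete")
    if metrics["open_critical_defects"] > 0:
        reasons.append("critical defects still open")
    return (not reasons, reasons)

met, why = exit_criteria_met({
    "tests_run": 200, "tests_passed": 192,
    "requirements_covered": 50, "requirements_total": 50,
    "open_critical_defects": 1,
})
# met is False here: the pass rate and coverage gates are satisfied,
# but one critical defect remains open
```

In practice the thresholds themselves are a planning decision, agreed with stakeholders during test planning rather than hard-coded.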
Estimation Techniques

The ISTQB Foundation Syllabus covers two estimation techniques.

1. Consulting the people who will do the work, and others with expertise on the tasks to be done:
- Work with experienced staff members to develop a work-breakdown structure for the project
- Understand the effort, duration, dependencies and resource requirements for each task
- Using a tool such as Microsoft Project, or a whiteboard and sticky notes, predict the testing end date

This technique is often called 'bottom-up' estimation: start at the lowest level (the task) and let the duration, effort, dependencies and resources for each task add up across all the tasks.
2. Analyzing metrics from past projects and from industry data:
- The simplest approach is to ask, 'How many testers do we typically have per developer on a project?' This involves classifying the project by size (small, medium or large) and complexity (simple, moderate or complex), then seeing how long projects of that size and complexity combination have taken on average in the past
- The tester-to-developer ratio is an example of a top-down estimation technique, in that the overall estimate is derived at the project level
- Another approach is to look at the average effort per test case in similar past projects and use the estimated number of test cases to estimate the total effort
- Check historical or industry averages for key parameters, such as the number of tests run per tester per day or the number of defects found per tester per day

Such a parametric technique is bottom-up when it is used to estimate individual tasks or activities.
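Both techniques reduce to simple arithmetic. The sketch below contrasts them; every task name, ratio and day count is invented for illustration:

```python
# Bottom-up: sum effort across a work-breakdown structure of test tasks.
# All numbers below are invented for illustration.
wbs_person_days = {
    "test planning": 5,
    "test design": 12,
    "environment setup": 4,
    "test execution": 20,
    "reporting": 3,
}
bottom_up_effort = sum(wbs_person_days.values())  # 44 person-days

# Top-down: derive tester effort from a historical tester-to-developer ratio.
developers = 9
testers_per_developer = 1 / 3        # assumed historical ratio
project_duration_days = 60
top_down_effort = developers * testers_per_developer * project_duration_days
# 3 testers for 60 days = 180 person-days
```

When the two figures disagree badly, as they do here, that gap is itself useful: it prompts a review of whether the WBS is missing tasks or the historical ratio no longer applies.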
Factors Affecting Test Effort
- The test strategies or approaches to be used
- Product factors:
  - The importance of non-functional quality characteristics such as usability, reliability, security and performance influences the testing effort
  - The complexity of the project
- Process factors:
  - The availability of test tools, especially those that reduce the effort associated with test execution, which is on the critical path for release
  - The life cycle itself, i.e. which process model is being used
  - Time pressure
- The test results themselves influence the total amount of test effort during test execution
Test Approaches or Strategies

The major types of test strategy are:
- Analytical: for example, the risk-based strategy, which performs a risk analysis using project documents and stakeholder input, then plans, estimates, designs and prioritizes the tests based on risk. Another analytical strategy is the requirements-based strategy, where the requirements are analyzed to plan, estimate and design the tests.
- Model-based: create or select some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project. For example, build mathematical models of loading and response for e-commerce servers and test against them.
- Methodical: follow a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house or gathered from outside. For example, you might have a checklist built up over the years that suggests the major areas of testing to run, or you might follow an industry standard for software quality, such as ISO 9126, as your outline of major test areas.
Contd….
- Process- or standard-compliant: use an externally developed approach to testing, with little customization. For example, adopt the IEEE 829 standard for your test documentation, or adopt one of the agile methodologies such as Extreme Programming.
- Dynamic: dynamic strategies, such as exploratory testing, concentrate on finding as many defects as possible during test execution; they typically emphasize the later stages of testing.
- Consultative or directed: rely on a group of non-testers to guide or perform the testing effort; this typically emphasizes the later stages of testing, simply due to the lack of recognition of the value of early testing. For example, you might ask the users or developers of the system to tell you what to test, or even rely on them to do the testing.
- Regression-averse: use a set of procedures, usually automated, that allow the detection of regression defects, automating functional tests prior to release of the function. For example, you might automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken.
Contd….

Some of these strategies are more preventive, others more reactive:
- Analytical test strategies involve upfront analysis of the test basis, and tend to identify problems in the test basis prior to test execution. This allows the early, and cheap, removal of defects. That is a strength of preventive approaches.
- Dynamic test strategies focus on the test execution period. Such strategies allow the location of defects and defect clusters that might have been hard to anticipate until the actual system is in front of you. That is a strength of reactive approaches.

Rather than treating strategies as an either/or choice, they can be blended; there is no one best way.
Contd….

How does one identify which strategies to pick or blend for the best chance of success? There are many factors; a few of the most important are:
- Risks: testing is about risk management, so consider the risks and the level of risk. For a well-established application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense. For a new application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy.
- Skills: consider which skills your testers possess and which they lack. A standard-compliant strategy is a smart choice when your team lacks the time and skills to create its own approach.
- Objectives: if the objective is to find as many defects as possible with a minimal amount of up-front time and effort invested, then a dynamic strategy makes sense.
- Regulations: sometimes you must satisfy not only stakeholders but also regulators. In this case, devise a methodical test strategy that satisfies these regulators that you have met all their requirements.
Contd….
- Product: some products, such as weapons systems and contract-development software, tend to have well-specified requirements. This creates synergy with a requirements-based analytical strategy.
- Business: business considerations and business continuity are often important. If a legacy system can serve as a model for a new system, you can use a model-based strategy.
Test Progress Monitoring and Control

Test monitoring serves various purposes during the project:
- Give feedback on how the testing work is going, allowing opportunities to guide and improve the testing and the project
- Provide visibility about the test results
- Measure the status of the testing, test coverage and test items against the exit criteria, to determine whether the test work is done
- Gather data for use in estimating future test efforts

Test progress monitoring techniques vary considerably depending on:
- The preferences of the testers and stakeholders
- The needs and goals of the project
- Regulatory requirements, time and money constraints, and other factors

Test progress monitoring is about gathering detailed test data.
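The detailed data being gathered is typically a log of per-test outcomes, from which progress figures are derived. A minimal sketch follows; the status labels and sample results are assumptions, not values prescribed by the syllabus:

```python
from collections import Counter

# Tally test outcomes from an execution log; labels and data are illustrative.
results = ["pass", "pass", "fail", "blocked", "pass", "skipped", "fail", "pass"]
tally = Counter(results)

executed = tally["pass"] + tally["fail"]    # blocked/skipped are not executed
progress = executed / len(results)          # fraction of planned tests executed
pass_rate = tally["pass"] / executed        # pass rate among executed tests
```

Figures like `progress` and `pass_rate`, tracked over time, are exactly the kind of data later compared against the exit criteria and fed into future estimates.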
Reporting Test Status
- Reporting test status is about effectively communicating our findings to other project stakeholders
- It is about helping project stakeholders understand the results of a test period, especially as they relate to key project goals and whether (or when) the exit criteria were satisfied
- It involves analyzing the information and metrics available to support conclusions, recommendations and decisions about how to guide the project forward, or to take other actions. For example, we might:
  - Estimate the number of defects remaining to be discovered
  - Present the costs and benefits of delaying a release date to allow for further testing
  - Assess the remaining product and project risks
  - Offer an opinion on the confidence the stakeholders should have in the quality of the system under test
- On some projects, the test team must create a test summary report. Such a report, created either at a key milestone or at the end of a test level, describes the results of a given level or phase of testing
Test Control

Test control is about guiding and corrective actions to achieve the best possible outcome for the project. Consider the following hypothetical examples:
- A portion of the software under test will be delivered late, after the planned test start date. Market conditions dictate that we cannot change the release date. Test control might involve re-prioritizing the tests so that we start testing against what is available now.
- For cost reasons, performance testing is normally run on weekday evenings, during off-hours, in the production environment. Due to unanticipated high demand for your products, the company has temporarily adopted an evening shift that keeps the production environment in use 18 hours a day, five days a week. Test control might involve rescheduling the performance tests for the weekend.

In some cases, the team has to take other actions. For example, suppose the test completion date is at risk due to a high number of defect fixes that fail confirmation testing in the test environment. Here, test control might involve requiring the programmers making the fixes to thoroughly retest them before checking them in to the code repository for inclusion in a test build.
Configuration Management
- Configuration management determines the items that make up the software or system. These items include:
  - Source code
  - Test scripts
  - Third-party software
  - Hardware
  - Data, and both development and test documentation
- It also ensures that these items are managed carefully, thoroughly and attentively throughout the entire project and product life cycle
- It supports the build process, which is essential for delivery of a test release into the test environment
- It allows us to map what is being tested to the underlying files and components that make it up. For example, when we report defects, we need to report them against something, and that something must be version controlled
- Ideally, when testers receive an organized, version-controlled test release from a change-managed source code repository, it is accompanied by a test item transmittal report or release notes
Testing & Risks

Risks are classified into two types:

1. Product risks: factors relating to what is produced by the work, i.e. the thing we are testing. A product risk is the possibility that the system or software might fail to satisfy some reasonable customer, user or stakeholder expectation. Unsatisfactory software might:
- Omit some key function that the customers specified, the users required or the stakeholders were promised
- Be unreliable and frequently fail to behave normally
- Fail in ways that cause financial or other damage to a user, or to the company that user works for
- Have problems related to a particular quality characteristic, such as security, reliability, usability or performance

Risk-based testing starts with product risk analysis. Techniques used include:
- Thoroughly understanding the requirements and design specifications, user documentation and other items
- Brainstorming with many of the project stakeholders
- A sequence of one-on-one or small-group sessions with the business and technology experts in the company

Some people use all of these techniques when they can.
Contd..

Risk-based testing is the idea that we can organize our testing effort in a way that reduces the residual level of product risk when the system ships:
- It uses risk to prioritize and emphasize the appropriate tests during test execution
- It starts early in the project, identifying risks to system quality, and guides test planning, specification, preparation and execution on that basis
- It includes both mitigation (testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects) and contingency (testing to identify workarounds that make the defects that do get past us less painful)
- It involves measuring how well we are doing at finding and removing defects in critical areas

To understand specific risks, consider an incorrect calculation in a calculator:
- Incorrect addition is a high-impact kind of defect, as everyone who uses the calculator will see it. It is unlikely, though, since addition is not a complex algorithm.
- Contrast that with an incorrect sine calculation. This is a low-impact kind of defect, since few people use the sine function on the Windows calculator. It is more likely to occur, though, since sine functions are hard to calculate.
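The calculator example can be made concrete by scoring each risk as likelihood times impact and sorting. The 1-5 scales and the scores assigned below are illustrative assumptions, not figures from the syllabus:

```python
# Score each product risk as likelihood x impact (assumed 1-5 scales).
risks = {
    "incorrect addition": {"likelihood": 1, "impact": 5},  # simple algorithm, everyone affected
    "incorrect sine":     {"likelihood": 4, "impact": 2},  # hard algorithm, few users
}

def priority(name):
    """Risk priority number: likelihood times impact."""
    r = risks[name]
    return r["likelihood"] * r["impact"]

ranked = sorted(risks, key=priority, reverse=True)
# With these assumed scores, sine (8) outranks addition (5),
# so sine tests would be run earlier and in more depth.
```

The point of the sketch is the mechanism, not the numbers: different stakeholders may score the same risks differently, which is why the syllabus recommends gathering input from many of them.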
2. Project risks: factors relating to the way the work is carried out, i.e. the test project.

To discover these risks, ask yourself and other project participants and stakeholders:
- What could go wrong on the project to delay or invalidate the test plan, the test strategy and the test estimate?
- What are unacceptable outcomes of testing, or in testing?
- What are the likelihood and impact of each of these risks?

This is similar to the risk analysis process for products. Checklists and examples can help in identifying test project risks.

For any risk, product or project, there are four typical options:
- Mitigate: take steps in advance to reduce the likelihood and impact of the risk
- Contingency: have a plan in place to reduce the impact
- Transfer: convince some other member of the team or project stakeholder to reduce the likelihood or accept the impact of the risk
- Ignore: do nothing about the risk, which is usually a smart option only when there is little that can be done, or when the likelihood and impact are low
Contd….

Some typical project risks that need managing:
- Logistics or product quality problems that block tests
- Test items that won't install in the test environment
- Excessive change to the product that invalidates test results or requires updates to test cases, expected results and environments
- Insufficient or unrealistic test environments that yield misleading results
- Organizational issues, such as shortages of people, skills or training, and problems with communicating and responding to test results
- Supplier issues, such as problems with the underlying platforms or hardware
- Technical problems related to ambiguous, conflicting or unprioritized requirements, or an excessively large number of requirements given other project constraints
- High system complexity, and quality problems with the design, the code or the tests
Incident Management

An incident is any situation where the system exhibits questionable behavior, but we usually refer to an incident as a defect only when its root cause is some problem in the item we are testing. Other causes of incidents include:
- Failure of the test environment
- Corrupted test data
- Bad tests or invalid expected results
- Tester mistakes

An incident report contains a description of the misbehavior that was observed and a classification of that misbehavior.
Incident Life Cycle

Incident reports are managed through a life cycle from discovery to resolution. After an incident is reported, a peer tester or test manager reviews the report:
- If the review passes, the incident report becomes opened, and the project team decides whether or not to repair the defect
- If the defect is to be repaired, a programmer is assigned to repair it
- Once the programmer believes the repairs are complete, the incident report returns to the tester for confirmation testing
- If the confirmation test fails, the incident report is reopened and then reassigned
- Once the tester confirms the fix, the incident report is closed; no further work remains to be done
- In any state other than rejected, deferred or closed:
  - Further work is required on the incident before the end of the project
  - The incident report has a clearly identified owner, who is responsible for transitioning the incident into an allowed subsequent state
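The life cycle above can be modeled as a small state machine. The state and event names below follow the description, but the exact transition set is an assumption (real defect trackers each define their own):

```python
# Incident report life cycle as a state machine; the transition table
# follows the description above but its exact shape is an assumption.
TRANSITIONS = {
    "reported": {"review passed": "opened", "review failed": "rejected"},
    "opened":   {"assign": "assigned", "defer": "deferred"},
    "assigned": {"fix delivered": "fixed"},
    "fixed":    {"confirmation passed": "closed",
                 "confirmation failed": "reopened"},
    "reopened": {"assign": "assigned"},
}

def advance(state, event):
    """Move an incident to its next state; raises KeyError on a bad event."""
    return TRANSITIONS[state][event]

# Walk one incident through a failed fix and a successful retry.
state = "reported"
for event in ["review passed", "assign", "fix delivered",
              "confirmation failed", "assign", "fix delivered",
              "confirmation passed"]:
    state = advance(state, event)
# state ends at "closed"
```

Encoding the life cycle this way makes the ownership rule from the slide checkable: every state other than rejected, deferred or closed has at least one outgoing transition that some owner is responsible for triggering.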
"Thank You" for your learning contribution!

Please submit online feedback to help L&D make continuous improvements; participation credit will be given only on feedback submission.

For any queries, Dial @ Learning: Noida: 4444, Nagpur: 333, Pune: 5222, Bangalore: 111
Email: learning@globallogic.com

Check the new L&D Reward & Recognition Policy on Confluence under Global Training.