Testing Process Life Cycle
• The Software Testing Life Cycle begins with understanding the scope of testing, or the test effort, for a project. This is captured in the Test Plan, which answers questions such as: what will be tested, how much (the breadth and depth of scope), how, by whom, and when? The plan is driven by the time available and the resources (people and machines) required.
• The Software Testing Life Cycle is not to be confused with the Software Development Life Cycle. Remember, testing is a sub-process of the overall Software Development Life Cycle; the Test Life Cycle streamlines only the testing-related activities.
• The Software Testing Life Cycle (STLC) is a sequence of specific activities conducted during the testing process to ensure that software quality goals are met. The STLC involves both verification and validation activities. Contrary to popular belief, software testing is not a single, isolated activity; it consists of a series of activities carried out methodically to help certify your software product.
Independent Testing
• Test Organization
Testing tasks may be done by people in a specific
testing role, or by people in another role (e.g.,
customers). A certain degree of independence
often makes the tester more effective at
finding defects due to differences between
the author’s and the tester’s cognitive biases.
Independence is not, however, a replacement
for familiarity, and developers can efficiently
find many defects in their own code.
• Degrees of independence in testing include the
following (from low level of independence to high
level):
• No independent testers; the only form of testing
available is developers testing their own code.
• Independent developers or testers within the
development teams or the project team; this could
be developers testing their colleagues’ products.
• Independent test team or group within the
organization, reporting to project management or
executive management.
• Independent testers from the business
organization or user community, or with
specializations in specific test types such as
usability, security, performance,
regulatory/compliance, or portability.
• Independent testers external to the
organization, either working on-site
(insourcing) or off-site (outsourcing).
• The way in which independence of testing is
implemented varies depending on the
software development lifecycle model. For
example, in Agile development, testers may be
part of a development team. In some
organizations using Agile methods, these
testers may be considered part of a larger
independent test team as well. In addition, in
such organizations, product owners may
perform acceptance testing to validate user
stories at the end of each iteration.
• Potential benefits of test independence
include:
• Independent testers are likely to recognize
different kinds of failures compared to
developers because of their different
backgrounds, technical perspectives, and
biases.
• An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system.
• Potential drawbacks of test independence include:
• Isolation from the development team, leading to a
lack of collaboration, delays in providing feedback to
the development team, or an adversarial relationship
with the development team.
• Developers may lose a sense of responsibility for
quality.
• Independent testers may be seen as a bottleneck or
blamed for delays in release.
• Independent testers may lack some important
information (e.g., about the test object).
Tasks of a Test Manager and Tester
• The test manager is tasked with overall
responsibility for the test process and
successful leadership of the test activities. The
test management role might be performed by
a professional test manager, or by a project
manager, a development manager, or a quality
assurance manager. In larger projects or
organizations, several test teams may report
to a test manager, test coach, or test
coordinator, each team being headed by a test
leader or lead tester.
• Typical test manager tasks may include:
• Develop or review a test policy and test
strategy for the organization.
• Plan the test activities by considering the
context, and understanding the test objectives
and risks. This may include selecting test
approaches, estimating test time, effort and
cost, acquiring resources, defining test levels
and test cycles, and planning defect
management.
• Write and update the test plan(s).
• Coordinate the test plan(s) with project
managers, product owners, and
others.
• Share testing perspectives with other project
activities, such as integration planning.
• Initiate the analysis, design, implementation, and
execution of tests, monitor test progress and results,
and check the status of exit criteria (or definition of
done).
• Prepare and deliver test progress reports and test
summary reports based on the information gathered.
• Adapt planning based on test results and progress
(sometimes documented in test progress reports,
and/or in test summary reports for other testing
already completed on the project) and take any
actions necessary for test control.
• Support setting up the defect management system
and adequate configuration management of
testware.
• Introduce suitable metrics for measuring test progress and
evaluating the quality of the testing and the product.
• Support the selection and implementation of tools to support
the test process, including recommending the budget for
tool selection (and possibly purchase and/or support),
allocating time and effort for pilot projects, and providing
continuing support in the use of the tool(s).
• Decide about the implementation of test environment(s).
• Promote and advocate for the testers, the test team, and the test profession within the organization.
• Develop the skills and careers of testers (e.g., through
training
plans, performance evaluations, coaching, etc.).
• Typical tester tasks may include:
• Review and contribute to test plans.
• Analyze, review, and assess requirements, user
stories and acceptance criteria, specifications, and
models for testability (i.e., the test basis).
• Identify and document test conditions, and capture
traceability between test cases, test conditions, and
the test basis.
• Design, set up, and verify test environment(s), often
coordinating with system administration and
network management.
• Design and implement test cases and test
procedures.
• Prepare and acquire test data.
• Create the detailed test execution schedule.
• Execute tests, evaluate the results, and
document deviations from expected results.
• Use appropriate tools to facilitate the test
process.
• Automate tests as needed (may be supported
by a developer or a test automation expert).
• Evaluate non-functional characteristics such as
performance efficiency, reliability, usability,
security, compatibility, and portability.
• Review tests developed by others.
Test Planning and Estimation
• Purpose and Content of a Test Plan
• A test plan outlines test activities for
development and maintenance projects.
Planning is influenced by the test policy and
test strategy of the organization, the
development lifecycles and methods being
used, the scope of testing, objectives, risks,
constraints, criticality, testability, and the
availability of resources.
• Test planning is a continuous activity and is
performed throughout the product’s lifecycle.
(Note that the product’s lifecycle may extend
beyond a project’s scope to include the
maintenance phase.) Feedback from test
activities should be used to recognize
changing risks so that planning can be
adjusted. Planning may be documented in a
master test plan and in separate test plans for
test levels, such as system testing and
acceptance testing, or for separate test types,
such as usability testing and performance
testing.
• Test planning activities may include the following and some of
these may be documented in a test plan:
• Determining the scope, objectives, and risks of testing.
• Defining the overall approach of testing.
• Integrating and coordinating the test activities into the
software lifecycle activities.
• Making decisions about what to test, the people and other
resources required to perform the various test activities, and
how test activities will be carried out.
• Scheduling of test analysis, design, implementation,
execution, and evaluation activities, either on particular dates
(e.g., in sequential development) or in the context of each
iteration (e.g., in iterative development).
• Selecting metrics for test monitoring and
control.
• Budgeting for the test activities.
• Determining the level of detail and structure
for test documentation (e.g., by providing
templates or example documents).
Test Strategy and Test Approach
• A test strategy provides a generalized
description of the test process, usually at the
product or organizational level. Common
types of test strategies include:
• Analytical: This type of test strategy is based
on an analysis of some factor (e.g.,
requirement or risk). Risk-based testing is an
example of an analytical approach, where
tests are designed and prioritized based on
the level of risk.
• Model-Based: In this type of test strategy,
tests are designed based on some model of
some required aspect of the product, such as
a function, a business process, an internal
structure, or a non-functional characteristic
(e.g., reliability). Examples of such models
include business process models, state
models, and reliability growth models.
• Methodical: This type of test strategy relies on making
systematic use of some predefined set of tests or test
conditions, such as a taxonomy of common or likely types of
failures, a list of important quality characteristics, or
company-wide look-and-feel standards for mobile apps or
web pages.
• Process-compliant (or standard-compliant): This type of test
strategy involves analyzing, designing, and implementing tests
based on external rules and standards, such as those
specified by industry-specific standards, by process
documentation, by the rigorous identification and use of the
test basis, or by any process or standard imposed on or by
the organization.
• Directed (or consultative): This type of test strategy is driven
primarily by the advice, guidance, or instructions of
stakeholders, business domain experts, or technology experts,
who may be outside the test team or outside the organization
itself.
• Regression-averse: This type of test strategy is motivated by a
desire to avoid regression of existing capabilities. This test
strategy includes reuse of existing testware (especially test
cases and test data), extensive automation of regression tests,
and standard test suites.
• Reactive: In this type of test strategy, testing is reactive to the
component or system being tested, and the events occurring
during test execution, rather than being pre-planned (as the
preceding strategies are). Tests are designed and
implemented, and may immediately be executed in response
to knowledge gained from prior test results. Exploratory
testing is a common technique employed in reactive
strategies.
Entry Criteria and Exit Criteria
• In order to exercise effective control over the quality of the software, and of the testing, it is advisable to have criteria which define when a given test activity should start and when the activity is complete. Entry criteria (more typically called definition of ready in Agile development) define the preconditions for undertaking a given test activity. If entry criteria are not met, it is likely that the activity will prove more difficult, more time-consuming, more costly, and more risky.
• Exit criteria (more typically called definition
of done in Agile development) define what
conditions must be achieved in order to
declare a test level or a set of tests completed.
Entry and exit criteria should be defined for
each test level and test type, and will differ
based on the test objectives.
• Typical entry criteria include:
• Availability of testable requirements, user stories, and/or models (e.g., when following a model-based testing strategy).
• Availability of test items that have met the exit criteria for any prior test levels.
• Availability of the test environment.
• Availability of necessary test tools.
• Availability of test data and other necessary resources.
• Typical exit criteria include:
• Planned tests have been executed.
• A defined level of coverage (e.g., of requirements,
user stories, acceptance criteria, risks, code) has
been achieved.
• The number of unresolved defects is within an
agreed limit.
• The number of estimated remaining defects is
sufficiently low.
• The evaluated levels of reliability, performance
efficiency, usability, security, and other relevant
quality characteristics are sufficient.
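• As an illustration, the sketch below shows how such exit criteria might be checked automatically from a test-run summary. This is a minimal Python sketch; the thresholds and field names are assumptions for the example, not part of any standard.

```python
# Minimal sketch of an automated exit-criteria check.
# Thresholds and field names are illustrative assumptions.

def exit_criteria_met(run, min_coverage=0.90, max_open_defects=5):
    """Return (ok, reasons) for a test-run summary dictionary."""
    reasons = []
    if run["executed"] < run["planned"]:
        reasons.append("not all planned tests executed")
    if run["coverage"] < min_coverage:
        reasons.append(f"coverage {run['coverage']:.0%} below {min_coverage:.0%}")
    if run["open_defects"] > max_open_defects:
        reasons.append(f"{run['open_defects']} unresolved defects exceed the agreed limit")
    return (not reasons, reasons)

ok, reasons = exit_criteria_met(
    {"planned": 200, "executed": 200, "coverage": 0.93, "open_defects": 2}
)
print("Exit criteria met" if ok else f"Blocked: {reasons}")
```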
• Even without exit criteria being satisfied, it is
also common for test activities to be curtailed
due to the budget being expended, the
scheduled time being completed, and/or
pressure to bring the product to market. It can
be acceptable to end testing under such
circumstances, if the project stakeholders and
business owners have reviewed and accepted
the risk to go live without further testing.
Test Effort Estimation Techniques
• There are a number of estimation techniques
used to determine the effort required for
adequate testing.
• Two of the most commonly used techniques
are:
• The metrics-based technique: estimating the
test effort based on metrics of former similar
projects, or based on typical values.
• The expert-based technique: estimating the
test effort based on the experience of the
owners of the testing tasks or by experts.
• For example, in Agile development, burndown
charts are examples of the metrics-based
approach as effort is being captured and
reported, and is then used to feed into the
team’s velocity to determine the amount of
work the team can do in the next iteration;
whereas planning poker is an example of the expert-based approach, as team members estimate the effort to deliver a feature based on their experience.
• Within sequential projects, defect removal
models are examples of the metrics-based
approach, where volumes of defects and time
to remove them are captured and reported,
which then provides a basis for estimating
future projects of a similar nature; whereas
the Wideband Delphi estimation technique is an example of the expert-based approach, in which groups of experts provide estimates based on their experience.
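• To make the metrics-based idea concrete, here is a minimal Python sketch (the numbers are invented) of using past iteration velocities, as captured by burndown data, to plan the next iteration's capacity:

```python
# Sketch of the metrics-based technique: past velocity data
# (story points per iteration) feeds the next iteration's plan.
# The numbers are invented for illustration.

past_velocities = [21, 24, 19, 23]

def predicted_capacity(velocities):
    """Plan the next iteration around the mean of past velocities."""
    return sum(velocities) / len(velocities)

print(f"Plan roughly {predicted_capacity(past_velocities):.0f} story points next iteration")
```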
Metrics Used in Testing
• Metrics can be collected during and at the end
of test activities in order to assess:
• Progress against the planned schedule and
budget.
• Current quality of the test object.
• Adequacy of the test approach.
• Effectiveness of the test activities with respect to the objectives.
• Common test metrics include:
• Percentage of planned work done in test case
preparation (or percentage of planned test
cases implemented).
• Percentage of planned work done in test
environment preparation.
• Test case execution (e.g., number of test cases
run/not run, test cases passed/failed, and/or
test conditions passed/failed).
• Defect information (e.g., defect density,
defects found and fixed, failure rate, and
confirmation test results).
• Test coverage of requirements, user stories, acceptance criteria, risks, or code.
• Task completion, resource allocation and usage, and effort.
• Cost of testing, including the cost compared to
the benefit of finding the next defect or the
cost compared to the benefit of running the
next test.
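• As a small worked example, the following Python sketch derives a few of the metrics above from raw counts; the field names and numbers are invented, not taken from any tool:

```python
# Sketch computing common test metrics from raw counts.
# Inputs and numbers are illustrative.

def test_metrics(run, not_run, passed, failed, defects, ksloc):
    return {
        "execution_rate": run / (run + not_run),
        "pass_rate": passed / (passed + failed),
        "defect_density": defects / ksloc,  # defects per 1,000 lines of code
    }

metrics = test_metrics(run=180, not_run=20, passed=162, failed=18,
                       defects=45, ksloc=30)
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```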
Configuration Management in Testing
• The purpose of configuration management is
to establish and maintain the integrity of the
component or system, the testware, and their
relationships to one another through the
project and product lifecycle.
• To properly support testing, configuration
management may involve ensuring the
following:
• All test items are uniquely identified, version
controlled, tracked for changes, and related to
each other.
• All items of testware are uniquely identified,
version controlled, tracked for changes,
related to each other and related to versions
of the test item(s) so that traceability can be
maintained throughout the test process.
• All identified documents and software items
are referenced unambiguously in test
documentation.
• During test planning, configuration
management procedures and infrastructure
(tools) should be identified and implemented.
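• The sketch below illustrates the traceability idea in miniature: each item of testware carries a unique identifier and version, and is related to a versioned test item. The identifiers are invented for illustration; a real project would keep this in a configuration management tool.

```python
# Minimal sketch of testware-to-test-item traceability.
# Identifiers and versions are invented for illustration.

testware = {
    "TC-LOGIN-001": {"version": "1.2", "item": "auth-service", "item_version": "3.4.0"},
    "TD-USERS-CSV": {"version": "1.0", "item": "auth-service", "item_version": "3.4.0"},
}

def testware_for(item, item_version):
    """All uniquely identified testware related to one test item version."""
    return [tid for tid, t in testware.items()
            if t["item"] == item and t["item_version"] == item_version]

print(testware_for("auth-service", "3.4.0"))
```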
Risk-based Testing and Product Quality
• Risk is used to focus the effort required during
testing. It is used to decide where and when
to start testing and to identify areas that need
more attention. Testing is used to reduce the
probability of an adverse event occurring, or
to reduce the impact of an adverse event.
Testing is used as a risk mitigation activity, to
provide feedback about identified risks, as
well as providing feedback on residual
(unresolved) risks.
• A risk-based approach to testing provides
proactive opportunities to reduce the levels of
product risk. It involves product risk analysis,
which includes the identification of product
risks and the assessment of each risk’s
likelihood and impact. The resulting product
risk information is used to guide test planning,
the specification, preparation and execution
of test cases, and test monitoring and control.
Analyzing product risks early contributes to
the success of a project.
• In a risk-based approach, the results of product risk analysis
are used to:
• Determine the test techniques to be employed.
• Determine the particular levels and types of testing to be
performed (e.g., security testing, accessibility testing).
• Determine the extent of testing to be carried out.
• Prioritize testing in an attempt to find the critical defects as
early as possible.
• Determine whether any activities in addition to testing could
be employed to reduce risk (e.g., providing training to
inexperienced designers).
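• A minimal sketch of this prioritization, assuming a simple 1-5 scale for likelihood and impact (the scale and the risks themselves are invented for illustration):

```python
# Sketch of product risk analysis driving test prioritization:
# risk level = likelihood x impact, highest risk tested first.
# The 1-5 scale and the risks are invented examples.

risks = [
    ("payment calculation wrong", 3, 5),   # (risk, likelihood, impact)
    ("slow response under load",  4, 3),
    ("broken link on help page",  4, 1),
]

for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"risk level {likelihood * impact:>2}: {name}")
```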
Product and Project Risks
• Product risk involves the possibility that a
work product (e.g., a specification,
component, system, or test) may fail to satisfy
the legitimate needs of its users and/or
stakeholders. When the product risks are
associated with specific quality characteristics
of a product (e.g., functional suitability,
reliability, performance efficiency, usability,
security, compatibility, maintainability, and
portability), product risks are also called
quality risks.
• Examples of product risks include:
• Software might not perform its intended
functions according to the specification.
• Software might not perform its intended
functions according to user, customer, and/or
stakeholder needs.
• A system architecture may not adequately
support some non-functional requirement(s).
• A particular computation may be performed
incorrectly in some circumstances.
• A loop control structure may be coded
incorrectly.
• Response-times may be inadequate for a high-
performance transaction processing system.
• User experience (UX) feedback might not
meet product expectations.
• Project risk involves situations that, should
they occur, may have a negative effect on a
project’s ability to achieve its objectives.
Examples of project risks include:
• Project issues.
• Delays may occur in delivery, task completion,
or satisfaction of exit criteria or definition of
done.
• Inaccurate estimates, reallocation of funds to higher priority projects, or general cost-cutting across the organization may result in inadequate funding.
• Late changes may result in substantial re-
work.
• Organizational issues.
• Skills, training, and staff may not be sufficient.
• Personnel issues may cause conflict and problems.
• Users, business staff, or subject matter experts may not be
available due to conflicting business priorities.
• Political issues.
• Testers may not communicate their needs and/or the test
results adequately.
• Developers and/or testers may fail to follow up on
information found in testing and reviews (e.g., not improving
development and testing practices).
• There may be an improper attitude toward, or expectations
of, testing (e.g., not appreciating the value of finding defects
during testing).
• Technical issues.
• Requirements may not be defined well enough.
• The requirements may not be met, given existing constraints.
• The test environment may not be ready on time.
• Data conversion, migration planning, and their tool support
may be late.
• Weaknesses in the development process may impact the
consistency or quality of project work products such as
design, code, configuration, test data, and test cases.
• Poor defect management and similar problems may
result in
accumulated defects and other technical debt.
• Supplier issues.
• A third party may fail to deliver a necessary product or
service, or go bankrupt.
Debugging
• Remember that a successful test case is one
that shows that a
program does not do what it was designed to
do. Debugging is a two-step
process that begins when you find an error as
a result of a successful test
case. Step 1 is the determination of the exact
nature and location of the
suspected error within the program. Step 2
consists of fixing the error.
• Debugging seems to be the one aspect of the software production process that programmers enjoy the least, primarily for these reasons:
• Your ego may get in the way. Like it or not, debugging confirms that programmers are not perfect; they commit errors in either the design or the coding of the program.
• You may run out of steam. Of all the software development activities, debugging is the most mentally taxing. Moreover, debugging usually is performed under a tremendous amount of organizational or self-induced pressure to fix the problem as quickly as possible.
• You may lose your way. Debugging is mentally
taxing because the error you’ve found could
occur in virtually any statement within the
program. Without examining the program
first, you can’t be absolutely sure, for
example, that the origin of a numerical error
in a paycheck produced by a payroll
program is not a subroutine that asks the
operator to load a particular form into the
printer.
• You may be on your own. Compared to other software development activities, relatively little research, literature, and formal instruction exist on the process of debugging.
Debugging by Brute Force
• The most common scheme for debugging a program is the so-called brute-force method. It is popular because it requires little thought and is the least mentally taxing of the methods; unfortunately, it is inefficient and generally unsuccessful. Brute-force methods can be partitioned into at least three categories:
• Debugging with a storage dump.
• Debugging according to the common suggestion to "scatter print statements throughout your program."
• Debugging with automated
debugging tools
• The first, debugging with a storage dump (usually a crude display of all storage locations in hexadecimal or octal format), is the most inefficient of the brute-force methods. Here’s why:
• It is difficult to establish a correspondence between memory locations and the variables in a source program.
• With any program of reasonable complexity, such a memory dump will produce a massive amount of data, most of which is irrelevant.
• A memory dump is a static picture of the
program, showing the state
of the program at only one instant in time; to
find errors, you have to
study the dynamics of a program (state
changes over time).
• A memory dump is rarely produced at the
exact point of the error, so it doesn’t show the
program’s state at the point of the error.
Program actions between the time of the
dump and the time of the error can mask the
clues you need to find the error.
• Adequate methodologies don’t exist for finding errors by analyzing a memory dump (so many programmers stare, with glazed eyes, wistfully expecting the error to expose itself magically from the dump).
• Automated debugging tools work similarly to
inserting print statements
within the program, but rather than making
changes to the program, you
analyze the dynamics of the program with the
debugging features of the
programming language or special interactive
debugging tools.
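• As a small Python illustration of this approach, the language's built-in pdb debugger can expose a program's dynamics without editing print statements into the source. The function and its bug below are invented for the example.

```python
# Using Python's built-in debugger instead of scattered prints.
# The function and its bug are invented for illustration.

def average(values):
    total = 0
    for v in values:
        total += v
    # Uncommenting the next line drops into pdb (Python 3.7+):
    # inspect `total` and `v`, step with `n`, continue with `c`.
    # breakpoint()
    return total / len(values)   # fails with ZeroDivisionError for []

print(average([2, 4, 6]))
```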
Debugging by Induction
• It should be obvious that careful thought will
find most errors without the
debugger even going near the computer. One
particular thought process is
induction, where you move from the
particulars of a situation to the whole.
That is, start with the clues (the symptoms of
the error and possibly the
results of one or more test cases) and look for
relationships among the clues.
• 1. Locate the pertinent data. A major mistake
debuggers make is failing
to take account of all available data or
symptoms about the problem.
Therefore, the first step is the enumeration of
all you know about
what the program did correctly and what it
did incorrectly—the
symptoms that led you to believe there was
an error.
• 2. Organize the data. Remember that induction implies that you’re processing from the particulars to the general, so the second step is to structure the pertinent data to let you observe patterns. Of particular importance is the search for contradictions, events such as "the error occurs only when the customer has no outstanding balance in his or her margin account."
• 3. Devise a hypothesis. Next, study the
relationships among the clues, and
devise, using the patterns that might be visible
in the structure of the
clues, one or more hypotheses about the
cause of the error. If you can’t
devise a theory, more data are needed,
perhaps from new test cases. If
multiple theories seem possible, select the
more probable one first.
• 4. Prove the hypothesis. A major mistake at
this point, given the pressures under which
debugging usually is performed, is to skip this
step and jump to conclusions to fix the
problem. Resist this urge, for it is vital to prove
the reasonableness of the hypothesis before
you proceed. If you skip this step, you’ll
probably succeed in correcting only the
problem symptom, not the problem itself.
• 5. Fix the problem. You can proceed with
fixing the problem once you complete the
previous steps. By taking the time to fully
work through each step, you can feel
confident that your fix will correct the bug.
Remember though, that you still need to
perform some type of regression testing to
ensure your bug fix didn’t create problems in
other program areas. As the application grows
larger, so does the likelihood that your fix will
cause problems elsewhere.
• For example, enumerating the possible causes for the unexpected error message, we might get:
1. The program does not accept the word DISPLAY.
2. The program does not accept the period.
3. The program does not allow a default as a first operand; it expects a storage address to precede the period.
4. The program does not allow an E as a valid byte count.
Debugging by Backtracking
• An effective method for locating errors in
small programs is to backtrack
the incorrect results through the logic of the
program until you find
the point where the logic went astray. In other
words, start at the point
where the program gives the incorrect
result—such as where incorrect
data were printed.
• Here, you deduce from the observed output
what the
values of the program’s variables must have
been. By performing a mental
reverse execution of the program from this
point and repeatedly applying
the if-then logic that states ‘‘if this was the
state of the program at this
point, then this must have been the state of
the program up here,’’ you can
quickly pinpoint the error.
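• A tiny made-up Python example of backtracking: the printed net pay is wrong, so you reason backward from the output to the statement where the logic went astray.

```python
# Backtracking from an incorrect result. The numbers and the pay
# calculation are invented for the example.

def net_pay(hours, rate):
    gross = hours * rate     # 40 * 20 = 800
    tax = gross * 0.2        # 160
    return gross + tax       # observed (incorrect) output: 960.0

print(net_pay(40, 20))
# Reverse execution: for the output to be 960, gross (800) and
# tax (160) must have been correct, so the logic went astray in
# the final statement -- it should have been `gross - tax`.
```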
Debugging by Testing
• Consider two types of test cases: test cases for testing, whose purpose is to expose a previously undetected error, and test cases for debugging, whose purpose is to provide information useful in locating a suspected error. The difference between the two is that test cases for testing tend to be "fat," in that you are trying to cover many conditions in a small number of test cases.
• Test cases for debugging, on the other hand,
are ‘‘slim,’’ because you want to cover only a
single condition or a few conditions in each
test case. Actually, this is not an entirely
separate method; it often is used in
conjunction with the induction method to
obtain information needed to generate a
hypothesis and/or to prove a hypothesis. It
also is used with the deduction method to
eliminate suspected causes, refine the
remaining hypothesis, and/or prove a
hypothesis.
What to Estimate?
• Resources: Resources are required to carry out any project tasks. They can be
people, equipment, facilities, funding, or anything else capable of definition
required for the completion of a project activity.
• Time: Time is the most valuable resource in a project. Every project has a delivery deadline.
• Human Skills: Human skills mean the knowledge and the experience of the team members. They affect your estimation. For example, a team whose members have low testing skills will take more time to finish the project than one whose members have high testing skills.
• Cost: Cost is the project budget. Generally speaking, it means how much money it
takes to finish the project.
• How to estimate?
• List of Software Test Estimation Techniques
• Work Breakdown Structure
• 3-Point Software Testing Estimation Technique
• Wideband Delphi technique
• Function Point/Testing Point Analysis
• Use-Case Point Method
• Percentage distribution
• Ad-hoc method
• Following is the four-step process to arrive at an estimate:
• Step 1) Divide the whole project task into subtasks
• A task is a piece of work that has been assigned to someone. To do this, you can use the Work Breakdown Structure technique.
• Step 2) Allocate each task to a team member
• In this step, each task is assigned to the appropriate member of the project team.
• Step 3) Effort Estimation For Tasks
• There are two techniques you can apply to estimate the effort for tasks:
• Function Point Method
• Three-Point Estimation
• METHOD 1) Function Point Method
• Suppose your project team has estimated a rate of 5 hours per function point. You can then estimate the total effort to test all the features:

  Complexity   Weightage   # of Function Points   Total
  Complex      5           3                      15
  Medium       3           5                      15
  Simple       1           4                       4

  Function total points: 34
  Estimate per point: 5 hours
  Total estimated effort (person-hours): 170
• So the total effort to complete the task
“Create the test specification” of Guru99 Bank
is around 170 man-hours
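• The same function point arithmetic, as a small Python sketch (the weights, counts, and the 5 hours/point rate come from the example above):

```python
# Function point effort calculation mirroring the table above.

features = {           # complexity: (weightage, number of function points)
    "complex": (5, 3),
    "medium":  (3, 5),
    "simple":  (1, 4),
}
hours_per_point = 5

total_points = sum(w * n for w, n in features.values())
effort = total_points * hours_per_point
print(f"{total_points} points -> {effort} person-hours")   # 34 -> 170
```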
• Once you understand the effort that is
required, you can assign resources to
determine how long the task will take
(duration), and then you can estimate labor
and non-labor costs.
• STEP C) Estimate the cost for the tasks
• This step helps you answer the customer's last question: "How much does it cost?"
• METHOD 2) Three Point Estimation
• Three-point estimation is one of the techniques that can be used to estimate a task. The simplicity of three-point estimation makes it a very useful tool for a project manager who wants to estimate.
• In three-point estimation, three values are produced initially for every task, based on prior experience or best guesses, as follows:
• You can estimate as follows:
• The best case to complete this task is 120 man-hours (around 15 days). In this case, you have a talented team that can finish the task in the smallest amount of time.
• The most likely case to complete this task is 170 man-hours (around 21 days). This is the normal case: you have enough resources and ability to complete the task.
• The worst case to complete this task is 200 man-hours (around 25 days).
You need to perform much more work because your team members are
not experienced.
• Now, assign a value to each parameter: a = 120 (best case), m = 170 (most likely), b = 200 (worst case).
• The effort to complete the task can be calculated using the double-triangular (weighted-average) distribution formula:

  E = (a + 4m + b) / 6 = (120 + 4 × 170 + 200) / 6 ≈ 166.7 man-hours

• The estimation above gives only a possible, not a certain, value, so we must also know how much confidence to place in it. The standard deviation supplies that measure:

  SD = (b − a) / 6 = (200 − 120) / 6 ≈ 13.3 man-hours
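• The same calculation as a Python sketch, using the values from the example above:

```python
# Three-point estimation: weighted-average estimate plus a
# standard deviation expressing the estimate's uncertainty.

def three_point(a, m, b):
    """a = best case, m = most likely, b = worst case (man-hours)."""
    estimate = (a + 4 * m + b) / 6
    std_dev = (b - a) / 6
    return estimate, std_dev

e, sd = three_point(120, 170, 200)
print(f"estimate = {e:.1f} +/- {sd:.1f} man-hours")   # 166.7 +/- 13.3
```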
• Step 4) Validate the estimation
• Once you create an aggregate estimate for all
the tasks mentioned in the WBS, you need
to forward it to the management board,
who will review and approve it.
Debugging by deduction
• The process of deduction proceeds from some general theories or premises, using the processes of elimination and refinement, to arrive at a conclusion.
• 1. Enumerate the possible causes or
hypotheses. The first step is to develop a list
of all conceivable causes of the error. They
don’t have to be complete explanations; they
are merely theories to help you structure
and analyze the available data.
2. Use the data to eliminate possible causes. Carefully examine all of the data, particularly looking for contradictions, and try to eliminate all but one of the possible causes.
• If all are eliminated, you need more data
gained from additional test cases to devise
new theories. If more than one possible cause
remains, select the most probable cause—
the prime hypothesis—first.
3. Refine the remaining hypothesis. The
possible cause at this point might be correct,
but it is unlikely to be specific enough to
pinpoint the error. Hence, the next step is to
use the available clues to refine the theory.
• For example, you might start with the idea that "there is an error in handling the last transaction in the file" and refine it to "the last transaction in the buffer is overlaid with the end-of-file indicator."
4. Prove the remaining hypothesis. This vital step is identical to step 4 in the induction method.
5. Fix the error. Again, this step is identical to step 5 in the induction method. To re-emphasize, though, you should thoroughly test your fix to ensure it does not create problems elsewhere in the application.
Testing Internet Applications
• Internet applications are essentially client-
server applications in which
the client is a Web browser, and the server is a
Web or application server.
Although conceptually simple, the complexity
of these applications varies
wildly. Some companies have applications
built for business-to-consumer
uses such as banking services and retail
stores,
• while others have business-to-business applications such as supply chain or sales force management.
Development and user presentation/user
interface strategies vary for these
different types of websites, and, as you might
imagine, the testing approach
varies as well.
• The importance of rooting out the errors in an
Internet application cannot be overstated. As
a result of the openness and accessibility of
the Internet, competition in the business-to-consumer and business-to-business arenas is intense. Thus, the Internet has created a
buyer’s market for goods and services.
Consumers have developed high
expectations,
• and if your site does not load quickly, respond
immediately, and provide intuitive navigation
features, chances are that the
user will find another company with which to
conduct business. This
issue is not confined to strictly e-commerce or
product promotion
sites.
• Websites that are developed as research or
information resources
frequently are maintained by advertising or
user donations. Either way,
ample competition exists to lure users away,
thereby reducing activity
and concomitant revenue.
• Not only will the customer leave
your site if it exhibits poor quality, your
corporate image will become tarnished as
well. After all, who feels comfortable buying a
car from a company that cannot build a
suitable website? Like it or not, websites have
become the new first impression for business.
In general, consumers don’t
pay to access most websites, so there is little
incentive to remain loyal in
the face of mediocre design or performance.
Basic E-Commerce Architecture
• You will face many challenges when designing
and testing Internet-based
applications due to the large number of
elements you cannot control and
the number of interdependent components.
Adequately testing your application requires
that you make some assumptions about your
customers and how they use the site.
• The following list provides some examples of the
challenges associated with testing Internet-
based applications:
• Large and varied user base. The users of your website possess different skill sets, employ a variety of browsers, and use different operating systems or devices. You can also expect your customers to access your website using a wide range of connection speeds. Ten years ago not everyone had broadband Internet access; today, most do. However, you still need to consider bandwidth as Internet content becomes richer.
• Business environment. If you operate an e-commerce site, then you must consider issues such as calculating taxes, determining shipping costs, completing financial transactions, and tracking customer profiles. Your site may also rely on a remote, third-party system (to process credit card payments, for example); the developer must thoroughly understand the structure of the remote system, and work closely with its owners and developers to ensure security and accuracy.
• Locales. Users may reside in other countries,
in which case you will
have internationalization issues such as
language translation, time
zone differences, and currency
conversion.
• Security. Because your site is open to the world, you must protect it from hackers. They can bring your website to a grinding halt with denial-of-service (DoS) attacks, or rip off your customers’ credit card information.
• Testing environments. To properly test your
application, you will need
to duplicate the production environment. This
means you should use
Web servers, application servers, and
database servers that are identical to the
production equipment. For the most accurate
testing results, the network infrastructure
will have to be duplicated as well, which
includes routers, switches, and firewalls.
• Another significant testing challenge you face
is testing browser
compatibility. There are several different
browsers on the market today,
and each behaves differently. Although
standards exist for browser operation, most
vendors enhance their products in an effort to
attract a loyal user base. Unfortunately, this
causes the browsers to operate in a
nonstandard way.
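• The size of the problem is easy to see by enumerating the matrix. Below is a minimal Python sketch; the browser and OS names are illustrative, and a real project would drive these configurations through a tool such as Selenium.

```python
# Enumerating a browser/OS compatibility matrix. Names and
# versions are illustrative.

from itertools import product

browsers = ["Chrome 120", "Firefox 121", "Edge 120", "Safari 17"]
systems = ["Windows 11", "macOS 14", "Ubuntu 22.04"]

matrix = list(product(browsers, systems))
print(f"{len(matrix)} configurations to cover:")
for browser, os_name in matrix:
    print(f"  {browser} on {os_name}")
```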
Testing Strategies
• Developing a testing strategy for Internet-
based applications requires a
solid understanding of the hardware and
software components that make
up the application. As is critical to successful
testing of standard applications, you will need
a specification document to describe the
expected functionality and performance of
your website. Without this document,
you will not be able to design the appropriate
• You need to test components developed internally as well as those purchased from a third party. For the components developed in-house, you should employ the tactics presented in earlier chapters. This includes creating unit/module tests and performing code reviews. Integrate the components into your system only after verifying that they meet the design specifications and functionality outlined in the specification document.
• Testing Internet-based applications is best tackled with a divide-and-conquer approach. Fortunately, the architecture of Internet applications allows you to identify discrete areas on which to target testing.
• Presentation layer. The layer of an Internet
application that provides the user interface (UI; or
GUI, graphical user interface).
• Business layer. The layer that models your business processes, such as user authentication and transactions.
• Data layer. The layer that houses data used by the application or that is collected from the end user.
Presentation Layer Testing
• In a nutshell, presentation layer testing is very labor intensive. However, just as you can segment the testing of an Internet application into discrete entities, you can do the same when testing the presentation layer. Here are the three major areas of presentation layer testing:
1. Content testing. Overall aesthetics, fonts, colors, spelling, content accuracy, default values.
2. Website architecture. Broken links or graphics.
3. User environment. Web browser versions and operating system configuration.
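• For the website architecture item, a broken-link check is straightforward to automate. Below is a minimal sketch using only the Python standard library; the start URL is a placeholder, not a real site under test.

```python
# Minimal broken-link checker (standard library only).
# The base URL is a placeholder.

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

base = "https://example.com/"   # placeholder
parser = LinkCollector()
parser.feed(urlopen(base).read().decode("utf-8", errors="replace"))

for link in parser.links:
    url = urljoin(base, link)
    try:
        status = urlopen(url).status
    except (HTTPError, URLError) as exc:
        status = exc
    print(url, "->", status)
```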
• Content testing involves checking the human-interface
element of a website. You need to search for errors in font
type, screen layout, colors, graphic resolutions, and other
features that directly affect the end-user experience. In
addition, you should verify the accuracy of the information
on your website. Providing grammatically correct, but
inaccurate, information harms your company’s credibility as
much as any other GUI bug.
Inaccurate information may also cause legal problems for your
company.
• As mentioned earlier, testing the end-user environment—also known as browser-compatibility testing—is often the most challenging aspect of testing Internet-based applications. The number of combinations of browser and operating system (OS) is very large. Not only should you test each browser configuration, but different versions of the same browser as well.
Business Layer Testing
• Business layer testing focuses on finding errors
in the business logic of your Internet
application. You will find testing this layer very
similar to that of stand-alone applications, in
that you can employ both white- and
black-box techniques. You will want to create
test plans and procedures that detect errors in
the application’s performance specification,
data acquisition, and transaction processing.
• Performance. Test to see whether the application meets documented performance specifications (generally specified in response times and throughput rates).
• Data validity. Test to detect errors in data collected from customers.
• Transactions. Test to uncover errors in transaction processing, which may include credit card processing, e-mailing verifications, and calculating sales tax.
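• A minimal sketch of a response-time check against a documented specification; the endpoint and the 2-second threshold are assumptions for the example.

```python
# Business-layer performance check: response time vs. a
# documented specification. URL and threshold are placeholders.

import time
from urllib.request import urlopen

SPEC_MAX_SECONDS = 2.0               # assumed documented response-time spec
URL = "https://example.com/"         # placeholder for a transaction endpoint

start = time.perf_counter()
urlopen(URL)
elapsed = time.perf_counter() - start

assert elapsed <= SPEC_MAX_SECONDS, \
    f"response took {elapsed:.2f}s, spec allows {SPEC_MAX_SECONDS}s"
print(f"OK: {elapsed:.2f}s")
```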
Data Layer Testing
• Once your site is up and running, the data you
collect become very valuable. Credit card
numbers, payment information, and user
profiles are examples of the types of data you
may collect while running your
e-commerce site. Losing this information
could prove disastrous and crippling to your
business. Therefore, you should develop a set
of procedures to protect your data storage
systems.
• As with the other tiers, you should search for errors in certain areas when testing the data layer. These include the following:
• Response time. Quantifying completion times for Structured Query Language (SQL) operations.
• Data integrity. Verifying that the data are stored correctly and accurately.
• Fault tolerance and recoverability. Maximizing the mean time between failures (MTBF) and minimizing the mean time to recover (MTTR).
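• A small sketch of the first two checks using an in-memory SQLite database; the schema and data are invented for the example.

```python
# Data-layer checks: timing a SQL operation (response time) and
# verifying a round trip (data integrity). Schema is invented.

import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
db.execute("INSERT INTO orders (amount) VALUES (19.99)")

# Response time: quantify completion time for a SQL operation.
start = time.perf_counter()
row = db.execute("SELECT amount FROM orders WHERE id = 1").fetchone()
print(f"query took {time.perf_counter() - start:.6f}s")

# Data integrity: what is read back must match what was inserted.
assert row == (19.99,), "stored data does not match inserted data"
print("integrity check passed")
```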
• Three-point estimation is used for:
a. Risk
b. Cost
c. Time
d. None of the above
• Function point analysis is used for:
a. Risk
b. Effort
c. Time
d. None of the above
Software Testing life cycle presentation.pptx

  • 2.
    Testing Process LifeCycle • The Software Testing Life Cycle begins with understanding the scope of Testing or the Test Effort for a Project. This is captured in the Test Plan and answers questions like What will be tested, How much – breadth and depth or scope , How, by whom and when? The Plan is driven by the time available and the number of resources (People and machines) required.
  • 3.
    • These arenot to be confused with the Software Development Life Cycle. Remember, Testing is a sub-process in the overall Software Development Life Cycle. The Test Life Cycle helps streamline only the testing related activities.
  • 4.
    • Software TestingLife Cycle (STLC) is a sequence of specific activities conducted during the testing process to ensure software quality goals are met. STLC involves both verification and validation activities. Contrary to popular belief, Software Testing is not just a single/isolate activity, i.e. testing. It consists of a series of activities carried out methodologically to help certify your software product. STLC stands for Software Testing Life Cycle. •
  • 5.
    Independent Testing • TestOrganization Testing tasks may be done by people in a specific testing role, or by people in another role (e.g., customers). A certain degree of independence often makes the tester more effective at finding defects due to differences between the author’s and the tester’s cognitive biases . Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code.
  • 6.
    • Degrees ofindependence in testing include the following (from low level of independence to high level): • No independent testers; the only form of testing available is developers testing their own code. • Independent developers or testers within the development teams or the project team; this could be developers testing their colleagues’ products. • Independent test team or group within the organization, reporting to project management or executive management.
  • 7.
    • Independent testersfrom the business organization or user community, or with specializations in specific test types such as usability, security, performance, regulatory/compliance, or portability. • Independent testers external to the organization, either working on-site (insourcing) or off-site (outsourcing).
  • 8.
    • The wayin which independence of testing is implemented varies depending on the software development lifecycle model. For example, in Agile development, testers may be part of a development team. In some organizations using Agile methods, these testers may be considered part of a larger independent test team as well. In addition, in such organizations, product owners may perform acceptance testing to validate user stories at the end of each iteration.
  • 9.
    • Potential benefitsof test independence include: • Independent testers are likely to recognize different kinds of failures compared to developers because of their different backgrounds, technical perspectives, and biases. • An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system
  • 10.
    • Potential drawbacksof test independence include: • Isolation from the development team, leading to a lack of collaboration, delays in providing feedback to the development team, or an adversarial relationship with the development team. • Developers may lose a sense of responsibility for quality. • Independent testers may be seen as a bottleneck or blamed for delays in release. • Independent testers may lack some important information (e.g., about the test object).
  • 11.
    Tasks of aTest Manager and Tester • The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organizations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester.
  • 12.
    • Typical testmanager tasks may include: • Develop or review a test policy and test strategy for the organization. • Plan the test activities by considering the context, and understanding the test objectives and risks. This may include selecting test approaches, estimating test time, effort and cost, acquiring resources, defining test levels and test cycles, and planning defect management.
  • 13.
    • Write andupdate the test plan(s). • Coordinate the test plan(s) with project managers, product owners, and others. • Share testing perspectives with other project activities, such as integration planning.
  • 14.
    • Initiate theanalysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of exit criteria (or definition of done). • Prepare and deliver test progress reports and test summary reports based on the information gathered. • Adapt planning based on test results and progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control. • Support setting up the defect management system and adequate configuration management of testware.
  • 15.
    • Introduce suitablemetrics for measuring test progress and evaluating the quality of the testing and the product. • Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s). • Decide about the implementation of test environment(s). • Promote and advocate the testers, the test team, and the test profession within the organization. • Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.).
  • 16.
    • Typical testertasks may include: • Review and contribute to test plans. • Analyze, review, and assess requirements, user stories and acceptance criteria, specifications, and models for testability (i.e., the test basis). • Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis. • Design, set up, and verify test environment(s), often coordinating with system administration and network management. • Design and implement test cases and test procedures. • Prepare and acquire test data.
  • 17.
    • Create thedetailed test execution schedule. • Execute tests, evaluate the results, and document deviations from expected results. • Use appropriate tools to facilitate the test process. • Automate tests as needed (may be supported by a developer or a test automation expert). • Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability. • Review tests developed by others.
  • 18.
    Test Planning andEstimation • Purpose and Content of a Test Plan • A test plan outlines test activities for development and maintenance projects. Planning is influenced by the test policy and test strategy of the organization, the development lifecycles and methods being used, the scope of testing, objectives, risks, constraints, criticality, testability, and the availability of resources. •
  • 19.
    • Test planningis a continuous activity and is performed throughout the product’s lifecycle. (Note that the product’s lifecycle may extend beyond a project’s scope to include the maintenance phase.) Feedback from test activities should be used to recognize changing risks so that planning can be adjusted. Planning may be documented in a master test plan and in separate test plans for test levels, such as system testing and acceptance testing, or for separate test types, such as usability testing and performance testing.
  • 20.
    • Test planningactivities may include the following and some of these may be documented in a test plan: • Determining the scope, objectives, and risks of testing. • Defining the overall approach of testing. • Integrating and coordinating the test activities into the software lifecycle activities. • Making decisions about what to test, the people and other resources required to perform the various test activities, and how test activities will be carried out. • Scheduling of test analysis, design, implementation, execution, and evaluation activities, either on particular dates (e.g., in sequential development) or in the context of each iteration (e.g., in iterative development).
  • 21.
    • Selecting metricsfor test monitoring and control. • Budgeting for the test activities. • Determining the level of detail and structure for test documentation (e.g., by providing templates or example documents).
  • 22.
    Test Strategy andTest Approach • A test strategy provides a generalized description of the test process, usually at the product or organizational level. Common types of test strategies include: • Analytical: This type of test strategy is based on an analysis of some factor (e.g., requirement or risk). Risk-based testing is an example of an analytical approach, where tests are designed and prioritized based on the level of risk.
  • 23.
    • Model-Based: Inthis type of test strategy, tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models, state models, and reliability growth models.
  • 24.
    • Methodical: Thistype of test strategy relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures, a list of important quality characteristics, or company-wide look-and-feel standards for mobile apps or web pages. • Process-compliant (or standard-compliant): This type of test strategy involves analyzing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards, by process documentation, by the rigorous identification and use of the test basis, or by any process or standard imposed on or by the organization.
  • 25.
    • Process-compliant (orstandard-compliant): This type of test strategy involves analyzing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards, by process documentation, by the rigorous identification and use of the test basis, or by any process or standard imposed on or by the organization. • Directed (or consultative): This type of test strategy is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself.
  • 26.
    • Regression-averse: Thistype of test strategy is motivated by a desire to avoid regression of existing capabilities. This test strategy includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites. • Reactive: In this type of test strategy, testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned (as the preceding strategies are). Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results. Exploratory testing is a common technique employed in reactive strategies.
  • 27.
    • In orderto exercise effective control over the quality of the software, and of the testing, it is advisable to have criteria which define when a given test activity should start and when the activity is complete. Entry criteria (more typically called definition of ready in Agile development) define the preconditions for undertaking a given test activity. If entry criteria are not met, it is likely that the activity will prove more difficult, more time- consuming, more costly, and more risky. Entry Criteria and Exit Criteria
  • 28.
    • Exit criteria(more typically called definition of done in Agile development) define what conditions must be achieved in order to declare a test level or a set of tests completed. Entry and exit criteria should be defined for each test level and test type, and will differ based on the test objectives.
  • 29.
    • Typical entrycriteria include: • Availability of testable requirements, user stories, and/or models (e.g., when following a modelbased testing strategy). • Availability of test items that have met the exit criteria for any prior test levels Availability of test environment. • Availability of necessary test tools. • Availability of test data and other necessary resources
  • 30.
    Typical exit criteriainclude: • Planned tests have been executed. • A defined level of coverage (e.g., of requirements, user stories, acceptance criteria, risks, code) has been achieved. • The number of unresolved defects is within an agreed limit. • The number of estimated remaining defects is sufficiently low. • The evaluated levels of reliability, performance efficiency, usability, security, and other relevant quality characteristics are sufficient.
• Even without the exit criteria being satisfied, it is common for test activities to be curtailed because the budget has been expended, the scheduled time has run out, and/or there is pressure to bring the product to market. It can be acceptable to end testing under such circumstances if the project stakeholders and business owners have reviewed and accepted the risk of going live without further testing.
Test Effort Estimation Techniques
• There are a number of estimation techniques used to determine the effort required for adequate testing. Two of the most commonly used are:
• The metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values.
• The expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks or of other experts.
• For example, in Agile development, burndown charts are an example of the metrics-based approach: effort is captured and reported, and then feeds into the team's velocity to determine the amount of work the team can do in the next iteration. Planning poker is an example of the expert-based approach: team members estimate the effort to deliver a feature based on their experience.
• Within sequential projects, defect removal models are an example of the metrics-based approach: volumes of defects and the time to remove them are captured and reported, which provides a basis for estimating future projects of a similar nature. The Wideband Delphi estimation technique is an example of the expert-based approach, in which groups of experts provide estimates based on their experience.
Metrics Used in Testing
• Metrics can be collected during and at the end of test activities in order to assess:
• Progress against the planned schedule and budget.
• Current quality of the test object.
• Adequacy of the test approach.
• Effectiveness of the test activities with respect to the objectives.
• Common test metrics include:
• Percentage of planned work done in test case preparation (or percentage of planned test cases implemented).
• Percentage of planned work done in test environment preparation.
• Test case execution (e.g., number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed).
• Defect information (e.g., defect density, defects found and fixed, failure rate, and confirmation test results).
• Test coverage of requirements, user stories, acceptance criteria, risks, or code.
• Task completion, resource allocation and usage, and effort.
• Cost of testing, including the cost compared to the benefit of finding the next defect, or the cost compared to the benefit of running the next test.
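• Several of these metrics reduce to simple ratios. The sketch below uses invented numbers to compute a pass rate and a defect density, taking the common definition of defects per thousand lines of code (KLOC):

    # Illustrative calculations for two common test metrics.
    tests_run, tests_passed = 180, 162
    defects_found, size_kloc = 24, 12.5   # 12,500 lines of code

    pass_rate = tests_passed / tests_run            # 0.9 -> 90% of executed tests passed
    defect_density = defects_found / size_kloc      # 1.92 defects per KLOC
    print(f"Pass rate: {pass_rate:.0%}, defect density: {defect_density:.2f}/KLOC")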
Configuration Management in Testing
• The purpose of configuration management is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another throughout the project and product lifecycle.
• To properly support testing, configuration management may involve ensuring the following:
• All test items are uniquely identified, version controlled, tracked for changes, and related to each other.
• All items of testware are uniquely identified, version controlled, tracked for changes, related to each other, and related to versions of the test item(s), so that traceability can be maintained throughout the test process.
• All identified documents and software items are referenced unambiguously in test documentation.
• During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented.
Risk-based Testing and Product Quality
• Risk is used to focus the effort required during testing: to decide where and when to start testing, and to identify areas that need more attention. Testing is used to reduce the probability of an adverse event occurring, or to reduce its impact. It is a risk mitigation activity that provides feedback about identified risks, as well as about residual (unresolved) risks.
• A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk. It involves product risk analysis, which includes the identification of product risks and the assessment of each risk's likelihood and impact. The resulting product risk information is used to guide test planning; the specification, preparation, and execution of test cases; and test monitoring and control. Analyzing product risks early contributes to the success of a project.
• In a risk-based approach, the results of product risk analysis are used to (see the sketch after this list):
• Determine the test techniques to be employed.
• Determine the particular levels and types of testing to be performed (e.g., security testing, accessibility testing).
• Determine the extent of testing to be carried out.
• Prioritize testing in an attempt to find the critical defects as early as possible.
• Determine whether any activities in addition to testing could be employed to reduce risk (e.g., providing training to inexperienced designers).
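• A common way to operationalize this prioritization is to score each product risk as likelihood × impact and test the highest-scoring areas first. A minimal sketch, with made-up risk items and 1-5 scales:

    # Risk score = likelihood x impact; higher scores get tested first.
    risks = [
        ("payment processing", 4, 5),   # (area, likelihood 1-5, impact 1-5)
        ("report formatting",  3, 2),
        ("user login",         2, 5),
    ]
    for area, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{area}: risk score {likelihood * impact}")
    # payment processing (20) -> user login (10) -> report formatting (6)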
Product and Project Risks
• Product risk involves the possibility that a work product (e.g., a specification, component, system, or test) may fail to satisfy the legitimate needs of its users and/or stakeholders. When product risks are associated with specific quality characteristics of a product (e.g., functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability, and portability), they are also called quality risks.
• Examples of product risks include:
• Software might not perform its intended functions according to the specification.
• Software might not perform its intended functions according to user, customer, and/or stakeholder needs.
• A system architecture may not adequately support some non-functional requirement(s).
• A particular computation may be performed incorrectly in some circumstances.
• A loop control structure may be coded incorrectly.
• Response times may be inadequate for a high-performance transaction processing system.
• User experience (UX) feedback might not meet product expectations.
• Project risk involves situations that, should they occur, may have a negative effect on a project's ability to achieve its objectives. Examples of project risks include:
• Project issues.
• Delays may occur in delivery, task completion, or satisfaction of exit criteria or the definition of done.
• Inaccurate estimates, reallocation of funds to higher-priority projects, or general cost-cutting across the organization may result in inadequate funding.
• Late changes may result in substantial re-work.
• Organizational issues.
• Skills, training, and staff may not be sufficient.
• Personnel issues may cause conflict and problems.
• Users, business staff, or subject matter experts may not be available due to conflicting business priorities.
• Political issues.
• Testers may not communicate their needs and/or the test results adequately.
• Developers and/or testers may fail to follow up on information found in testing and reviews (e.g., not improving development and testing practices).
• There may be an improper attitude toward, or expectations of, testing (e.g., not appreciating the value of finding defects during testing).
• Technical issues.
• Requirements may not be defined well enough.
• The requirements may not be met, given existing constraints.
• The test environment may not be ready on time.
• Data conversion, migration planning, and their tool support may be late.
• Weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configuration, test data, and test cases.
• Poor defect management and similar problems may result in accumulated defects and other technical debt.
• Supplier issues.
• A third party may fail to deliver a necessary product or service, or may go bankrupt.
Debugging
• Remember that a successful test case is one that shows that a program does not do what it was designed to do. Debugging is a two-step process that begins when you find an error as the result of a successful test case. Step 1 is determining the exact nature and location of the suspected error within the program. Step 2 is fixing the error.
• Debugging seems to be the one aspect of the software production process that programmers enjoy the least, primarily for these reasons:
• Your ego may get in the way. Like it or not, debugging confirms that programmers are not perfect; they commit errors in either the design or the coding of the program.
• You may run out of steam. Of all the software development activities, debugging is the most mentally taxing. Moreover, it usually is performed under a tremendous amount of organizational or self-induced pressure to fix the problem as quickly as possible.
• You may lose your way. Debugging is mentally taxing because the error you've found could occur in virtually any statement within the program. Without examining the program first, you can't be absolutely sure, for example, that the origin of a numerical error in a paycheck produced by a payroll program is not a subroutine that asks the operator to load a particular form into the printer.
• You may be on your own. Compared to other software development activities, comparatively little research, literature, and formal instruction exist on the process of debugging.
Debugging by Brute Force
• The most common scheme for debugging a program is the so-called brute-force method. It is popular because it requires little thought and is the least mentally taxing of the methods; unfortunately, it is inefficient and generally unsuccessful. Brute-force methods can be partitioned into at least three categories:
• Debugging with a storage dump.
• Debugging according to the common suggestion to "scatter print statements throughout your program."
• Debugging with automated debugging tools.
• The first, debugging with a storage dump (usually a crude display of all storage locations in hexadecimal or octal format), is the most inefficient of the brute-force methods. Here's why: it is difficult to establish a correspondence between memory locations and the variables in a source program, and with any program of reasonable complexity, such a memory dump produces a massive amount of data, most of which is irrelevant.
• A memory dump is a static picture of the program, showing its state at only one instant in time; to find errors, you have to study the dynamics of a program (state changes over time).
• A memory dump is rarely produced at the exact point of the error, so it doesn't show the program's state at that point. Program actions between the time of the dump and the time of the error can mask the clues you need. Adequate methodologies don't exist for finding errors by analyzing a memory dump, so many programmers stare, with glazed eyes, wistfully expecting the error to expose itself magically from the dump.
• Automated debugging tools work similarly to inserting print statements within the program, but rather than making changes to the program, you analyze its dynamics with the debugging features of the programming language or with special interactive debugging tools.
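• As an illustration, Python ships with an interactive debugger, pdb, that lets you pause a program and inspect its state without scattering print statements. A minimal sketch; the buggy function is invented for the example:

    # Inspecting program state with Python's built-in debugger instead of prints.
    def average(values):
        total = sum(values)
        breakpoint()                 # drops into pdb; inspect `total` and `values` here
        return total / len(values)   # fails when `values` is empty -- the bug to find

    average([10, 20, 30])
    # At the (Pdb) prompt you can type: p total, p len(values), step, continue, etc.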
Debugging by Induction
• It should be obvious that careful thought will find most errors without the debugger even going near the computer. One particular thought process is induction, where you move from the particulars of a situation to the whole. That is, start with the clues (the symptoms of the error and possibly the results of one or more test cases) and look for relationships among them.
• 1. Locate the pertinent data. A major mistake debuggers make is failing to take account of all available data or symptoms about the problem. Therefore, the first step is to enumerate all you know about what the program did correctly and what it did incorrectly: the symptoms that led you to believe there was an error.
• 2. Organize the data. Remember that induction implies that you're proceeding from the particulars to the general, so the second step is to structure the pertinent data to let you observe patterns. Of particular importance is the search for contradictions: events such as "the error occurs only when the customer has no outstanding balance in his or her margin account."
• 3. Devise a hypothesis. Next, study the relationships among the clues and devise, using the patterns that might be visible in their structure, one or more hypotheses about the cause of the error. If you can't devise a theory, more data are needed, perhaps from new test cases. If multiple theories seem possible, select the more probable one first.
• 4. Prove the hypothesis. A major mistake at this point, given the pressures under which debugging usually is performed, is to skip this step and jump to conclusions to fix the problem. Resist this urge; it is vital to prove the reasonableness of the hypothesis before you proceed. If you skip this step, you'll probably succeed in correcting only the symptom of the problem, not the problem itself.
• 5. Fix the problem. You can proceed with fixing the problem once you complete the previous steps. By taking the time to fully work through each step, you can feel confident that your fix will correct the bug. Remember, though, that you still need to perform some type of regression testing to ensure your bug fix didn't create problems in other program areas. As the application grows larger, so does the likelihood that your fix will cause problems elsewhere.
• Enumerating the possible causes for the unexpected error message, we might get:
1. The program does not accept the word DISPLAY.
2. The program does not accept the period.
3. The program does not allow a default as a first operand; it expects a storage address to precede the period.
4. The program does not allow an E as a valid byte count.
Debugging by Backtracking
• An effective method for locating errors in small programs is to backtrack the incorrect results through the logic of the program until you find the point where the logic went astray. In other words, start at the point where the program gives the incorrect result, such as where incorrect data were printed.
• Here, you deduce from the observed output what the values of the program's variables must have been. By performing a mental reverse execution of the program from this point and repeatedly applying the if-then logic that states "if this was the state of the program at this point, then this must have been the state of the program up here," you can quickly pinpoint the error.
Debugging by Testing
• Consider two types of test cases: test cases for testing, whose purpose is to expose a previously undetected error, and test cases for debugging, whose purpose is to provide information useful in locating a suspected error. The difference between the two is that test cases for testing tend to be "fat," in that you are trying to cover many conditions in a small number of test cases.
• Test cases for debugging, on the other hand, are "slim," because you want to cover only a single condition, or a few conditions, in each test case. This is not an entirely separate method; it often is used in conjunction with the induction method to obtain the information needed to generate or prove a hypothesis. It also is used with the deduction method to eliminate suspected causes, refine the remaining hypothesis, and/or prove a hypothesis.
What to Estimate?
• Resources: Resources are required to carry out any project task. They can be people, equipment, facilities, funding, or anything else required for the completion of a project activity.
• Time: Time is the most valuable resource in a project. Every project has a delivery deadline.
• Human skills: Human skills mean the knowledge and experience of the team members, and they affect your estimation. For example, a team whose members have low testing skills will take more time to finish a project than one with high testing skills.
• Cost: Cost is the project budget. Generally speaking, it means how much money it takes to finish the project.
How to Estimate?
• List of software test estimation techniques:
• Work Breakdown Structure
• 3-Point Software Testing Estimation Technique
• Wideband Delphi technique
• Function Point/Test Point Analysis
• Use-Case Point Method
• Percentage distribution
• Ad-hoc method
• Following is the four-step process to arrive at an estimate.
• Step 1) Divide the whole project into subtasks
• A task is a piece of work that has been assigned to someone. To do this, you can use the Work Breakdown Structure technique.
• Step 2) Allocate each task to a team member
• In this step, each task is assigned to the appropriate member of the project team.
• Step 3) Effort estimation for tasks
• There are two techniques you can apply to estimate the effort for tasks:
• Function Point Method
• Three-Point Estimation
• Method 1) Function Point Method
• Suppose your project team has estimated an effort of 5 hours per function point. You can then estimate the total effort to test all the features:

    Complexity   Weightage   # of Function Points   Total
    Complex      5           3                      15
    Medium       3           5                      15
    Simple       1           4                      4

    Total function points: 34
    Estimate per point: 5 hours
    Total estimated effort: 170 person-hours
• So the total effort to complete the task "Create the test specification" of Guru99 Bank is around 170 man-hours.
• Once you understand the effort required, you can assign resources to determine how long the task will take (duration), and then estimate labor and non-labor costs.
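• The arithmetic above is easy to script. A minimal sketch using the weightages and counts from the table; the 5 hours/point rate is the team's own assumption:

    # Function point effort estimate: sum(weightage x count) x hours per point.
    weights = {"complex": 5, "medium": 3, "simple": 1}
    counts  = {"complex": 3, "medium": 5, "simple": 4}
    hours_per_point = 5

    total_points = sum(weights[k] * counts[k] for k in weights)   # 15 + 15 + 4 = 34
    effort_hours = total_points * hours_per_point                 # 34 x 5 = 170
    print(f"{total_points} function points -> {effort_hours} person-hours")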
• Estimate the cost for the tasks
• This step helps you answer the customer's last question: "How much does it cost?"
• Method 2) Three-Point Estimation
• Three-point estimation is one of the techniques that can be used to estimate a task. Its simplicity makes it a very useful tool for a project manager who needs an estimate. In three-point estimation, three values are initially produced for every task, based on prior experience or best guesses.
• You can estimate as follows:
• The best case to complete this task is 120 man-hours (around 15 days). In this case you have a talented team that can finish the task in the smallest amount of time.
• The most likely case to complete this task is 170 man-hours (around 21 days). This is the normal case: you have enough resources and ability to complete the task.
• The worst case to complete this task is 200 man-hours (around 25 days). You need to perform much more work because your team members are not experienced.
• Now assign a value to each parameter as below.
• The effort to complete the task can then be calculated using the double-triangular (PERT) distribution formula, as follows.
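• Using the standard PERT weighting, where the most likely value M counts four times, with O the best case and P the worst case:

    E = (O + 4M + P) / 6
      = (120 + 4 × 170 + 200) / 6
      = 1000 / 6
      ≈ 166.7 man-hours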
• In the above estimation you determine only a possible value, not a certain one, so we must also know the probability that the estimation is correct. For that you can use a second formula.
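• The companion formula normally used here is the standard deviation, which bounds the estimate:

    SD = (P − O) / 6 = (200 − 120) / 6 ≈ 13.3 man-hours

so the task estimate is 166.7 ± 13.3, i.e., roughly 153 to 180 man-hours.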
• Step 4) Validate the estimation
• Once you create an aggregate estimate for all the tasks in the WBS, forward it to the management board, who will review and approve it.
Debugging by Deduction
• The process of deduction proceeds from some general theories or premises, using the processes of elimination and refinement, to arrive at a conclusion.
• 1. Enumerate the possible causes or hypotheses. The first step is to develop a list of all conceivable causes of the error. They don't have to be complete explanations; they are merely theories to help you structure and analyze the available data.
• 2. Use the data to eliminate possible causes. Carefully examine all of the data, particularly looking for contradictions, and try to eliminate all but one of the possible causes.
• If all are eliminated, you need more data, gained from additional test cases, to devise new theories. If more than one possible cause remains, select the most probable one, the prime hypothesis, first.
• 3. Refine the remaining hypothesis. The possible cause at this point might be correct, but it is unlikely to be specific enough to pinpoint the error. Hence, the next step is to use the available clues to refine the theory.
• For example, you might start with the idea that "there is an error in handling the last transaction in the file" and refine it to "the last transaction in the buffer is overlaid with the end-of-file indicator."
• 4. Prove the remaining hypothesis. This vital step is identical to step 4 in the induction method.
• 5. Fix the error. Again, this step is identical to step 5 in the induction method. To re-emphasize, you should thoroughly test your fix to ensure it does not create problems elsewhere in the application.
Testing Internet Applications
• Internet applications are essentially client-server applications in which the client is a Web browser and the server is a Web or application server. Although conceptually simple, the complexity of these applications varies wildly. Some companies have applications built for business-to-consumer uses such as banking services and retail stores,
• while others have business-to-business applications such as supply chain or sales force management. Development and user presentation/user interface strategies vary for these different types of websites, and, as you might imagine, the testing approach varies as well.
• The importance of rooting out the errors in an Internet application cannot be overstated. As a result of the openness and accessibility of the Internet, competition in the business-to-consumer and business-to-business arenas is intense. The Internet has created a buyer's market for goods and services, and consumers have developed high expectations,
• and if your site does not load quickly, respond immediately, and provide intuitive navigation features, chances are that the user will find another company with which to conduct business. This issue is not confined strictly to e-commerce or product promotion sites.
• Websites that are developed as research or information resources frequently are maintained by advertising or user donations. Either way, ample competition exists to lure users away, thereby reducing activity and the concomitant revenue.
• Not only will the customer leave your site if it exhibits poor quality, your corporate image will become tarnished as well. After all, who feels comfortable buying a car from a company that cannot build a suitable website? Like it or not, websites have become the new first impression for business. In general, consumers don't pay to access most websites, so there is little incentive to remain loyal in the face of mediocre design or performance.
• You will face many challenges when designing and testing Internet-based applications due to the large number of elements you cannot control and the number of interdependent components. Adequately testing your application requires that you make some assumptions about your customers and how they use the site.
• The following list provides some examples of the challenges associated with testing Internet-based applications:
• Large and varied user base. The users of your website possess different skill sets, employ a variety of browsers, and use different operating systems or devices. You can also expect your customers to access your website using a wide range of connection speeds. Ten years ago not everyone had broadband Internet access; today most do. However, you still need to consider bandwidth as Internet content becomes richer.
• Business environment. If you operate an e-commerce site, then you must consider issues such as calculating taxes, determining shipping costs, completing financial transactions, and tracking customer profiles, which often means interfacing with remote systems. The developer must thoroughly understand the structure of any remote system, and work closely with its owners and developers to ensure security and accuracy.
• Locales. Users may reside in other countries, in which case you will have internationalization issues such as language translation, time zone differences, and currency conversion.
• Security. Because your site is open to the world, you must protect it from hackers, who can bring your website to a grinding halt with denial-of-service (DoS) attacks or steal your customers' credit card information.
• Testing environments. To properly test your application, you will need to duplicate the production environment. This means you should use Web servers, application servers, and database servers that are identical to the production equipment. For the most accurate testing results, the network infrastructure, including routers, switches, and firewalls, will have to be duplicated as well.
• Another significant testing challenge you face is browser compatibility. There are several different browsers on the market today, and each behaves differently. Although standards exist for browser operation, most vendors enhance their products in an effort to attract a loyal user base. Unfortunately, this causes the browsers to operate in nonstandard ways.
Testing Strategies
• Developing a testing strategy for Internet-based applications requires a solid understanding of the hardware and software components that make up the application. As is critical to successful testing of standard applications, you will need a specification document describing the expected functionality and performance of your website. Without this document, you will not be able to design the appropriate tests.
• You need to test components developed internally and those purchased from a third party. For the components developed in-house, you should employ the tactics presented in earlier chapters. This includes creating unit/module tests and performing code reviews. Integrate the components into your system only after verifying that they meet the design specifications and functionality outlined in the specification document.
• Testing Internet-based applications is best tackled with a divide-and-conquer approach. Fortunately, the architecture of Internet applications allows you to identify discrete areas to target for testing:
• Presentation layer. The layer of an Internet application that provides the user interface (UI, or GUI for graphical user interface).
• Business layer. The layer that models your business processes, such as user authentication and transactions.
• Data layer. The layer that houses the data used by the application or collected from the end user.
Presentation Layer Testing
• In a nutshell, presentation layer testing is very labor intensive. However, just as you can segment the testing of an Internet application into discrete entities, you can do the same when testing the presentation layer. Here are the three major areas of presentation layer testing, with a link-checking sketch after the list:
1. Content testing. Overall aesthetics, fonts, colors, spelling, content accuracy, default values.
2. Website architecture. Broken links or graphics.
3. User environment. Web browser versions and operating system configuration.
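• For area 2, broken-link detection is easy to automate. A minimal sketch using Python's requests library; the URL list is illustrative:

    # Minimal broken-link check: flag URLs that error out or return HTTP >= 400.
    import requests

    links = ["https://example.com/", "https://example.com/no-such-page"]  # illustrative
    for url in links:
        try:
            status = requests.head(url, allow_redirects=True, timeout=5).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            print(f"BROKEN: {url} (status {status})")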
• Content testing involves checking the human-interface element of a website. You need to search for errors in font type, screen layout, colors, graphic resolutions, and other features that directly affect the end-user experience. In addition, you should verify the accuracy of the information on your website. Providing grammatically correct but inaccurate information harms your company's credibility as much as any other GUI bug, and inaccurate information may also cause legal problems for your company.
• As mentioned earlier, testing the end-user environment, also known as browser-compatibility testing, is often the most challenging aspect of testing Internet-based applications. The number of combinations of browser and operating system (OS) is very large. Not only should you test each browser configuration, but different versions of the same browser as well.
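• One common way to manage the combinations is to drive the same checks across a browser matrix, for example with pytest and Selenium. A sketch, assuming the WebDriver binaries are installed locally; the browser set and the checked page title are illustrative:

    # Run the same smoke check across a small browser matrix (assumes local WebDrivers).
    import pytest
    from selenium import webdriver

    BROWSERS = {"chrome": webdriver.Chrome, "firefox": webdriver.Firefox}

    @pytest.mark.parametrize("browser", BROWSERS)
    def test_home_page_loads(browser):
        driver = BROWSERS[browser]()
        try:
            driver.get("https://example.com/")
            assert "Example" in driver.title   # illustrative expectation
        finally:
            driver.quit()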
Business Layer Testing
• Business layer testing focuses on finding errors in the business logic of your Internet application. You will find testing this layer very similar to testing stand-alone applications, in that you can employ both white-box and black-box techniques. You will want to create test plans and procedures that detect errors in the application's performance specification, data acquisition, and transaction processing.
• Performance. Test to see whether the application meets documented performance specifications (generally specified as response times and throughput rates); a minimal sketch follows this list.
• Data validity. Test to detect errors in data collected from customers.
• Transactions. Test to uncover errors in transaction processing, which may include credit card processing, e-mail verifications, and calculating sales tax.
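• A minimal performance check of the first kind can be written directly against an HTTP endpoint. A sketch with requests; the 2-second budget and the URL are assumptions standing in for a documented specification:

    # Assert that an endpoint answers within a documented response-time budget.
    import time
    import requests

    URL = "https://example.com/api/login"   # illustrative endpoint
    BUDGET_SECONDS = 2.0                    # assumed documented response-time spec

    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    elapsed = time.perf_counter() - start

    assert response.status_code == 200, f"unexpected status {response.status_code}"
    assert elapsed <= BUDGET_SECONDS, f"too slow: {elapsed:.2f}s > {BUDGET_SECONDS}s"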
Data Layer Testing
• Once your site is up and running, the data you collect become very valuable. Credit card numbers, payment information, and user profiles are examples of the types of data you may collect while running your e-commerce site. Losing this information could prove disastrous and crippling to your business. Therefore, you should develop a set of procedures to protect your data storage systems.
• As with the other tiers, you should search for errors in certain areas when testing the data layer. These include the following (a sketch covering the first two follows):
• Response time. Quantifying completion times for Structured Query Language (SQL) operations.
• Data integrity. Verifying that the data are stored correctly and accurately.
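• Both checks can be scripted against the database directly. A self-contained sketch using Python's sqlite3 module as an in-memory stand-in for the real database; the table and data are invented:

    # Timing a SQL operation and checking a simple integrity property.
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL NOT NULL)")
    conn.executemany("INSERT INTO orders (total) VALUES (?)", [(9.99,), (25.00,)])

    start = time.perf_counter()
    rows = conn.execute("SELECT COUNT(*), MIN(total) FROM orders").fetchone()
    elapsed = time.perf_counter() - start

    print(f"query took {elapsed * 1000:.2f} ms")   # response time
    assert rows[0] == 2    # integrity: all inserted rows are present
    assert rows[1] >= 0    # integrity rule: no negative order totals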
• Fault tolerance and recoverability. Maximizing the mean time between failures (MTBF) and minimizing the mean time to recovery (MTTR).
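• These two measures combine into the usual availability figure; for example, with assumed values of MTBF = 400 hours and MTTR = 2 hours:

    Availability = MTBF / (MTBF + MTTR)
                 = 400 / (400 + 2)
                 ≈ 0.995, i.e., about 99.5% uptime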
• Three-point estimation is used for:
a. Risk
b. Cost
c. Time
d. None of the above
• Function points are used for:
• Risk
• Effort
• Time
• None