Software testing
Main issues:
There are a great many testing techniques
Often, only the final code is tested
SE, Testing, Hans van Vliet, ©2008 2
Nasty question
 Suppose you are being asked to lead the team to
test the software that controls a new X-ray
machine. Would you take that job?
 Would you take it if you could name your own
price?
 What if the contract says you’ll be charged with
murder in case a patient dies because of a mal-
functioning of the software?
Overview
 Preliminaries
 All sorts of test techniques
 Comparison of test techniques
 Software reliability
State-of-the-Art
 30-85 errors are made per 1000 lines of source
code
 extensively tested software contains 0.5-3 errors
per 1000 lines of source code
testing is often postponed; as a consequence errors
are discovered late, and the later an error is
discovered, the more it costs to fix it.
 error distribution: 60% design, 40%
implementation. 66% of the design errors are not
discovered until the software has become
operational.
Relative cost of error correction
[Chart: the relative cost of fixing an error grows from about 1 during requirements engineering to about 100 during operation (scale: 1, 2, 5, 10, 20, 50, 100; phases: RE, design, code, test, operation)]
Lessons
 Many errors are made in the early phases
 These errors are discovered late
 Repairing those errors is costly
 ⇒ It pays off to start testing real early
How then to proceed?
 Exhaustive testing most often is not feasible
 Random statistical testing does not work either if
you want to find errors
 Therefore, we look for systematic ways to proceed
during testing
Classification of testing techniques
 Classification based on the criterion to measure
the adequacy of a set of test cases:
 coverage-based testing
 fault-based testing
 error-based testing
 Classification based on the source of information
to derive test cases:
 black-box testing (functional, specification-based)
 white-box testing (structural, program-based)
Some preliminary questions
 What exactly is an error?
 What does the testing process look like?
 When is test technique A superior to test
technique B?
 What do we want to achieve during testing?
 When to stop testing?
Error, fault, failure
 an error is a human activity resulting in software
containing a fault
 a fault is the manifestation of an error
 a fault may result in a failure
When exactly is a failure a failure?
 Failure is a relative notion: e.g. a failure w.r.t. the
specification document
 Verification: evaluate a product to see whether it
satisfies the conditions specified at the start:
Have we built the system right?
 Validation: evaluate a product to see whether it
does what we think it should do:
Have we built the right system?
Point to ponder: maiden flight of Ariane 5
Testing process
[Diagram: a test strategy selects subsets of the input domain; the program P is run on them, producing the real output; an oracle supplies the expected output; comparing the two yields the test results]
Test adequacy criteria
 Specifies requirements for testing
 Can be used as stopping rule: stop testing once
100% of the statements have been executed
 Can be used as measurement: a test set that
covers 80% of the statements is better than one
which covers 70%
 Can be used as test case generator: look for a test
which exercises some statements not covered by
the tests so far
 A given test adequacy criterion and the
associated test technique are opposite sides of
the same coin
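The three roles of an adequacy criterion can be sketched with statement coverage on a hand-instrumented toy program (the function and the statement labels S1-S3 are our own illustration, not from the slides):

```python
ALL_STMTS = {"S1", "S2", "S3"}

def absolute(x, hit):
    hit.add("S1")        # S1: the if-test
    if x < 0:
        hit.add("S2")    # S2: the then-branch
        x = -x
    hit.add("S3")        # S3: the return
    return x

def statement_coverage(test_inputs):
    """Fraction of statements executed by the test set."""
    hit = set()
    for x in test_inputs:
        absolute(x, hit)
    return len(hit) / len(ALL_STMTS)

# Measurement: statement_coverage([3]) is 2/3.
# Stopping rule: adding the input -3 raises coverage to 1.0, so stop.
# Test case generator: the uncovered S2 tells us to look for x < 0.
```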
What is our goal during testing?
 Objective 1: find as many faults as possible
 Objective 2: make you feel confident that the
software works OK
Example constructive approach
 Task: test module that sorts an array A[1..n]. A
contains integers; n < 1000
 Solution: take n = 0, 1, 37, 999, 1000. For n = 37,
999, take A as follows:
 A contains random integers
 A contains increasing integers
 A contains decreasing integers
 These are equivalence classes: we assume that
one element from such a class suffices
 This works if the partition is perfect
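The constructive approach above can be sketched as a test generator (a sketch only: the helper names are ours, the oracle is Python's built-in sorted(), and the invalid case n = 1000 is left out since the slide does not specify the module's error behaviour):

```python
import random

def sort_test_cases():
    """Generate arrays per the slide's equivalence classes:
    boundary lengths n = 0, 1, then n = 37, 999 each with
    random, increasing, and decreasing contents."""
    cases = []
    for n in (0, 1):
        cases.append(list(range(n)))
    for n in (37, 999):
        cases.append([random.randint(-10**6, 10**6) for _ in range(n)])  # random
        cases.append(list(range(n)))                                     # increasing
        cases.append(list(range(n, 0, -1)))                              # decreasing
    return cases

def run_sort_tests(sort_fn):
    """One representative per class suffices -- if the partition is perfect."""
    for a in sort_test_cases():
        assert sort_fn(list(a)) == sorted(a), f"failed for n={len(a)}"
    return True
```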
Testing models
 Demonstration: make sure the software satisfies
the specs
 Destruction: try to make the software fail
 Evaluation: detect faults in early phases
 Prevention: prevent faults in early phases
(the four models are ordered along a time line)
Testing and the life cycle
 requirements engineering
 criteria: completeness, consistency, feasibility, and testability.
 typical errors: missing, wrong, and extra information
 determine testing strategy
 generate functional test cases
 test specification, through reviews and the like
 design
 functional and structural tests can be devised on the basis of
the decomposition
 the design itself can be tested (against the requirements)
 formal verification techniques
 the architecture can be evaluated
Testing and the life cycle (cnt’d)
 implementation
 check consistency between the implementation and previous documents
 code-inspection and code-walkthrough
 all kinds of functional and structural test techniques
 extensive tool support
 formal verification techniques
 maintenance
 regression testing: either retest all, or a more selective retest
Test-Driven Development (TDD)
First write the tests, then do the
design/implementation
Part of agile approaches like XP
Supported by tools, e.g., JUnit
Is more than a mere test technique; it subsumes
part of the design work
Steps of TDD
1. Add a test
2. Run all tests, and see that the new test fails
3. Make a small change to make the test work
4. Run all tests again, and see they all run properly
5. Refactor the system to improve its design and
remove redundancies
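The steps above, sketched with Python's unittest (JUnit is the tool named on the slide; the function price_with_tax and its tests are hypothetical illustrations):

```python
import unittest

# TDD step 3: this function was written only after the tests below
# existed and failed, and only large enough to make them pass.
def price_with_tax(net, rate=0.21):
    return round(net * (1 + rate), 2)

class PriceTest(unittest.TestCase):
    # Step 1: add a test (it fails while price_with_tax is missing).
    def test_default_rate(self):
        self.assertEqual(price_with_tax(100.0), 121.0)

    def test_zero_rate(self):
        self.assertEqual(price_with_tax(50.0, rate=0.0), 50.0)

# Steps 2 and 4: run all tests; step 5: refactor while they stay green.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```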
Test Stages
 module-unit testing and integration testing
 bottom-up versus top-down testing
 system testing
 acceptance testing
 installation testing
Test documentation (IEEE 829)
 Test plan
 Test design specification
 Test case specification
 Test procedure specification
 Test item transmittal report
 Test log
 Test incident report
 Test summary report
Overview
 Preliminaries
 All sorts of test techniques
 manual techniques
 coverage-based techniques
 fault-based techniques
 error-based techniques
 Comparison of test techniques
 Software reliability
Manual Test Techniques
 static versus dynamic analysis
 compiler does a lot of static testing
 static test techniques
 reading, informal versus peer review
 walkthrough and inspections
 correctness proofs, e.g., pre-and post-conditions: {P} S {Q}
 stepwise abstraction
(Fagan) inspection
 Going through the code, statement by statement
 Team with ~4 members, with specific roles:
 moderator: organization, chairperson
 code author: silent observer
 (two) inspectors, readers: paraphrase the code
 Uses checklist of well-known faults
 Result: list of problems encountered
Example checklist
 Wrong use of data: variable not initialized,
dangling pointer, array index out of bounds, …
 Faults in declarations: undeclared variable,
variable declared twice, …
 Faults in computation: division by zero, mixed-
type expressions, wrong operator priorities, …
 Faults in relational expressions: incorrect
Boolean operator, wrong operator priorities, …
 Faults in control flow: infinite loops, loops that
execute n-1 or n+1 times instead of n, ...
Overview
 Preliminaries
 All sorts of test techniques
 manual techniques
 coverage-based techniques
 fault-based techniques
 error-based techniques
 Comparison of test techniques
 Software reliability
Coverage-based testing
 Goodness is determined by the coverage of the
product by the test set so far: e.g., % of
statements or requirements tested
 Often based on control-flow graph of the program
 Three techniques:
 control-flow coverage
 data-flow coverage
 coverage-based testing of requirements
Example of control-flow coverage
procedure bubble (var a: array [1..n] of integer; n: integer);
var i, j, temp: integer;
begin
for i:= 2 to n do
  if a[i] >= a[i-1] then goto next endif;
  j:= i;
loop: if j <= 1 then goto next endif;
  if a[j] >= a[j-1] then goto next endif;
  temp:= a[j]; a[j]:= a[j-1]; a[j-1]:= temp; j:= j-1; goto loop;
next: skip;
enddo
end bubble;
input: n=2, a[1] = 5, a[2] = 3
(the √ marks on the slide: this single input executes every statement, i.e. 100% statement coverage)
Example of control-flow coverage (cnt’d)
The slide repeats the same code and test input, now annotated with the case a[i] = a[i-1]: the test n=2, a[1]=5, a[2]=3 achieves 100% statement coverage, yet it never exercises a comparison of two equal elements, so a fault that only surfaces when a[i] = a[i-1] goes undetected.
Control-flow coverage
 This example is about All-Nodes coverage,
statement coverage
 A stronger criterion: All-Edges coverage, branch
coverage
 Variations exercise all combinations of
elementary predicates in a branch condition
 Strongest: All-Paths coverage (≡ exhaustive
testing)
 Special case: all linearly independent paths, the
cyclomatic number criterion
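The gap between All-Nodes and All-Edges coverage shows up already in an if without an else (a minimal sketch; the toy function is ours):

```python
def classify(x):
    """Toy program under test: one if without an else."""
    label = "small"
    if x > 10:
        label = "big"
    return label

# All-Nodes (statement) coverage: the single input 11 executes every
# statement, yet it never takes the *false* edge of the if.
nodes_adequate = [11]

# All-Edges (branch) coverage additionally needs an input <= 10,
# so that the false edge is exercised too.
edges_adequate = [11, 5]
```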
Data-flow coverage
 Looks how variables are treated along paths
through the control graph.
 Variables are defined when they get a new value.
 A definition in statement X is alive in statement Y
if there is a path from X to Y in which this variable
is not defined anew. Such a path is called
definition-clear.
 We may now test a definition-clear path from each
definition to each use of that definition (and to
each successor of the node containing that use):
All-Uses coverage.
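A minimal illustration of definitions, uses, and definition-clear paths (the program and the def labels d1, d2 are our own example):

```python
def scale(x):
    y = 2 * x      # def of y (d1)
    if x > 0:      # use of x
        y = y + 1  # use of y reached def-clear from d1; new def (d2)
    return y       # use of y: reached def-clear from d1 (x <= 0) or d2 (x > 0)

# All-Uses coverage requires a definition-clear path from each def of y
# to each of its uses:
#   x = 1  exercises d1 -> (y + 1) and d2 -> return;
#   x = -1 exercises d1 -> return, a path on which y is not redefined.
all_uses_tests = [1, -1]
```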
Coverage-based testing of requirements
 Requirements may be represented as graphs,
where the nodes represent elementary
requirements, and the edges represent relations
(like yes/no) between requirements.
 And next we may apply the earlier coverage
criteria to this graph
Example translation of requirements to a graph
A user may order new
books. He is shown a
screen with fields to fill
in. Certain fields are
mandatory. One field is
used to check whether
the department’s budget
is large enough. If so,
the book is ordered and
the budget reduced
accordingly.
[Graph: Enter fields → All mandatory fields there? → Check budget → Order book; failing either check leads to Notify user]
Similarity with Use Case success scenario
1. User fills form
2. Book info checked
3. Dept budget checked
4. Order placed
5. User is informed
(same graph as on the previous slide)
Overview
 Preliminaries
 All sorts of test techniques
 manual techniques
 coverage-based techniques
 fault-based techniques
 error-based techniques
 Comparison of test techniques
 Software reliability
Fault-based testing
 In coverage-based testing, we take the structure
of the artifact to be tested into account
 In fault-based testing, we do not directly consider
this artifact
 We just look for a test set with a high ability to
detect faults
 Two techniques:
 Fault seeding
 Mutation testing
Fault seeding
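The slide shows fault seeding as a figure; the underlying estimate can be sketched as follows (the standard Mills-style ratio estimate, assuming seeded and genuine faults are equally easy to find; the function name is ours):

```python
def seeded_fault_estimate(seeded_total, seeded_found, real_found):
    """Fault seeding: deliberately plant seeded_total known faults.
    If testing recovers seeded_found of them plus real_found genuine
    faults, the same detection rate is assumed to apply to genuine
    faults, giving the estimate below for their total number."""
    if seeded_found == 0:
        raise ValueError("no seeded faults found; cannot estimate")
    return real_found * seeded_total / seeded_found

# E.g. 10 faults seeded, 5 of them recovered, and 20 genuine faults
# found: estimated 40 genuine faults in total, so about 20 remain.
estimate = seeded_fault_estimate(10, 5, 20)
```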
Mutation testing
procedure insert(a, b, n, x);
begin bool found:= false;
for i:= 1 to n do
if a[i] = x
then found:= true; goto leave endif
enddo;
leave:
if found
then b[i]:= b[i] + 1
else n:= n+1; a[n]:= x; b[n]:= 1
endif
end insert;
(annotations on the slide mark the mutant: the for-loop bound n is replaced by n-1)
Mutation testing (cnt’d)
procedure insert(a, b, n, x);
begin bool found:= false;
for i:= 1 to n-1 do    -- mutant: loop bound n replaced by n-1
if a[i] = x
then found:= true; goto leave endif
enddo;
leave:
if found
then b[i]:= b[i] + 1
else n:= n+1; a[n]:= x; b[n]:= 1
endif
end insert;
How tests are treated by mutants
 Let P be the original, and P’ the mutant
 Suppose we have two tests:
 T1 is a test, which inserts an element that equals a[k] with
k<n
 T2 is another test, which inserts an element that does not
equal an element a[k] with k<n
 Now P and P’ will behave the same on T1, while
they differ for T2
 In this sense, T2 is a “better” test: it exercises the
upper bound of the for-loop, which T1 does not
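A Python rendering of P and the mutant P' (a sketch: Python lists replace the slide's fixed-size arrays, so the bound n is len(a)):

```python
def insert(a, b, x):
    """Original P: if x occurs in a, increment its count in b,
    otherwise append it with count 1."""
    for i in range(len(a)):
        if a[i] == x:
            b[i] += 1
            return
    a.append(x)
    b.append(1)

def insert_mutant(a, b, x):
    """Mutant P': the loop bound n is replaced by n-1,
    so the last element is never examined."""
    for i in range(len(a) - 1):
        if a[i] == x:
            b[i] += 1
            return
    a.append(x)
    b.append(1)

# T1 inserts an element equal to an earlier a[k]: P and P' agree.
# T2 inserts an element equal only to the *last* element: P counts
# it, P' appends a duplicate, so T2 kills the mutant and T1 does not.
```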
How to use mutants in testing
 If a test produces different results for one of the
mutants, that mutant is said to be dead
 If a test set leaves us with many live mutants, that
test set is of low quality
 If we have M mutants, and a test set results in D
dead mutants, then the mutation adequacy score
is D/M
 A larger mutation adequacy score means a better
test set
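The mutation adequacy score D/M, sketched for pure functions (the square example and its three mutants are made-up illustrations):

```python
def mutation_score(original, mutants, test_set):
    """D/M: the fraction of the M mutants killed (i.e. producing
    output different from the original) by at least one test."""
    dead = sum(
        1 for m in mutants
        if any(m(t) != original(t) for t in test_set)
    )
    return dead / len(mutants)

square = lambda x: x * x
mutants = [
    lambda x: x + x,      # operator mutated: * -> +
    lambda x: x * x * x,  # extra factor
    lambda x: abs(x),     # computation replaced
]

# The test set [2] leaves the first mutant alive (2+2 == 2*2),
# score 2/3; adding the test 3 kills it too, score 1.0 -- the
# larger score indicates the better test set.
```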
Strong vs weak mutation testing
 Suppose we have a program P with a component
T
 We have a mutant T’ of T
 Since T is part of P, we then also have a mutant P’
of P
 In weak mutation testing, we require that T and T’
produce different results, but P and P’ may still
produce the same results
 In strong mutation testing, we require that P and
P’ produce different results
Assumptions underlying mutation testing
 Competent Programmer Hypothesis: competent
programmers write programs that are
approximately correct
 Coupling Effect Hypothesis: tests that reveal
simple faults can also reveal complex faults
Overview
 Preliminaries
 All sorts of test techniques
 manual techniques
 coverage-based techniques
 fault-based techniques
 error-based techniques
 Comparison of test techniques
 Software reliability
Error-based testing
 Decomposes the input (such as requirements) into a
number of subdomains
 Tests inputs from each of these subdomains, and
especially points near and just on the boundaries
of these subdomains -- those being the spots
where we tend to make errors
 In fact, this is a systematic way of doing what
experienced programmers do: test for 0, 1, nil, etc
Error-based testing, example
Example requirement:
Library maintains a list of “hot” books. Each new
book is added to this list. After six months, it is
removed again. Also, if a book is more than four
months on the list, and has not been borrowed
more than four times a month, or it is more than
two months old and has been borrowed at most
twice, it is removed from the list.
Example (cnt’d)
[Plot: subdomains A and B in the plane of age (boundaries at 2, 4, 6 months) versus average number of loans (boundaries at 2 and 5)]
Strategies for error-based testing
 An ON point is a point on the border of a subdomain
 If a subdomain is open w.r.t. some border, then an
OFF point of that border is a point just inside that
border
 If a subdomain is closed w.r.t. some border, then an
OFF point of that border is a point just outside that
border
 So the circle on the line age=6 is an ON point of both
A and B
 The other circle is an OFF point of both A and B
Strategies for error-based testing (cnt’d)
 Suppose we have subdomains Di, i=1,..n
 Create test set with N test cases for ON points of
each border B of each subdomain Di, and at least
one test case for an OFF point of each border
 This set is called N∗1 domain adequate
Application to programs
if x < 6 then …
elsif x > 4 and y < 5 then …
elsif x > 2 and y <= 2 then …
else ...
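The conditions above, rendered in Python with ON and OFF points for one border (a sketch: the return labels A-D are our own, and we take the border x = 6 of the first, open subdomain):

```python
def category(x, y):
    """Python rendering of the slide's nested conditions."""
    if x < 6:
        return "A"
    elif x > 4 and y < 5:
        return "B"
    elif x > 2 and y <= 2:
        return "C"
    return "D"

# The subdomain x < 6 is *open* w.r.t. the border x = 6:
# ON points lie on the border itself (outside the subdomain),
on_points = [(6, 0), (6, 10)]
# while an OFF point lies just inside the open border.
off_point = (5, 0)

# The ON points fall through to the later branches, the OFF point
# stays in the first subdomain -- exactly the spots where boundary
# faults (e.g. < written instead of <=) would show up.
```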
Overview
 Preliminaries
 All sorts of test techniques
 manual techniques
 coverage-based techniques
 fault-based techniques
 error-based techniques
 Comparison of test techniques
 Software reliability
Comparison of test adequacy criteria
 Criterion A is stronger than criterion B if, for all
programs P and all test sets T, A-adequacy
implies B-adequacy
 In that sense, e.g., All-Edges coverage is stronger
than All-Nodes coverage (All-Edges “subsumes”
All-Nodes)
 One problem: such criteria can only deal with
paths that can be executed (are feasible). So, if
you have dead code, you can never obtain 100%
statement coverage. Sometimes, the subsumes
relation only holds for the feasible version.
Desirable properties of adequacy criteria
 applicability property
 non-exhaustive applicability property
 monotonicity property
 inadequate empty set property
 antiextensionality property
 general multiplicity change property
 antidecomposition property
 anticomposition property
 renaming property
 complexity property
 statement coverage property
Experimental results
 There is no uniform best test technique
 The use of multiple techniques results in the
discovery of more faults
 (Fagan) inspections have been found to be very
cost effective
 Early attention to testing does pay off
Overview
 Preliminaries
 All sorts of test techniques
 manual techniques
 coverage-based techniques
 fault-based techniques
 error-based techniques
 Comparison of test techniques
 Software reliability
Software reliability
Interested in expected number of failures (not
faults)
… in a certain period of time
… of a certain product
… running in a certain environment
Software reliability: definition
Probability that the system will not fail during a
certain period of time in a certain environment
Failure behavior
Subsequent failures are modeled by a stochastic
process
Failure behavior changes over time (e.g. because
errors are corrected) ⇒ stochastic process is
non-homogeneous
µ(τ) = average number of failures until time τ
λ(τ) = average number of failures at time τ (failure
intensity)
λ(τ) is the derivative of µ(τ)
Failure intensity λ(τ) and mean failures µ(τ)
Operational profile
Input results in the execution of a certain sequence
of instructions
Different input ⇒ (probably) different sequence
Input domain can thus be split in a series of
equivalence classes
Set of possible input classes together with their
probabilities
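Drawing test inputs according to an operational profile can be sketched as weighted sampling over the input classes (the classes and probabilities below are made-up illustrations):

```python
import random

def operational_profile_tests(profile, k, rng=random):
    """Draw k test inputs: each input class is selected with
    the probability the operational profile assigns to it."""
    classes, probs = zip(*profile.items())
    return rng.choices(classes, weights=probs, k=k)

# Hypothetical profile: 70% queries, 25% updates, 5% admin actions.
profile = {"query": 0.7, "update": 0.25, "admin": 0.05}
sample = operational_profile_tests(profile, 1000, random.Random(0))
```

Testing with such a sample exercises the system the way it will actually be used, which is what reliability estimates assume.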
Two simple models
Basic execution time model (BM)
 Failure intensity decreases by a constant amount with each failure experienced
 Assumes uniform operational profile
 Effectiveness of fault correction is constant over time
Logarithmic Poisson execution time model (LPM)
 First failures contribute more to decrease in failure intensity
than later failures
 Assumes non-uniform operational profile
 Effectiveness of fault correction decreases over time
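The two models have standard closed forms (the usual Musa formulas; parameter names λ0 = initial failure intensity, ν0 = total expected failures, θ = intensity decay are conventional, not taken from the slides):

```python
import math

def mu_basic(tau, lam0, nu0):
    """Basic execution time model: mean failures by time tau,
    mu(tau) = nu0 * (1 - exp(-lam0 * tau / nu0))."""
    return nu0 * (1.0 - math.exp(-lam0 * tau / nu0))

def lam_basic(tau, lam0, nu0):
    """Failure intensity = d mu / d tau: decreases by a constant
    amount per failure experienced."""
    return lam0 * math.exp(-lam0 * tau / nu0)

def mu_log_poisson(tau, lam0, theta):
    """Logarithmic Poisson model: early failures reduce the
    intensity more than later ones,
    mu(tau) = (1/theta) * ln(lam0 * theta * tau + 1)."""
    return (1.0 / theta) * math.log(lam0 * theta * tau + 1.0)
```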
Estimating model parameters (for BM)
Summary
 Do test as early as possible
 Testing is a continuous process
 Design with testability in mind
 Test activities must be carefully planned,
controlled and documented.
 No single reliability model performs best
consistently
 
ISTQB / ISEB Foundation Exam Practice -1
ISTQB / ISEB Foundation Exam Practice -1ISTQB / ISEB Foundation Exam Practice -1
ISTQB / ISEB Foundation Exam Practice -1
 
Types of Software Testing
Types of Software TestingTypes of Software Testing
Types of Software Testing
 
H047054064
H047054064H047054064
H047054064
 
30 February 2005 QUEUE rants [email protected] DARNEDTestin.docx
30  February 2005  QUEUE rants [email protected] DARNEDTestin.docx30  February 2005  QUEUE rants [email protected] DARNEDTestin.docx
30 February 2005 QUEUE rants [email protected] DARNEDTestin.docx
 
Combinatorial testing ppt
Combinatorial testing pptCombinatorial testing ppt
Combinatorial testing ppt
 
Software testing
Software testingSoftware testing
Software testing
 
Testing Software Solutions
Testing Software SolutionsTesting Software Solutions
Testing Software Solutions
 
Combinatorial testing
Combinatorial testingCombinatorial testing
Combinatorial testing
 
Chapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.pptChapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.ppt
 
Software test proposal
Software test proposalSoftware test proposal
Software test proposal
 
Software testing2
Software testing2Software testing2
Software testing2
 
Software testing
Software testingSoftware testing
Software testing
 

Recently uploaded

The Economic History of the U.S. Lecture 30.pdf
The Economic History of the U.S. Lecture 30.pdfThe Economic History of the U.S. Lecture 30.pdf
The Economic History of the U.S. Lecture 30.pdfGale Pooley
 
Instant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School SpiritInstant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School Spiritegoetzinger
 
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service NashikHigh Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsHigh Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
Independent Call Girl Number in Kurla Mumbai📲 Pooja Nehwal 9892124323 💞 Full ...
Independent Call Girl Number in Kurla Mumbai📲 Pooja Nehwal 9892124323 💞 Full ...Independent Call Girl Number in Kurla Mumbai📲 Pooja Nehwal 9892124323 💞 Full ...
Independent Call Girl Number in Kurla Mumbai📲 Pooja Nehwal 9892124323 💞 Full ...Pooja Nehwal
 
VIP Kolkata Call Girl Serampore 👉 8250192130 Available With Room
VIP Kolkata Call Girl Serampore 👉 8250192130  Available With RoomVIP Kolkata Call Girl Serampore 👉 8250192130  Available With Room
VIP Kolkata Call Girl Serampore 👉 8250192130 Available With Roomdivyansh0kumar0
 
The Economic History of the U.S. Lecture 22.pdf
The Economic History of the U.S. Lecture 22.pdfThe Economic History of the U.S. Lecture 22.pdf
The Economic History of the U.S. Lecture 22.pdfGale Pooley
 
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...ssifa0344
 
Q3 2024 Earnings Conference Call and Webcast Slides
Q3 2024 Earnings Conference Call and Webcast SlidesQ3 2024 Earnings Conference Call and Webcast Slides
Q3 2024 Earnings Conference Call and Webcast SlidesMarketing847413
 
Booking open Available Pune Call Girls Shivane 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Shivane  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Shivane  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Shivane 6297143586 Call Hot Indian Gi...Call Girls in Nagpur High Profile
 
The Economic History of the U.S. Lecture 19.pdf
The Economic History of the U.S. Lecture 19.pdfThe Economic History of the U.S. Lecture 19.pdf
The Economic History of the U.S. Lecture 19.pdfGale Pooley
 
05_Annelore Lenoir_Docbyte_MeetupDora&Cybersecurity.pptx
05_Annelore Lenoir_Docbyte_MeetupDora&Cybersecurity.pptx05_Annelore Lenoir_Docbyte_MeetupDora&Cybersecurity.pptx
05_Annelore Lenoir_Docbyte_MeetupDora&Cybersecurity.pptxFinTech Belgium
 
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure serviceCall US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure servicePooja Nehwal
 
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
Instant Issue Debit Cards - School Designs
Instant Issue Debit Cards - School DesignsInstant Issue Debit Cards - School Designs
Instant Issue Debit Cards - School Designsegoetzinger
 
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...Suhani Kapoor
 
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Pooja Nehwal
 
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service AizawlVip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawlmakika9823
 

Recently uploaded (20)

The Economic History of the U.S. Lecture 30.pdf
The Economic History of the U.S. Lecture 30.pdfThe Economic History of the U.S. Lecture 30.pdf
The Economic History of the U.S. Lecture 30.pdf
 
Instant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School SpiritInstant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School Spirit
 
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service NashikHigh Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
 
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsHigh Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
 
Independent Call Girl Number in Kurla Mumbai📲 Pooja Nehwal 9892124323 💞 Full ...
Independent Call Girl Number in Kurla Mumbai📲 Pooja Nehwal 9892124323 💞 Full ...Independent Call Girl Number in Kurla Mumbai📲 Pooja Nehwal 9892124323 💞 Full ...
Independent Call Girl Number in Kurla Mumbai📲 Pooja Nehwal 9892124323 💞 Full ...
 
VIP Kolkata Call Girl Serampore 👉 8250192130 Available With Room
VIP Kolkata Call Girl Serampore 👉 8250192130  Available With RoomVIP Kolkata Call Girl Serampore 👉 8250192130  Available With Room
VIP Kolkata Call Girl Serampore 👉 8250192130 Available With Room
 
The Economic History of the U.S. Lecture 22.pdf
The Economic History of the U.S. Lecture 22.pdfThe Economic History of the U.S. Lecture 22.pdf
The Economic History of the U.S. Lecture 22.pdf
 
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
 
Commercial Bank Economic Capsule - April 2024
Commercial Bank Economic Capsule - April 2024Commercial Bank Economic Capsule - April 2024
Commercial Bank Economic Capsule - April 2024
 
Q3 2024 Earnings Conference Call and Webcast Slides
Q3 2024 Earnings Conference Call and Webcast SlidesQ3 2024 Earnings Conference Call and Webcast Slides
Q3 2024 Earnings Conference Call and Webcast Slides
 
Booking open Available Pune Call Girls Shivane 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Shivane  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Shivane  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Shivane 6297143586 Call Hot Indian Gi...
 
The Economic History of the U.S. Lecture 19.pdf
The Economic History of the U.S. Lecture 19.pdfThe Economic History of the U.S. Lecture 19.pdf
The Economic History of the U.S. Lecture 19.pdf
 
05_Annelore Lenoir_Docbyte_MeetupDora&Cybersecurity.pptx
05_Annelore Lenoir_Docbyte_MeetupDora&Cybersecurity.pptx05_Annelore Lenoir_Docbyte_MeetupDora&Cybersecurity.pptx
05_Annelore Lenoir_Docbyte_MeetupDora&Cybersecurity.pptx
 
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure serviceCall US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
 
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
 
Instant Issue Debit Cards - School Designs
Instant Issue Debit Cards - School DesignsInstant Issue Debit Cards - School Designs
Instant Issue Debit Cards - School Designs
 
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
 
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
 
Veritas Interim Report 1 January–31 March 2024
Veritas Interim Report 1 January–31 March 2024Veritas Interim Report 1 January–31 March 2024
Veritas Interim Report 1 January–31 March 2024
 
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service AizawlVip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
 

Testingppt

  • 8. SE, Testing, Hans van Vliet, ©2008 8 Classification of testing techniques  Classification based on the criterion to measure the adequacy of a set of test cases:  coverage-based testing  fault-based testing  error-based testing  Classification based on the source of information to derive test cases:  black-box testing (functional, specification-based)  white-box testing (structural, program-based)
  • 9. SE, Testing, Hans van Vliet, ©2008 9 Some preliminary questions  What exactly is an error?  What does the testing process look like?  When is test technique A superior to test technique B?  What do we want to achieve during testing?  When to stop testing?
  • 10. SE, Testing, Hans van Vliet, ©2008 10 Error, fault, failure  an error is a human activity resulting in software containing a fault  a fault is the manifestation of an error  a fault may result in a failure
  • 11. SE, Testing, Hans van Vliet, ©2008 11 When exactly is a failure a failure?  Failure is a relative notion: e.g. a failure w.r.t. the specification document  Verification: evaluate a product to see whether it satisfies the conditions specified at the start: Have we built the system right?  Validation: evaluate a product to see whether it does what we think it should do: Have we built the right system?
  • 12. SE, Testing, Hans van Vliet, ©2008 12 Point to ponder: maiden flight of Ariane 5
  • 13. SE, Testing, Hans van Vliet, ©2008 13 Testing process [Figure: a test strategy selects a subset of the input, which is fed both to the program P and to an oracle; the real output of P is compared with the expected output from the oracle, yielding the test results]
  • 14. SE, Testing, Hans van Vliet, ©2008 14 Test adequacy criteria  Specifies requirements for testing  Can be used as stopping rule: stop testing if 100% of the statements have been tested  Can be used as measurement: a test set that covers 80% of the statements is better than one that covers 70%  Can be used as test case generator: look for a test which exercises some statements not covered by the tests so far  A given test adequacy criterion and the associated test technique are opposite sides of the same coin
  • 15. SE, Testing, Hans van Vliet, ©2008 15 What is our goal during testing?  Objective 1: find as many faults as possible  Objective 2: make you feel confident that the software works OK
  • 16. SE, Testing, Hans van Vliet, ©2008 16 Example constructive approach  Task: test module that sorts an array A[1..n]. A contains integers; n < 1000  Solution: take n = 0, 1, 37, 999, 1000. For n = 37, 999, take A as follows:  A contains random integers  A contains increasing integers  A contains decreasing integers  These are equivalence classes: we assume that one element from such a class suffices  This works if the partition is perfect
  • 17. SE, Testing, Hans van Vliet, ©2008 17 Testing models  Demonstration: make sure the software satisfies the specs  Destruction: try to make the software fail  Evaluation: detect faults in early phases  Prevention: prevent faults in early phases time
  • 18. SE, Testing, Hans van Vliet, ©2008 18 Testing and the life cycle  requirements engineering  criteria: completeness, consistency, feasibility, and testability.  typical errors: missing, wrong, and extra information  determine testing strategy  generate functional test cases  test specification, through reviews and the like  design  functional and structural tests can be devised on the basis of the decomposition  the design itself can be tested (against the requirements)  formal verification techniques  the architecture can be evaluated
  • 19. SE, Testing, Hans van Vliet, ©2008 19 Testing and the life cycle (cnt’d)  implementation  check consistency implementation and previous documents  code-inspection and code-walkthrough  all kinds of functional and structural test techniques  extensive tool support  formal verification techniques  maintenance  regression testing: either retest all, or a more selective retest
  • 20. SE, Testing, Hans van Vliet, ©2008 20 Test-Driven Development (TDD) First write the tests, then do the design/implementation Part of agile approaches like XP Supported by tools, e.g., JUnit Is more than a mere test technique; it subsumes part of the design work
  • 21. SE, Testing, Hans van Vliet, ©2008 21 Steps of TDD 1. Add a test 2. Run all tests, and see that the system fails 3. Make a small change to make the test work 4. Run all tests again, and see they all run properly 5. Refactor the system to improve its design and remove redundancies
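The five steps can be sketched with Python's unittest module (the JUnit counterpart mentioned on the previous slide); the `Stack` class and its test are hypothetical stand-ins for whatever unit is being developed:

```python
import unittest

# Step 1: add a test first. Before `Stack` below existed, running this
# test failed (step 2: see the system fail).
class TestStack(unittest.TestCase):
    def test_push_then_pop(self):
        s = Stack()
        s.push(3)
        self.assertEqual(s.pop(), 3)

# Step 3: the smallest change that makes the test pass.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

# Step 4: run all tests again and see they pass; step 5 (refactoring)
# would follow with the tests as a safety net.
unittest.main(exit=False, argv=["tdd-demo"])
```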
  • 22. SE, Testing, Hans van Vliet, ©2008 22 Test Stages  module-unit testing and integration testing  bottom-up versus top-down testing  system testing  acceptance testing  installation testing
  • 23. SE, Testing, Hans van Vliet, ©2008 23 Test documentation (IEEE 829)  Test plan  Test design specification  Test case specification  Test procedure specification  Test item transmittal report  Test log  Test incident report  Test summary report
  • 24. SE, Testing, Hans van Vliet, ©2008 24 Overview  Preliminaries  All sorts of test techniques  manual techniques  coverage-based techniques  fault-based techniques  error-based techniques  Comparison of test techniques  Software reliability
  • 25. SE, Testing, Hans van Vliet, ©2008 25 Manual Test Techniques  static versus dynamic analysis  compiler does a lot of static testing  static test techniques  reading, informal versus peer review  walkthrough and inspections  correctness proofs, e.g., pre-and post-conditions: {P} S {Q}  stepwise abstraction
  • 26. SE, Testing, Hans van Vliet, ©2008 26 (Fagan) inspection  Going through the code, statement by statement  Team with ~4 members, with specific roles:  moderator: organization, chairperson  code author: silent observer  (two) inspectors, readers: paraphrase the code  Uses checklist of well-known faults  Result: list of problems encountered
  • 27. SE, Testing, Hans van Vliet, ©2008 27 Example checklist  Wrong use of data: variable not initialized, dangling pointer, array index out of bounds, …  Faults in declarations: undeclared variable, variable declared twice, …  Faults in computation: division by zero, mixed- type expressions, wrong operator priorities, …  Faults in relational expressions: incorrect Boolean operator, wrong operator priorities, .  Faults in control flow: infinite loops, loops that execute n-1 or n+1 times instead of n, ...
  • 28. SE, Testing, Hans van Vliet, ©2008 28 Overview  Preliminaries  All sorts of test techniques  manual techniques  coverage-based techniques  fault-based techniques  error-based techniques  Comparison of test techniques  Software reliability
  • 29. SE, Testing, Hans van Vliet, ©2008 29 Coverage-based testing  Goodness is determined by the coverage of the product by the test set so far: e.g., % of statements or requirements tested  Often based on control-flow graph of the program  Three techniques:  control-flow coverage  data-flow coverage  coverage-based testing of requirements
  • 30. SE, Testing, Hans van Vliet, ©2008 30 Example of control-flow coverage procedure bubble (var a: array [1..n] of integer; n: integer); var i, j, temp: integer; begin for i:= 2 to n do if a[i] >= a[i-1] then goto next endif; j:= i; loop: if j <= 1 then goto next endif; if a[j] >= a[j-1] then goto next endif; temp:= a[j]; a[j]:= a[j-1]; a[j-1]:= temp; j:= j-1; goto loop; next: skip; enddo end bubble; input: n=2, a[1] = 5, a[2] = 3 (the check marks on the slide indicate that this single input executes every statement)
  • 31. SE, Testing, Hans van Vliet, ©2008 31 Example of control-flow coverage (cnt’d) procedure bubble (var a: array [1..n] of integer; n: integer); var i, j, temp: integer; begin for i:= 2 to n do if a[i] >= a[i-1] then goto next endif; j:= i; loop: if j <= 1 then goto next endif; if a[j] >= a[j-1] then goto next endif; temp:= a[j]; a[j]:= a[j-1]; a[j-1]:= temp; j:= j-1; goto loop; next: skip; enddo end bubble; input: n=2, a[1] = 5, a[2] = 3 (the slide highlights the variant condition a[i] = a[i-1]: the same input still executes every statement, yet does not reveal this fault)
  • 32. SE, Testing, Hans van Vliet, ©2008 32 Control-flow coverage  This example is about All-Nodes coverage, statement coverage  A stronger criterion: All-Edges coverage, branch coverage  Variations exercise all combinations of elementary predicates in a branch condition  Strongest: All-Paths coverage (≡ exhaustive testing)  Special case: all linearly independent paths, the cyclomatic number criterion
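The gap between All-Nodes and All-Edges coverage can be shown with a hand-instrumented sketch (the function `absolute` and the recording sets are illustrative, not from the slides):

```python
# `executed` records statements, `branches` records (condition, outcome)
# pairs, so the two coverage criteria can be checked separately.
executed, branches = set(), set()

def absolute(x):
    executed.add("enter")
    branches.add(("x<0", x < 0))
    if x < 0:
        executed.add("negate")
        x = -x
    executed.add("return")
    return x

absolute(-5)  # one test: every statement runs...
assert executed == {"enter", "negate", "return"}  # 100% statement coverage
assert branches == {("x<0", True)}                # ...but only one branch outcome

absolute(3)   # a second test is needed for All-Edges coverage
assert branches == {("x<0", True), ("x<0", False)}
```

This mirrors the bubble-sort example above: a test set can be All-Nodes adequate while leaving branch outcomes, let alone paths, unexercised.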
  • 33. SE, Testing, Hans van Vliet, ©2008 33 Data-flow coverage  Looks how variables are treated along paths through the control graph.  Variables are defined when they get a new value.  A definition in statement X is alive in statement Y if there is a path from X to Y in which this variable is not defined anew. Such a path is called definition-clear.  We may now test all definition-clear paths between each definition and each use of that definition, and each successor of the node containing that use: All-Uses coverage.
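The definition-use relations above can be made concrete in a short sketch; the `checkout` function and its definitions of `total` are hypothetical:

```python
# Comments mark definitions (def) and uses of the variable `total`.
def checkout(prices, member):
    total = sum(prices)      # def-1 of total
    if member:
        total = total * 0.9  # use of def-1, then def-2
    return round(total, 2)   # use of def-1 (member False) or of def-2

# All-Uses asks for a definition-clear path from every def to every use:
#   def-1 -> use in the discount line : needs member = True
#   def-1 -> use in the return        : needs member = False
#   def-2 -> use in the return        : needs member = True
tests = [([10.0, 5.0], True), ([10.0, 5.0], False)]
results = [checkout(p, m) for p, m in tests]
assert results == [13.5, 15.0]
```

Two tests suffice here because the member=True run covers two of the three def-use pairs along one definition-clear path.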
  • 34. SE, Testing, Hans van Vliet, ©2008 34 Coverage-based testing of requirements  Requirements may be represented as graphs, where the nodes represent elementary requirements, and the edges represent relations (like yes/no) between requirements.  And next we may apply the earlier coverage criteria to this graph
  • 35. SE, Testing, Hans van Vliet, ©2008 35 Example translation of requirements to a graph A user may order new books. He is shown a screen with fields to fill in. Certain fields are mandatory. One field is used to check whether the department’s budget is large enough. If so, the book is ordered and the budget reduced accordingly. Enter fields All mandatory fields there? Check budget Order book Notify user Notify user
  • 36. SE, Testing, Hans van Vliet, ©2008 36 Similarity with Use Case success scenario 1. User fills form 2. Book info checked 3. Dept budget checked 4. Order placed 5. User is informed Enter fields All mandatory fields there? Check budget Order book Notify user Notify user
  • 37. SE, Testing, Hans van Vliet, ©2008 37 Overview  Preliminaries  All sorts of test techniques  manual techniques  coverage-based techniques  fault-based techniques  error-based techniques  Comparison of test techniques  Software reliability
  • 38. SE, Testing, Hans van Vliet, ©2008 38 Fault-based testing  In coverage-based testing, we take the structure of the artifact to be tested into account  In fault-based testing, we do not directly consider this artifact  We just look for a test set with a high ability to detect faults  Two techniques:  Fault seeding  Mutation testing
  • 39. SE, Testing, Hans van Vliet, ©2008 39 Fault seeding
  • 40. SE, Testing, Hans van Vliet, ©2008 40 Mutation testing procedure insert(a, b, n, x); begin bool found:= false; for i:= 1 to n do if a[i] = x then found:= true; goto leave endif enddo; leave: if found then b[i]:= b[i] + 1 else n:= n+1; a[n]:= x; b[n]:= 1 endif end insert; (the slide marks candidate mutations of the loop bounds: 1 replaced by 2, and n replaced by n-1)
  • 41. SE, Testing, Hans van Vliet, ©2008 41 Mutation testing (cnt’d) procedure insert(a, b, n, x); begin bool found:= false; for i:= 1 to n do if a[i] = x then found:= true; goto leave endif enddo; leave: if found then b[i]:= b[i] + 1 else n:= n+1; a[n]:= x; b[n]:= 1 endif end insert; (this mutant has the loop bound n replaced by n-1)
  • 42. SE, Testing, Hans van Vliet, ©2008 42 How tests are treated by mutants  Let P be the original, and P’ the mutant  Suppose we have two tests:  T1 is a test, which inserts an element that equals a[k] with k<n  T2 is another test, which inserts an element that does not equal an element a[k] with k<n  Now P and P’ will behave the same on T1, while they differ for T2  In some sense, T2 is a “better” test, since it in a way tests this upper bound of the for-loop, which T1 does not
  • 43. SE, Testing, Hans van Vliet, ©2008 43 How to use mutants in testing  If a test produces different results for one of the mutants, that mutant is said to be dead  If a test set leaves us with many live mutants, that test set is of low quality  If we have M mutants, and a test set results in D dead mutants, then the mutation adequacy score is D/M  A larger mutation adequacy score means a better test set
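The mutation adequacy score D/M can be computed mechanically. A minimal sketch, with hand-written mutants of a hypothetical `original` (real mutation tools generate such variants automatically):

```python
def original(a, x):
    return sum(1 for v in a if v == x)       # count occurrences of x

def mutant_lt(a, x):
    return sum(1 for v in a if v < x)        # mutated operator: == -> <

def mutant_off(a, x):
    return sum(1 for v in a[:-1] if v == x)  # loop bound n -> n-1

# The second test hits the last element, the spot that kills mutant_off.
tests = [([1, 2, 2], 2), ([1, 2, 3], 3)]

def mutation_score(mutants, tests):
    # A mutant is dead if some test distinguishes it from the original.
    dead = sum(any(m(a, x) != original(a, x) for a, x in tests)
               for m in mutants)
    return dead / len(mutants)

assert mutation_score([mutant_lt, mutant_off], tests) == 1.0
```

Dropping the second test leaves `mutant_off` alive, lowering the score to 0.5 and flagging the test set as weaker.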
  • 44. SE, Testing, Hans van Vliet, ©2008 44 Strong vs weak mutation testing  Suppose we have a program P with a component T  We have a mutant T’ of T  Since T is part of P, we then also have a mutant P’ of P  In weak mutation testing, we require that T and T’ produce different results, but P and P’ may still produce the same results  In strong mutation testing, we require that P and P’ produce different results
  • 45. SE, Testing, Hans van Vliet, ©2008 45 Assumptions underlying mutation testing  Competent Programmer Hypothesis: competent programmers write programs that are approximately correct  Coupling Effect Hypothesis: tests that reveal simple faults can also reveal complex faults
  • 46. SE, Testing, Hans van Vliet, ©2008 46 Overview  Preliminaries  All sorts of test techniques  manual techniques  coverage-based techniques  fault-based techniques  error-based techniques  Comparison of test techniques  Software reliability
  • 47. SE, Testing, Hans van Vliet, ©2008 47 Error-based testing  Decomposes input (such as requirements) in a number of subdomains  Tests inputs from each of these subdomains, and especially points near and just on the boundaries of these subdomains -- those being the spots where we tend to make errors  In fact, this is a systematic way of doing what experienced programmers do: test for 0, 1, nil, etc
  • 48. SE, Testing, Hans van Vliet, ©2008 48 Error-based testing, example Example requirement: Library maintains a list of “hot” books. Each new book is added to this list. After six months, it is removed again. Also, if a book has been on the list for more than four months and has not been borrowed more than four times a month, or it is more than two months old and has been borrowed at most twice, it is removed from the list.
  • 49. SE, Testing, Hans van Vliet, ©2008 49 Example (cnt’d) [Figure: the (age, average number of loans) plane with borders at age = 2, 4, 6 and average loans = 2, 5, delimiting removal subdomains A and B; circles mark an ON point on the line age = 6 and an OFF point nearby]
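One possible reading of the removal rule as a predicate over the two axes of the figure (an assumption: the prose is ambiguous about whether “twice” means twice a month, so `avg_loans` is taken as loans per month throughout):

```python
# should_remove: one interpretation of the "hot list" removal requirement.
# age in months on the list, avg_loans = average number of loans per month.
def should_remove(age, avg_loans):
    return (age >= 6
            or (age > 4 and avg_loans <= 4)
            or (age > 2 and avg_loans <= 2))

# Error-based testing picks inputs near the subdomain borders:
assert should_remove(6, 10)       # border age = 6
assert should_remove(5, 4)        # just inside subdomain A
assert not should_remove(5, 5)    # just outside it
assert should_remove(3, 2)        # just inside subdomain B
assert not should_remove(3, 3)    # just outside it
```

Writing the rule out this way is itself a test of the requirement: the ambiguity only surfaces once a concrete predicate has to be chosen.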
  • 50. SE, Testing, Hans van Vliet, ©2008 50 Strategies for error-based testing  An ON point is a point on the border of a subdomain  If a subdomain is open w.r.t. some border, then an OFF point of that border is a point just inside that border  If a subdomain is closed w.r.t. some border, then an OFF point of that border is a point just outside that border  So the circle on the line age=6 is an ON point of both A and B  The other circle is an OFF point of both A and B
  • 51. SE, Testing, Hans van Vliet, ©2008 51 Strategies for error-based testing (cnt’d)  Suppose we have subdomains Di, i=1,..n  Create test set with N test cases for ON points of each border B of each subdomain Di, and at least one test case for an OFF point of each border  This set is called N∗1 domain adequate
  • 52. SE, Testing, Hans van Vliet, ©2008 52 Application to programs if x < 6 then … elsif x > 4 and y < 5 then … elsif x > 2 and y <= 2 then … else ...
  • 53. SE, Testing, Hans van Vliet, ©2008 53 Overview  Preliminaries  All sorts of test techniques  manual techniques  coverage-based techniques  fault-based techniques  error-based techniques  Comparison of test techniques  Software reliability
  • 54. SE, Testing, Hans van Vliet, ©2008 54 Comparison of test adequacy criteria  Criterion A is stronger than criterion B if, for all programs P and all test sets T, A-adequacy implies B-adequacy  In that sense, e.g., All-Edges is stronger than All-Nodes coverage (All-Edges “subsumes” All-Nodes)  One problem: such criteria can only deal with paths that can be executed (are feasible). So, if you have dead code, you can never obtain 100% statement coverage. Sometimes, the subsumes relation only holds for the feasible version.
  • 55. SE, Testing, Hans van Vliet, ©2008 55 Desirable properties of adequacy criteria  applicability property  non-exhaustive applicability property  monotonicity property  inadequate empty set property  antiextensionality property  general multiplicity change property  antidecomposition property  anticomposition property  renaming property  complexity property  statement coverage property
  • 56. Experimental results  There is no uniform best test technique  The use of multiple techniques results in the discovery of more faults  (Fagan) inspections have been found to be very cost effective  Early attention to testing does pay off
  • 57. Overview  Preliminaries  All sorts of test techniques  manual techniques  coverage-based techniques  fault-based techniques  error-based techniques  Comparison of test techniques  Software reliability
  • 58. Software reliability Interested in expected number of failures (not faults) … in a certain period of time … of a certain product … running in a certain environment
  • 59. Software reliability: definition Probability that the system will not fail during a certain period of time in a certain environment
  • 60. Failure behavior Subsequent failures are modeled by a stochastic process Failure behavior changes over time (e.g. because errors are corrected) ⇒ the stochastic process is non-homogeneous µ(τ) = average number of failures observed up to time τ λ(τ) = average number of failures per unit time at time τ (failure intensity) λ(τ) is the derivative of µ(τ)
  • 61. Failure intensity λ(τ) and mean failures µ(τ)
  • 62. Operational profile Input results in the execution of a certain sequence of instructions Different input ⇒ (probably) different sequence Input domain can thus be split in a series of equivalence classes Set of possible input classes together with their probabilities
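A sketch of drawing test inputs according to an operational profile; the input classes and their probabilities below are invented for illustration:

```python
import random

# Hypothetical operational profile: input classes with usage probabilities.
profile = {"query": 0.70, "update": 0.25, "admin": 0.05}

def draw_test_classes(n, seed=1):
    """Sample n input classes with frequencies matching actual use."""
    rng = random.Random(seed)
    classes = list(profile)
    weights = [profile[c] for c in classes]
    return rng.choices(classes, weights=weights, k=n)

sample = draw_test_classes(1000)
```

Test cases drawn this way mimic actual use of the system, which is exactly what reliability estimation requires.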
  • 63. Two simple models Basic execution time model (BM)  Decrease in failure intensity is constant over time  Assumes uniform operational profile  Effectiveness of fault correction is constant over time Logarithmic Poisson execution time model (LPM)  First failures contribute more to decrease in failure intensity than later failures  Assumes non-uniform operational profile  Effectiveness of fault correction decreases over time
  • 64. Estimating model parameters (for BM)
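One simple way to estimate the BM parameters (a sketch, not necessarily the method on the slide) exploits that under the basic model the failure intensity is linear in the cumulative number of failures, lambda(mu) = lam0 * (1 - mu / nu0); a least-squares line through observed (mu, lambda) pairs then yields both parameters:

```python
def fit_bm(points):
    """Least-squares line through (mu, lambda) observations.
    Returns (lam0, nu0): the intercept at mu = 0 and the zero crossing."""
    n = len(points)
    sx = sum(m for m, _ in points)
    sy = sum(l for _, l in points)
    sxx = sum(m * m for m, _ in points)
    sxy = sum(m * l for m, l in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    lam0 = (sy - slope * sx) / n      # intensity at mu = 0
    nu0 = -lam0 / slope               # where the line reaches zero
    return lam0, nu0

# Exact synthetic data for lam0 = 10, nu0 = 100: lambda = 10 - 0.1 * mu
lam0, nu0 = fit_bm([(0, 10.0), (50, 5.0), (100, 0.0)])
```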
  • 65. Summary  Do test as early as possible  Testing is a continuous process  Design with testability in mind  Test activities must be carefully planned, controlled and documented.  No single reliability model performs best consistently

Editor's Notes

  1. As an exercise, refer the students to the Therac-case, discussed in chapter 1.
  2. It thus really pays off to start testing early. See also following picture
  3. Old picture from Boehm’s book. It shows that errors discovered during operation might cost 100 times as much as errors discovered during requirements engineering.
  4. As a simple illustration of why exhaustive testing does not work: take a simple loop with an if-statement in it. Exhaustively testing this construct when the loop is executed 100 times takes 2^100 test cases. Random testing does work if you want to achieve reliability (see later sections/slides).
  5. Coverage-based: e.g. how many statements or requirements have been tested so far. Fault-based: e.g. how many seeded faults are found. Error-based: focus on error-prone points, e.g. off-by-one points. Black-box: you do not look inside, but base yourself only on the specification/functional description. White-box: you do look inside, at the structure of the actual program/specification. This classification is mostly used at the module level.
  6. For example, I may accidentally assume a procedure is only called with a positive argument (the error). So I forget to test for negative values (the fault). Now if the procedure is actually called with a negative argument, something may go wrong (a wrong answer, abnormal termination): the failure. Note that the relation between errors, faults and failures need not be 1-1.
  7. But even with this definition, things may be subtle. Suppose a program contains a fault which never shows up, say because a certain piece of the code never gets executed. Is this “latent” fault actually a fault? If not, does it become a fault if we reuse this part of the program in another context? See also next slide.
  8. The Ariane 5 took off and exploded within 40 seconds. Ultimate cause: overflow in the conversion of some variable; this case was not tested. In the Ariane 4, this did not cause any problem. The variable related to the horizontal speed of the rocket. The piece of software in question only served to speed up the restart of the launching process in case something went wrong and one had to stop the launch prematurely. The software ran for about a minute after launching. The Ariane 4 is much slower than the Ariane 5, so within this one minute the rocket was still going up, and the variable in question had a small value. In the Ariane 5, by this time the horizontal speed was much higher. So, a failure to specify boundary conditions for this software? A reuse failure?
  9. If exhaustive testing does not work, we have to select a good subset. But how do we determine the quality of such a test set? This is a very crucial step, and the various test techniques all address this issue in one way or another.
  10. Note that the stopping rule view is a special case of the measurement view. We use these adequacy criteria to decide whether one testing technique is better than another. A number of such relations between test techniques is given later on.
  11. Objective 1 is the kind of objective used in all kinds of functional and structural test techniques. These try to systematically exercise the software so as to make sure we test “everything”. The idea behind objective 2 is that we might not be interested in faults that never show up, but we really want to find those that have a large probability of manifesting themselves. So we pursue a high reliability. Random testing then works, provided the test case profile matches the operational profile, i.e. the distribution of test cases mimics actual use of the system. An example development method where this objective is applied is Cleanroom.
  12. This is an example approach where we want to find as many faults as possible. The partition is perfect iff the paths in the program follow the equivalence classes chosen. For instance, we assume that the sorting module treats all arrays of length 1 < n < 999 the same. Probably, those of length 1 and 999 are also treated in the same way, but just to make sure we test these boundary cases separately. Now if the sorting program treats, say, arrays with negative numbers differently from those with positive numbers, this equivalence class partitioning is not perfect, and a fault in the program may go unnoticed because we may happen to use a test case that, say, only has positive numbers, and none that has negative numbers.
  13. The first two models are phase models; testing is a phase following coding. The demonstration mode is often used when testing one’s own software. This model also applies when the test set is not carefully/systematically constructed. All kinds of structural and functional techniques follow the destructive mode of operation. The last two models acknowledge that testing is something that has to be done in every development phase. For instance, requirements can be reviewed too. And by making sure that there is a test for every requirement, including every non-functional requirement, you can even prevent errors from being made in the first place. Over the years, a gradual shift can be observed, from demonstration to prevention.
  14. Correctness proofs: complex, not done very often. Stepwise abstraction: the opposite of stepwise refinement, so you derive the pre- and postconditions of a module by working backwards from the individual statements.
  15. Much of the real value of this type of technique is in the learning process that the people get involved in.
  16. In branch coverage, both branches of an if-statement are tested, even if one is empty. In normal branch coverage, a combined condition like a = 1 and b = 2 requires two tests. We may also test all four combinations of the two simple predicates. The cyclomatic number criterion is related to the cyclomatic complexity metric of McCabe.
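The note's compound condition can be made concrete (the values are invented): branch coverage needs only one true and one false outcome, while multiple-condition coverage exercises all four combinations of the two simple predicates.

```python
def guarded(a, b):
    return "then" if a == 1 and b == 2 else "else"

# Branch coverage: two tests suffice, one per outcome of the whole condition.
branch_tests = [(1, 2), (0, 0)]
# Multiple-condition coverage: all four truth-value combinations.
mc_tests = [(1, 2), (1, 3), (0, 2), (0, 3)]
assert [guarded(a, b) for a, b in mc_tests] == ["then", "else", "else", "else"]
```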
  17. We have to include each successor to enforce that all branches following a P-use are taken. Further variations differentiate between uses in a predicate (P-uses) and uses elsewhere (in computations, C-uses). This leads to criteria like All-C-uses/Some-P-uses and the like.
  18. A kind of variation in program testing: faults are seeded by one group into the program, which is then tested by another group.
  19. In each variation (mutant), one simple change is made.
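A made-up illustration: a single-token change produces a mutant, and a test case "kills" the mutant when the two versions produce different outputs.

```python
def count_negatives(xs):
    return sum(1 for x in xs if x < 0)      # original

def count_negatives_mutant(xs):
    return sum(1 for x in xs if x <= 0)     # mutant: < changed to <=

# A test set without zeros cannot tell the two apart (the mutant survives):
assert count_negatives([-1, 4]) == count_negatives_mutant([-1, 4]) == 1
# A test case containing 0 kills the mutant:
assert count_negatives([0, -1]) == 1
assert count_negatives_mutant([0, -1]) == 2
```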
  20. Note that if we happen to insert an element of a that occurs before the final element, we won’t notice a difference
  21. This is a graphical view of this same requirement. It shows the two-dimensional (age, average number of loans per month) domain. The subdomains are bordered by lines such as age = 6, or (age = 4, 0 <= av <= 5). For each border, it is indicated which of the adjacent subdomains is closed by putting a hachure at that side; a subdomain is closed at some border iff that border belongs to the subdomain; otherwise it is open.
  22. This yields the same picture, with the same borders, and can be used with the same test set.
  23. Usually, stronger criteria induce higher testing costs
  24. These properties relate to program-based criteria. The first four are rather general and should apply to any test adequacy criterion. E.g., the applicability property says: for every program, there is an adequate test set. This is not true for the All-Nodes and All-Edges criteria, since programs may contain dead code, so that you cannot achieve 100% coverage. Anticomposition: if components have been tested adequately, this does not mean their composition is also tested adequately (cf. the Ariane 5 disaster). This property does not hold for All-Nodes and All-Edges.