Test Management
Software Testing
ISTQB Foundation Certificate Course
Course modules: 1 Principles, 2 Lifecycle, 3 Static testing, 4 Dynamic test techniques, 5 Management, 6 Tools
Contents
Organisation
Configuration Management
Test estimation, monitoring and control
Incident management
Standards for testing
Importance of Independence
[Chart: number of faults found against time, up to release to end users]
Organisational structures for testing

- Developer responsibility (only)
- Development team responsibility (buddy system)
- Tester(s) on the development team
- Dedicated team of testers (not developers)
- Internal test consultants (advice, review and support; they do not perform the testing)
- Outside organisation (3rd party testers)
Testing by developers

Pros:
- know the code best
- will find problems that testers will miss
- can find and fix faults cheaply

Cons:
- difficult to destroy your own work
- tendency to 'see' expected results, not actual results
- subjective assessment
Testing by development team (buddy)

Pros:
- some independence
- technical depth
- on friendly terms with the "buddy" - less threatening

Cons:
- pressure of own development work
- technical view, not business view
- lack of testing skill
Tester on development team

Pros:
- independent view of the software
- dedicated to testing, no development responsibility
- part of the team, working to the same goal: quality

Cons:
- lack of respect
- lonely, thankless task
- corruptible (peer pressure)
- a single view / opinion
Independent test team

Pros:
- dedicated team just to do testing
- specialist testing expertise
- testing is more objective & more consistent

Cons:
- "over the wall" syndrome
- may be antagonistic / confrontational
- over-reliance on testers, insufficient testing by developers
Internal test consultants

Pros:
- highly specialist testing expertise, providing support and help to improve testing done by all
- better planning, estimation & control from a broad view of testing in the organisation

Cons:
- someone still has to do the testing
- is the level of expertise enough?
- needs good "people" skills - communication
- influence, not authority
Outside organisation (3rd party)

Pros:
- highly specialist testing expertise (if outsourced to a good organisation)
- independent of internal politics

Cons:
- lack of company and product knowledge
- expertise gained goes outside the company
- expensive?
Usual choices

Component testing:
- done by programmers (or buddy)

Integration testing in the small:
- poorly defined activity

System testing:
- often done by independent test team

Acceptance testing:
- done by users (with technical help)
- demonstration for confidence
Resourcing issues

- independence is important, but not a replacement for familiarity
- there are different levels of independence, with pros and cons at all levels
- test techniques offer another dimension to independence (independence of thought)
- the test strategy should use a good mix - a "declaration of independence"
- a balance of skills is needed
Skills needed in testing

- Technique specialists
- Automators
- Database experts
- Business skills & understanding
- Usability experts
- Test environment experts
- Test managers
Problems resulting from poor configuration management

- can't reproduce a fault reported by a customer
- can't roll back to a previous subsystem
- one change overwrites another
- an emergency fault fix needs testing, but the tests have been updated to the new software version
- which code changes belong to which version?
- faults which were fixed re-appear
- tests worked perfectly - on the old version
- "Shouldn't that feature be in this version?"
A definition of Configuration Management

"The process of identifying and defining the configuration items in a system, controlling the release and change of these items throughout the system life cycle, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items."
- ANSI/IEEE Std 729-1983, Software Engineering Terminology
Configuration Management

An engineering management procedure that includes:
- configuration identification
- configuration control
- configuration status accounting
- configuration audit
(Encyclopedia of Software Engineering, 1994)
Configuration identification
[Diagram: of the four CM activities (Configuration Identification, Configuration Control, Status Accounting, Configuration Auditing), identification is expanded into: configuration structures, CI planning, selection criteria, version/issue numbering, baseline/release planning and naming conventions]
CI (configuration item): a stand-alone, test-alone, use-alone element
Configuration control
[Diagram: configuration control expanded into: CI submission, withdrawal/distribution control, status/version control, clearance investigation, impact analysis, authorised amendment, review/test, controlled area/library, problem/fault reporting, change control and the Configuration Control Board]
Status accounting & Configuration Auditing
[Diagram: status accounting expanded into: the Status Accounting database, input to the SA database, queries and reports, data analysis, and traceability/impact analysis. Configuration auditing expanded into: procedural conformance and CI verification - agree with the customer what has been built, tested & delivered]
Products for CM in testing

- test plans
- test designs
- test cases:
  - test input
  - test data
  - test scripts
  - expected results
- actual results
- test tools

CM is critical for controlled testing.

What would not be under configuration management? Live data!
Estimating testing is no different

Estimating any job involves the following:
- identify tasks
- how long for each task
- who should perform the task
- when should the task start and finish
- what resources, what skills
- predictable dependencies:
  - task precedence (build a test before running it)
  - technical precedence (add & display before edit)
Estimating testing is different

Additional destabilising dependencies:
- testing is not an independent activity
- delivery schedules for testable items are missed
- test environments are critical

Test iterations (cycles):
- testing should find faults
- faults need to be fixed
- after fixing, the software needs to be retested
- how many times does this happen?
Test cycles / iterations
[Diagram: in theory, one pass of identify, design, build, execute and verify, followed by debug and retest; in practice, several debug/retest cycles follow. 3-4 iterations is typical]
Estimating iterations

- past history
- number of faults expected:
  - can be predicted from previous test effectiveness and previous faults found (in test, review, Inspection)
  - % of faults found in each iteration (nested faults)
  - % fixed [in]correctly
- time to report faults
- time waiting for fixes
- how much is in each iteration?
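The percentages above can be combined into a rough model. A minimal sketch in Python, with assumed example numbers (not from the course): a per-iteration find rate and a bad-fix rate predict how many faults each test iteration is likely to find.

```python
def iteration_faults(expected_faults, find_rate=0.5, bad_fix_rate=0.1,
                     iterations=4):
    """Rough per-iteration fault model: each iteration finds a fraction
    (find_rate) of the faults still present; a fraction of the fixes
    (bad_fix_rate) is incorrect and leaves a fault behind."""
    remaining = expected_faults
    found_per_iteration = []
    for _ in range(iterations):
        found = remaining * find_rate
        found_per_iteration.append(round(found))
        # faults left = those not found, plus faults left by bad fixes
        remaining = (remaining - found) + found * bad_fix_rate
    return found_per_iteration

# e.g. 100 expected faults, 50% found per iteration, 10% of fixes bad:
print(iteration_faults(100))
```

The declining counts per iteration are one way to justify the "3-4 iterations is typical" rule of thumb: after a few cycles, the faults left to find no longer repay a full test run.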
Time to report faults

- If it takes 10 minutes to write a fault report, how many can be written in one day?
- The more fault reports you write, the less testing you will be able to do.

Fault analysis and reporting take time away from test execution. Mike Royce suggests a suspension criterion: suspend testing when testers spend more than 25% of their time on faults.
Measuring test execution progress 1
[Chart: S-curves of tests planned, run and passed over time, from now to the release date. What does this mean? What would you do?]
Diverging S-curve

Possible causes:
- poor test entry criteria
- ran easy tests first
- insufficient debug effort
- common faults affect all tests
- software quality very poor

Potential control actions:
- tighten entry criteria
- cancel project
- do more debugging
- stop testing until faults fixed
- continue testing to scope software quality

Note: solutions / actions will impact other things as well, e.g. schedules.
Measuring test execution progress 2
[Chart: tests planned, run and passed over time, with the action taken and the old and new release dates marked]
Measuring test execution progress 3
[Chart: tests planned, run and passed over time, with the action taken and the old and new release dates marked]
Case history
[Chart: opened vs closed Incident Reports (IRs), 0-200, from 04-Jun to 09-Feb. Source: Tim Trew, Philips, June 1999]
Control

Management actions and decisions:
- affect the process, tasks and people
- to meet the original or a modified plan
- to achieve objectives

Examples:
- tighten entry / exit criteria
- reallocation of resources

Feedback is essential to see the effect of actions and decisions.
Entry and exit criteria
[Diagram: Test Phase 1's exit criteria ("tested", completion criteria) feed Test Phase 2's entry criteria ("is it ready for my testing?", acceptance criteria)]
Entry/exit criteria examples (from poor to better):
- clean compiled
- programmer claims it is working OK
- lots of tests have been run
- tests have been reviewed / Inspected
- no faults found in current tests
- all faults found fixed and retested
- specified coverage achieved
- all tests run after last fault fix, no new faults
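The "better" end of such a list lends itself to a machine-checkable gate. A minimal sketch (the function and parameter names are illustrative, not from any standard):

```python
def exit_criteria_met(coverage, target_coverage, open_faults,
                      all_tests_rerun_since_last_fix):
    """Checks the stronger criteria: specified coverage achieved, all
    faults found are fixed and retested (none left open), and all tests
    were re-run after the last fault fix."""
    return (coverage >= target_coverage
            and open_faults == 0
            and all_tests_rerun_since_last_fix)

print(exit_criteria_met(0.96, 0.95, 0, True))   # True  - ready to exit
print(exit_criteria_met(0.96, 0.95, 3, True))   # False - open faults remain
```

Weak criteria such as "programmer claims it is working OK" cannot be automated like this, which is one symptom of their weakness.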
What actions can you take?

What can you affect?
- resource allocation
- number of test iterations
- tests included in an iteration
- entry / exit criteria applied
- release date

What can you not affect?
- number of faults already there

What can you affect indirectly?
- rework effort
- which faults to be fixed [first]
- quality of fixes (entry criteria to retest)
Incident management

Incident: any event that occurs during testing that requires subsequent investigation or correction.
- actual results do not match expected results
- possible causes:
  - software fault
  - test was not performed correctly
  - expected results incorrect
- can be raised for documentation as well as code
Incidents

- may be used to monitor and improve testing
- should be logged (after hand-over)
- should be tracked through stages, e.g.:
  - initial recording
  - analysis (s/w fault, test fault, enhancement, etc.)
  - assignment to fix (if a fault)
  - fixed, not tested
  - fixed and tested OK
  - closed
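The stages above behave like a small state machine: an incident may only move forward through defined transitions. A sketch in Python (the stage names and allowed transitions are illustrative, not a standard):

```python
# Allowed transitions between incident stages; an analysed incident may be
# closed directly (e.g. not a fault) or assigned for fixing.
ALLOWED = {
    "recorded": {"analysed"},
    "analysed": {"assigned", "closed"},
    "assigned": {"fixed_not_tested"},
    "fixed_not_tested": {"fixed_tested_ok"},
    "fixed_tested_ok": {"closed"},
    "closed": set(),
}

def advance(current, nxt):
    """Move an incident to the next stage, rejecting illegal jumps."""
    if nxt not in ALLOWED[current]:
        raise ValueError(f"cannot go from {current!r} to {nxt!r}")
    return nxt

state = "recorded"
for step in ("analysed", "assigned", "fixed_not_tested",
             "fixed_tested_ok", "closed"):
    state = advance(state, step)
print(state)  # closed
```

Rejecting illegal jumps (e.g. straight from "recorded" to "closed") is what makes the tracking trustworthy for the metrics on the next slide.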
Use of incident metrics
[Chart annotations: Is this testing approach "wearing out"? What happened in that week? We're better than last year. How many faults can we expect?]
Report as quickly as possible?
[Flow diagram with illustrative effort figures: test reports an incident (5); development reproduces (20) and fixes (5) it; re-test confirms the fault fixed (10). Failure paths: test can't reproduce - "not a fault" - but it is still there; dev can't reproduce, so the incident goes back to test to report again (10); insufficient information - the fix is incorrect]
What information about incidents?

- Test ID
- Test environment
- Software under test ID
- Actual & expected results
- Severity, scope, priority
- Name of tester
- Any other relevant information (e.g. how to reproduce it)
Severity versus priority

Severity:
- the impact of a failure caused by this fault

Priority:
- the urgency to fix a fault

Examples:
- a minor cosmetic typo (e.g. in the company name, seen by a board member): high priority, not severe
- a crash if this feature is used (e.g. an experimental feature, not needed yet): severe, not high priority
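Recording severity and priority as independent fields makes the distinction concrete. A minimal sketch (the Incident class and its numeric scales are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Incident:
    summary: str
    severity: int  # impact of the failure: 1 = cosmetic ... 5 = crash/data loss
    priority: int  # urgency to fix:        1 = whenever ... 5 = fix now

# The two examples above: the typo is urgent but harmless,
# the crash is harmful but can wait.
typo = Incident("typo in company name on board report", severity=1, priority=5)
crash = Incident("crash in experimental, unneeded feature", severity=5, priority=1)

# The fix queue is ordered by priority, not severity:
queue = sorted([crash, typo], key=lambda i: i.priority, reverse=True)
print(queue[0].summary)  # the typo is fixed first
```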
Incident Lifecycle (tester tasks and developer tasks)
1. steps to reproduce a fault
2. test fault or system fault?
3. external factors that influence the symptoms
4. root cause of the problem
5. how to repair (without introducing new problems)
6. changes debugged and properly component tested
7. is the fault fixed?
Source: Rex Black, "Managing the Testing Process", MS Press, 1999
Standards for testing

- QA standards (e.g. ISO 9000): testing should be performed
- industry-specific standards (e.g. railway, pharmaceutical, medical): what level of testing should be performed
- testing standards (e.g. BS 7925-1 & 2): how to perform testing
Summary: Key Points
- Independence can be achieved by different organisational structures
- Configuration Management is critical for testing
- Tests must be estimated, monitored and controlled
- Incidents need to be managed
- Standards for testing: quality, industry, testing
Which of the following are the most important factors to be taken into account when selecting test techniques?
(i) Tools available.
(ii) Regulatory standards.
(iii) Experience of the development team.
(iv) Knowledge of the test team.
(v) The need to maintain levels of capability in each technique.

a. (i) and (ii)
b. (ii) and (iv)
c. (iii) and (iv)
d. (i) and (v)
What is the purpose of exit criteria?

a. To identify how many tests to design.
b. To identify when to start testing.
c. To identify when to stop testing.
d. To identify who will carry out the test execution.
What can a risk based approach to testing provide?

a. The types of test techniques to be employed.
b. The total tests needed to provide 100 per cent coverage.
c. An estimation of the total cost of testing.
d. Only that test execution is effective at reducing risk.
When assembling a test team to work on an enhancement to an existing system, which of the following has the highest level of test independence?

a. A business analyst who wrote the original requirements for the system.
b. A permanent programmer who reviewed some of the new code, but has not written any of it.
c. A permanent tester who found most defects in the original system.
d. A contract tester who has never worked for the organization before.
What test roles (or parts in the testing process) is a developer most likely to perform?
(i) Executing component integration tests.
(ii) Static analysis.
(iii) Setting up the test environment.
(iv) Deciding how much testing should be automated.

a. (i) and (ii)

b. (i) and (iii)

c. (ii) and (iii)

d. (iii) and (iv)
Which of the following are valid justifications for the
developers testing their own code during unit testing?

(i) Their lack of independence is mitigated by independent
testing during system and acceptance testing.

(ii) A person with a good understanding of the code can find
more defects more quickly using white-box techniques.

(iii) Developers have a better understanding of the
requirements than testers.

(iv) Testers write unnecessary incident reports because they
find minor differences between the way in which the system
behaves and the way in which it is specified to work.
a. (i) and (ii)
b. (i) and (iv)
c. (ii) and (iii)
d. (iii) and (iv)
Which of the following terms is used to describe the management of software components comprising an integrated system?

a. Configuration management.
b. Incident management.
c. Test monitoring.
d. Risk management.
A new system is about to be developed. Which of the following functions has the highest level of risk?

a. Likelihood of failure = 20%; impact value = £100,000
b. Likelihood of failure = 10%; impact value = £150,000
c. Likelihood of failure = 1%; impact value = £500,000
d. Likelihood of failure = 2%; impact value = £200,000
Which of the following statements about risks is most
accurate?

a. Project risks rarely affect product risk.

b. Product risks rarely affect project risk.

c. A risk based approach is more likely to be used to
mitigate product rather than project risks.

d. A risk based approach is more likely to be used to
mitigate project rather than product risks.

Source: Software Testing ISTQB study material.ppt
  • 1.
    Test Management Software Testing ISTQBFoundation Certificate Course 1 Principles 2 Lifecycle 4 Dynamic test techniques 3 Static testing 5 Management 6 Tools
  • 2.
    Contents Organisation Configuration Management Test estimation,monitoring and control Incident management Standards for testing ISTQB Foundation Certificate Course Test Management 1 2 4 5 3 6
  • 3.
    Importance of Independence Time No.faults Release to End Users
  • 4.
    Organisational structures fortesting  Developer responsibility (only)  Development team responsibility (buddy system)  Tester(s) on the development team  Dedicated team of testers (not developers)  Internal test consultants (advice, review, support, not perform the testing)  Outside organisation (3rd party testers)
  • 5.
    Testing by developers  Pro’s: -know the code best know the code best - will find problems that the testers will miss will find problems that the testers will miss - they can find and fix faults cheaply they can find and fix faults cheaply  Con’s - difficult to destroy own work difficult to destroy own work - tendency to 'see' expected results, not actual results tendency to 'see' expected results, not actual results - subjective assessment subjective assessment
  • 6.
    Testing by developmentteam (buddy)  Pro’s: - some independence some independence - technical depth technical depth - on friendly terms with “buddy” - less threatening on friendly terms with “buddy” - less threatening  Con’s - pressure of own development work pressure of own development work - technical view, not business view technical view, not business view - lack of testing skill lack of testing skill
  • 7.
    Tester on developmentteam  Pro’s: - independent view of the software independent view of the software - dedicated to testing, no development responsibility dedicated to testing, no development responsibility - part of the team, working to same goal: quality part of the team, working to same goal: quality  Con’s - lack of respect lack of respect - lonely, thankless task lonely, thankless task - corruptible (peer pressure) corruptible (peer pressure) - a single view / opinion a single view / opinion
  • 8.
    Independent test team  Pro’s: -dedicated team just to do testing dedicated team just to do testing - specialist testing expertise specialist testing expertise - testing is more objective & more consistent testing is more objective & more consistent  Con’s - “ “over the wall” syndrome over the wall” syndrome - may be antagonistic / confrontational may be antagonistic / confrontational - over-reliance on testers, insufficient testing by over-reliance on testers, insufficient testing by developers developers
  • 9.
    Internal test consultants  Pro’s: -highly specialist testing expertise, providing support and help highly specialist testing expertise, providing support and help to improve testing done by all to improve testing done by all - better planning, estimation & control from a broad view of better planning, estimation & control from a broad view of testing in the organisation testing in the organisation  Con’s - someone still has to do the testing someone still has to do the testing - level of expertise enough? level of expertise enough? - needs good “people” skills - communication needs good “people” skills - communication - influence, not authority influence, not authority
  • 10.
    Outside organisation (3rdparty)  Pro’s: - highly specialist testing expertise (if outsourced to a highly specialist testing expertise (if outsourced to a good organisation) good organisation) - independent of internal politics independent of internal politics  Con’s - lack of company and product knowledge lack of company and product knowledge - expertise gained goes outside the company expertise gained goes outside the company - expensive? expensive?
  • 11.
    Usual choices  Component testing: -done by programmers (or buddy) done by programmers (or buddy)  Integration testing in the small: - poorly defined activity poorly defined activity  System testing: - often done by independent test team often done by independent test team  Acceptance testing: - done by users (with technical help) done by users (with technical help) - demonstration for confidence demonstration for confidence
  • 12.
    Resourcing issues  independence isimportant - not a replacement for familiarity not a replacement for familiarity  different levels of independence - pro's and con's at all levels pro's and con's at all levels  test techniques offer another dimension to independence (independence of thought)  test strategy should use a good mix - "declaration of independence” "declaration of independence”  balance of skills needed
  • 13.
    Skills needed intesting  Technique specialists  Automators  Database experts  Business skills & understanding  Usability expert  Test environment expert  Test managers
  • 14.
    Contents Organisation Configuration Management Test estimation,monitoring and control Incident management Standards for testing ISTQB Foundation Certificate Course Test Management 1 2 4 5 3 6
  • 15.
    Problems resulting frompoor configuration management  can’t reproduce a fault reported by a customer  can’t roll back to previous subsystem  one change overwrites another  emergency fault fix needs testing but tests have been updated to new software version  which code changes belong to which version?  faults which were fixed re-appear  tests worked perfectly - on old version  “Shouldn’t that feature be in this version?”
  • 16.
    A definition ofConfiguration Management  “The process of identifying and defining the configuration items in a system,  controlling the release and change of these items throughout the system life cycle,  recording and reporting the status of configuration items and change requests,  and verifying the completeness and correctness of configuration items.” - ANSI/IEEE Std 729-1983, Software Engineering ANSI/IEEE Std 729-1983, Software Engineering Terminology Terminology
  • 17.
    Configuration Management  An engineeringmanagement procedure that includes - configuration identification configuration identification - configuration control configuration control - configuration status accounting configuration status accounting - configuration audit configuration audit • Encyclopedia of Software Engineering, 1994 Encyclopedia of Software Engineering, 1994
  • 18.
  • 19.
  • 20.
    Status accounting &Configuration Auditing Configuration Identification Configuration Control Status Accounting Configuration Auditing Status Accounting Database Input to SA Database Queries and Reports Data Analysis Traceability, impact analysis Procedural Conformance CI Verification Agree with customer what has been built, tested & delivered
  • 21.
    Products for CMin testing  test plans  test designs  test cases: - test input test input - test data test data - test scripts test scripts - expected results expected results  actual results  test tools CM is critical for controlled testing What would not be under configuration management? Live data!
  • 22.
    Contents Organisation Configuration Management Test estimation,monitoring and control Incident management Standards for testing ISTQB Foundation Certificate Course Test Management 1 2 4 5 3 6
  • 23.
    Estimating testing isno different  Estimating any job involves the following - identify tasks identify tasks - how long for each task how long for each task - who should perform the task who should perform the task - when should the task start and finish when should the task start and finish - what resources, what skills what resources, what skills - predictable dependencies predictable dependencies • task precedence (build test before running it) task precedence (build test before running it) • technical precedence (add & display before edit) technical precedence (add & display before edit)
  • 24.
    Estimating testing isdifferent  Additional destabilising dependencies - testing is not an independent activity testing is not an independent activity - delivery schedules for testable items missed delivery schedules for testable items missed - test environments are critical test environments are critical  Test Iterations (Cycles) - testing should find faults testing should find faults - faults need to be fixed faults need to be fixed - after fixed, need to retest after fixed, need to retest - how many times does this happen? how many times does this happen?
  • 25.
    Test cycles /iterations Debug D R D R 3-4 iterations is typical Test Theory: Test Practice: Des Ex Ver Bld Iden Retest Retest
  • 26.
    Estimating iterations  past history  numberof faults expected - can predict from previous test effectiveness and previous can predict from previous test effectiveness and previous faults found (in test, review, Inspection) faults found (in test, review, Inspection) - % faults found in each iteration (nested faults) % faults found in each iteration (nested faults) - % fixed [in]correctly % fixed [in]correctly  time to report faults  time waiting for fixes  how much in each iteration?
  • 27.
    Time to reportfaults  If it takes 10 mins to write a fault report, how many can be written in one day?  The more fault reports you write, the less testing you will be able to do. Test Fault analysis & reporting Mike Royce: suspension criteria: when testers spend > 25% time on faults
  • 28.
    Measuring test executionprogress 1 tests run tests passed tests planned now release date what does this mean? what would you do?
  • 29.
    Diverging S-curve poor testentry criteria ran easy tests first insufficient debug effort common faults affect all tests software quality very poor tighten entry criteria cancel project do more debugging stop testing until faults fixed continue testing to scope software quality Note: solutions / actions will impact other things as well, e.g. schedules Possible causes Potential control actions
  • 30.
    Measuring test executionprogress 2 tests planned run passed action taken old release date new release date
  • 31.
    Measuring test executionprogress 3 tests planned run passed action taken old release date new release date
  • 32.
    Case history Incident Reports(IRs) 0 20 40 60 80 100 120 140 160 180 200 04-Jun 24-Jul 12-Sep 01-Nov 21-Dec 09-Feb Opened IRs Closed IRs Source: Tim Trew, Philips, June 1999
  • 33.
    Control  Management actions anddecisions - affect the process, tasks and people affect the process, tasks and people - to meet original or modified plan to meet original or modified plan - to achieve objectives to achieve objectives  Examples - tighten entry / exit criteria tighten entry / exit criteria - reallocation of resources reallocation of resources Feedback is essential to see the effect of actions and decisions
  • 34.
    Entry and exitcriteria Test Phase 1 Test Phase 2 "tested" is it ready for my testing? Phase 2 Phase 1 Entry criteria Exit criteria Acceptance criteria Completion criteria
  • 35.
    Entry/exit criteria examples poor better  cleancompiled  programmer claims it is working OK  lots of tests have been run  tests have been reviewed / Inspected  no faults found in current tests  all faults found fixed and retested  specified coverage achieved  all tests run after last fault fix, no new faults
  • 36.
    What actions canyou take?  What can you affect? - resource allocation resource allocation - number of test iterations number of test iterations - tests included in an tests included in an iteration iteration - entry / exit criteria entry / exit criteria applied applied - release date release date  What can you not affect: - number of faults already there number of faults already there  What can you affect indirectly? - rework effort rework effort - which faults to be fixed [first] which faults to be fixed [first] - quality of fixes (entry criteria quality of fixes (entry criteria to retest) to retest)
  • 37.
    Contents Organisation Configuration Management Test estimation,monitoring and control Incident management Standards for testing ISTQB Foundation Certificate Course Test Management 1 2 4 5 3 6
  • 38.
    Incident management  Incident: anyevent that occurs during testing that requires subsequent investigation or correction. - actual results do not match expected results actual results do not match expected results - possible causes: possible causes: • software fault software fault • test was not performed correctly test was not performed correctly • expected results incorrect expected results incorrect - can be raised for documentation as well as code can be raised for documentation as well as code
  • 39.
    Incidents  May be usedto monitor and improve testing  Should be logged (after hand-over)  Should be tracked through stages, e.g.: - initial recording initial recording - analysis (s/w fault, test fault, enhancement, etc.) analysis (s/w fault, test fault, enhancement, etc.) - assignment to fix (if fault) assignment to fix (if fault) - fixed not tested fixed not tested - fixed and tested OK fixed and tested OK - closed closed
  • 40.
    Use of incident metrics Incident trend charts can answer questions such as: - Is this testing approach "wearing out"? - What happened in that week? - Are we better than last year? - How many faults can we expect?
  • 41.
    Report as quickly as possible? [Flow diagram: an incident report passes from test (report) to development (reproduce, fix) and back to test (retest); figures on the diagram give relative effort per step. Failure paths add cost: the developer can't reproduce it, so it bounces back to test as "not a fault" although the problem is still there and must be reported again; the report has insufficient information; or the fix itself is incorrect.]
  • 42.
    What information about incidents? - Test ID - Test environment - Software under test ID - Actual & expected results - Severity, scope, priority - Name of tester - Any other relevant information (e.g. how to reproduce it)
  • 43.
    Severity versus priority Severity: impact of a failure caused by this fault Priority: urgency to fix a fault Examples (a minor cosmetic typo vs. a crash if this feature is used): - misspelt company name or board member's name: high priority, not severe - crash in an experimental feature, not needed yet: severe, not high priority
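    The two example incidents show why the attributes must be recorded separately: fix order follows priority, not severity. A sketch; the field names and 1-5 scales are illustrative assumptions:

```python
# Sketch: severity and priority as independent incident attributes.
# Field names and the 1-5 scales are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Incident:
    summary: str
    severity: int   # impact of a failure caused by this fault (5 = worst)
    priority: int   # urgency to fix (5 = most urgent)

typo  = Incident("Board member's name misspelt on home page", severity=1, priority=5)
crash = Incident("Crash in experimental feature, not needed yet", severity=5, priority=1)

# The fix queue is ordered by priority, not severity:
queue = sorted([crash, typo], key=lambda i: -i.priority)
print(queue[0].summary)  # the low-severity, high-priority typo is fixed first
```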
  • 44.
    Incident Lifecycle (tester tasks and developer tasks) 1. steps to reproduce a fault 2. test fault or system fault? 3. external factors that influence the symptoms 4. root cause of the problem 5. how to repair (without introducing new problems) 6. changes debugged and properly component tested 7. is the fault fixed? Source: Rex Black, "Managing the Testing Process", MS Press, 1999
  • 45.
    Contents Organisation · Configuration Management · Test estimation, monitoring and control · Incident management · Standards for testing (ISTQB Foundation Certificate Course, Test Management)
  • 46.
    Standards for testing  QAstandards (e.g. ISO 9000) - testing should be performed testing should be performed  industry-specific standards (e.g. railway, pharmaceutical, medical) - what level of testing should be performed what level of testing should be performed  testing standards (e.g. BS 7925-1&2) - how to perform testing how to perform testing
  • 47.
    Summary: Key Points - Independence can be achieved by different organisational structures - Configuration Management is critical for testing - Tests must be estimated, monitored and controlled - Incidents need to be managed - Standards for testing: quality, industry, testing (ISTQB Foundation Certificate Course, Test Management)
  • 48.
    Which of the following are the most important factors to be taken into account when selecting test techniques? (i) Tools available. (ii) Regulatory standards. (iii) Experience of the development team. (iv) Knowledge of the test team. (v) The need to maintain levels of capability in each technique.  a. (i) and (ii)  b. (ii) and (iv)  c. (iii) and (iv)  d. (i) and (v)
  • 49.
    What is the purpose of exit criteria?  a. To identify how many tests to design.  b. To identify when to start testing.  c. To identify when to stop testing.  d. To identify who will carry out the test execution.
  • 50.
    What can a risk-based approach to testing provide?  a. The types of test techniques to be employed.  b. The total tests needed to provide 100 per cent coverage.  c. An estimation of the total cost of testing.  d. Only that test execution is effective at reducing risk.
  • 51.
    When assembling a test team to work on an enhancement to an existing system, which of the following has the highest level of test independence?  a. A business analyst who wrote the original requirements for the system.  b. A permanent programmer who reviewed some of the new code, but has not written any of it.  c. A permanent tester who found most defects in the original system.  d. A contract tester who has never worked for the organisation before.
  • 52.
    Which test roles (or parts in the testing process) is a developer most likely to perform? (i) Executing component integration tests. (ii) Static analysis. (iii) Setting up the test environment. (iv) Deciding how much testing should be automated.  a. (i) and (ii)  b. (i) and (iii)  c. (ii) and (iii)  d. (iii) and (iv)
  • 53.
    Which of the following are valid justifications for developers testing their own code during unit testing?  (i) Their lack of independence is mitigated by independent testing during system and acceptance testing.  (ii) A person with a good understanding of the code can find more defects more quickly using white-box techniques.  (iii) Developers have a better understanding of the requirements than testers.  (iv) Testers write unnecessary incident reports because they find minor differences between the way in which the system behaves and the way in which it is specified to work.  a. (i) and (ii)  b. (i) and (iv)  c. (ii) and (iii)  d. (iii) and (iv)
  • 54.
    Which of the following terms is used to describe the management of software components comprising an integrated system?  a. Configuration management.  b. Incident management.  c. Test monitoring.  d. Risk management.
  • 55.
    A new system is about to be developed. Which of the following functions has the highest level of risk?  a. Likelihood of failure = 20%; impact value = £100,000  b. Likelihood of failure = 10%; impact value = £150,000  c. Likelihood of failure = 1%; impact value = £500,000  d. Likelihood of failure = 2%; impact value = £200,000
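    Risk level in questions like this can be read as exposure: likelihood of failure multiplied by impact value. A quick check of the four options (a sketch; the figures come from the question itself):

```python
# Risk exposure = likelihood of failure x impact value (GBP).
options = {
    "a": (0.20, 100_000),
    "b": (0.10, 150_000),
    "c": (0.01, 500_000),
    "d": (0.02, 200_000),
}
exposure = {name: p * impact for name, (p, impact) in options.items()}
highest = max(exposure, key=exposure.get)
print(highest)  # "a": roughly £20,000 exposure, the largest of the four
```

    Note that the option with the biggest single impact (c) carries the smallest exposure: likelihood and impact must always be weighed together.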
  • 56.
    Which of the following statements about risks is most accurate?  a. Project risks rarely affect product risk.  b. Product risks rarely affect project risk.  c. A risk-based approach is more likely to be used to mitigate product rather than project risks.  d. A risk-based approach is more likely to be used to mitigate project rather than product risks.

Editor's Notes

  • #23 Software being tested has internal dependencies: calling hierarchy, messages passed, use of data, visibility of features (display / print). Testing is dependent on the development schedule: the test order should determine the planned build order, but the actual build order depends on internal development aspects. Testing is dependent on the quality of the software: faults found => retesting.
  • #24 The Test Effort includes development activities as well as test activities