Proprietary and Confidential (March 4, 2015)
What is Software Testing?
“…the process of executing a program with the intent to certify its quality”
Mills
“…the process of executing a program with the intent of finding failures /
faults”
Myers
“…the process of exercising software to detect errors & to verify that it
satisfies specified requirements”
“Testing is any activity aimed at evaluating an attribute or capability of a
program or system and determining that it meets required results.”
Bill Hetzel, 1983
What is Software Testing (continued)?
Software testing is a process:
The input is often stakeholder requirements
The output is quality information
The process methodically probes the application
from various angles
What is Software Testing (continued)?
Software testing comprises aspects of:
• Engineering
– We must design tests that are effective in identifying software failures
• Literature
– We must understand the stakeholders’ needs and desires.
– A stakeholder is a person or entity who has a vested interest in the success
of a project. They may be end-users, financial backers, company sponsors
or corporate members.
• Communication
– We must present our findings in a way that our clients can make informed
decisions upon
Get with the Lingo!
Terms:
Error – a human action producing an incorrect result
Fault – a manifestation of an error in software
Failure – deviation of software from expected delivery or service
[Diagram: a human error (the mistyped loop statement “Do 100 I=1.10”) points to a fault in the code, which in turn produces a failure.]
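The error → fault → failure chain in the diagram can be shown with a minimal sketch (a hypothetical function, not from the slides):

```python
# A human ERROR (typing "<" instead of "<=") introduces a FAULT in the code.
def values_up_to(values, limit):
    """Intended behaviour: return every value up to AND including limit."""
    return [v for v in values if v < limit]  # fault: should be v <= limit

# The fault only becomes a FAILURE when an input exercises it:
result = values_up_to([1, 5, 10], 10)
# Expected delivery: [1, 5, 10]; actual: [1, 5] -- a failure.
```

Note that for inputs that never touch the boundary (e.g. limit 11) the fault stays hidden, which is exactly why a fault and a failure are distinct terms.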
Testing Terminology
DEFECT
The departure of a quality characteristic from its specified value that results in a product or
service not satisfying its normal usage requirements
RELIABILITY
The probability that software will not cause the failure of a system for a specified period of time
under specified conditions
QUALITY
The totality of the characteristics of an entity that bears on its ability to satisfy stated or implied needs
QUALITY ASSURANCE
All those planned actions used to fulfill the requirements for quality.
Misconceptions
Testing is an unnecessary expense
A common development perception
Software testers are ten a penny
Huge difference between good and bad testing
Software testing is not difficult
Most testing activities are not rocket science but …
• Testers have to juggle many things at once
• Make new decisions constantly
• Think methodically and scientifically
• Question things other people take for granted
Cost of Software Failures
A single failure may incur little cost or millions
In extreme cases software failure may cost LIVES (safety
critical systems – Airline, Nuclear Power)
The cost of failures increases proportionately (tenfold)
with the passing of each successive stage in the system
development process before they are detected
To correct a problem at the requirements stage may cost £1
To correct the problem post-implementation may cost
millions
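The tenfold rule above can be expressed as a quick sketch (the £1 base figure is from the slide; the exact stage list is an assumption):

```python
# Tenfold cost escalation: each later stage multiplies the fix cost by ~10.
STAGES = ["requirements", "design", "coding", "testing", "post-implementation"]

def fix_cost(stage, base_cost=1):
    """Cost (in £) to fix a defect first detected at the given stage."""
    return base_cost * 10 ** STAGES.index(stage)

print(fix_cost("requirements"))         # 1
print(fix_cost("post-implementation"))  # 10000, under this simple model
```

The exponential shape, not the exact figures, is the point: the later a defect is found, the more it costs to put right.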
Summary
The purpose of testing is to find faults. Faults
can be fixed, thereby making the software better.
Better software is more reliable, less prone to
failures.
Testing enables us to measure the quality of
the software
This enables us to understand and manage
the risk to the business
The Test Process
The test process has 5 steps:
Test Planning
Test Specification
Test Execution
Test Recording
Checking for Test Completion
Test Planning and Test Specification
Test Planning
• The Test Plan describes how the Test Strategy is implemented
• A project plan for testing
• Defines what is to be tested, how it is to be tested and what is needed
for testing
Test Specification has 3 steps
• Preparation & Analysis
• Building Test Cases
• Defining expected Results
Test Execution Checklist
Test execution schedule / log
Identify which tests are to be run
Test environment primed and ready
Resources ready, willing and able
Back-up and Recovery Processes in place
Any batch runs planned and scheduled
When all the above are in place we are ready to run the tests
Test Recording and Test Completion
Test Recording
Test Log should record
Software and Test Versions
Specifications / Requirements used as Test Base
Test Timings
Test Results (Actual Results v Expected Result)
Any Defect Details
Test Completion
Have we fulfilled the Test Exit Criteria
Used to determine when to stop this phase of testing
Key Functionality Tested
Test Coverage
Defect detection rate
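The test-log fields listed above can be captured in a simple record type (field names are illustrative, not taken from any standard):

```python
from dataclasses import dataclass

# One entry in a test log, holding the fields the slide lists.
@dataclass
class TestLogEntry:
    software_version: str
    test_version: str
    test_base: str          # specification / requirement tested against
    duration_secs: float
    expected_result: str
    actual_result: str
    defect_details: str = ""

    @property
    def passed(self) -> bool:
        # Actual result is compared against the expected result.
        return self.actual_result == self.expected_result

entry = TestLogEntry("1.2.0", "TC-042-v3", "REQ-7", 4.2,
                     "account locked after 3 failed logins",
                     "account locked after 3 failed logins")
```

Keeping expected and actual results side by side in the log is what makes the pass/fail judgement auditable later.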
The Psychology of Testing
In this session we will:
Understand what qualities make a good tester
Look at a tester’s relationship with developers
Look at a tester’s relationship with management
Understand the issues with testing independence
The Psychology of Testing
What makes a Tester?
Testing is primarily to find faults
Can be regarded as “destructive”
Development is “constructive”
Testing asks questions
Testers need to ask questions
A tester needs many qualities
The Psychology of Testing
What makes a tester:
Intellectual Qualities
Can absorb incomplete data
Can work with incomplete data
Can learn quickly at many levels
Good verbal communication
Good written communication
Ability to prioritise
The Psychology of Testing
What makes a Tester?
Knowledge
How projects work
How computer systems and business needs interact
Test Techniques
Testing Processes
Testing best practices
The ability to think outside the box
The Psychology of Testing
What makes a good tester?
More Skills to acquire
Ability to find faults – planning, preparation & execution
Ability to understand systems
Ability to read specifications
Ability to extract testable functionality
Ability to work efficiently
Ability to focus on essentials
The Psychology of Testing
What makes a Tester?
Communication with Developers
A good relationship is vital
Developers need to keep testers up-to-date with changes to the
application
Testers need to inform developers of defects to allow fixes to be
applied
The Psychology of Testing
What makes a good tester?
Communication with Management
Managers need progress reports
The best way is through Metrics
Number of Tests planned and prepared
Number of tests executed to date
Number of defects raised & fixed
How long planning and preparation and execution stages take
The Psychology of Testing
Testing Independence
It is important that testing is separate from development
The developer is likely to confirm adherence not deviation
The developer will make assumptions
Levels of Independence
Low – Developers write their own tests
Medium – tests are written by another developer
High – Tests are written by an independent body
Utopia – tests generated automatically
The Psychology of Testing
Testers require a particular set of skills
The desire to break things
The desire to explore and experiment
Communication
Questioning
Testing requires a different mentality to development
“Destroying” things rather than creating them
Testing should be separate from development
Re-Testing and Regression Testing
Re-Testing is the re-running of failed tests once a fix has been
implemented to check that the fix has worked
Regression Testing is running a wider test suite to check for
unexpected errors in unchanged code
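The distinction can be sketched as two selection rules over the previous run’s results (test names are hypothetical):

```python
# Re-testing: re-run only the tests that failed, to confirm the fix worked.
def select_retests(last_results):
    return [name for name, passed in last_results.items() if not passed]

# Regression testing: re-run the wider suite, including tests that
# passed, to catch unexpected errors in unchanged code.
def select_regression(last_results):
    return list(last_results)

last_run = {"login": True, "checkout": False, "search": True}
print(select_retests(last_run))     # ['checkout']
print(select_regression(last_run))  # ['login', 'checkout', 'search']
```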
Re-Testing
The need for re-testing needs to be planned and
designed for:
Schedules need to allow for re-testing
Tests need to be easily re-run
Test Data needs to be re-usable
The environment needs to be easily restored
Regression Testing
Tests will need to be re-run when checking software upgrades
Regression tests should be run whenever there is a change to
the software or the environment
Regression tests are executed to prove aspects of a system
have not changed
Regression testing is a vital testing technique
Selecting cases for regression
Tests for areas that change regularly
Tests of functions that have a high level of faults
Regression Testing is the ideal foundation for Automation
Regression Testing Strategy
Several factors to determine strategy:
How many test cases in regression test set?
What criteria should be used to select them?
When will regression testing be performed?
Whose responsibility is it?
Why should it be a continuous activity?
Expected Results
In this session we will:
Understand the need to define expected results
Understand where expected results can be found
Expected Results
Expected Results = Expected Outcomes
Identify required behaviour
If the expected outcome of a test is not defined
the actual output may be misinterpreted
Running a test with only a general concept of the
outcome is fatal
It is vital that Expected Results are defined with
the tests before they can be used
You cannot decide whether a test has passed
just because it looks right
Expected Results
Summary
Without defining expected results how do you know if a
test has passed or failed?
Expected Results can be found in the system
specifications and by asking experienced business
users
Prioritisation of Tests
In this session we will
Understand why we need to prioritise tests
Understand how we decide the priority of individual tests
Prioritisation of Tests
It is not possible to test everything
Therefore errors will get through to the Live System
We must do the best testing possible in the time available
This means we must prioritise and focus testing on those priorities
Prioritisation of Tests
Aspects to Consider
Severity
Probability
Visibility
Priority of Requirements
Frequency of Change
Vulnerability to error
Technical Complexity
Time and Resources
Prioritisation of Tests
Business Criticality
What elements of the application are essential to the
success of the organization?
Customer Factors
How visible would a failure be?
What does the customer want?
Technical Factors
How complex is it?
How likely is an error?
How often does this change?
Prioritisation of Tests
Summary
There will never be enough time to complete all tests
Therefore the tests covering the areas deemed most important
(to the business, highest risk) must be run first where possible
Models for Testing
There are many approaches to testing.
Two models widely used are:
Verification, Validation and Testing (VV&T)
The V-Model
V-Model
The most commonly used model in testing
It represents the software development lifecycle
Shows the various stages in development and testing
Shows the relationship between the various stages
Models for Testing
VV&T
Verification
• The process of evaluating a system or component to determine
whether the products of the given development phase satisfy the
conditions imposed at the start of the phase
Validation
• Determination of the correctness of the products of software
development with respect to the user needs and requirements
Testing
• The process of exercising software to verify that it satisfies
specified requirements and to detect faults
Testing in the Lifecycle
Types of Lifecycle
Sequential Lifecycle – Waterfall, V-Model
Iterative Lifecycles – Spiral, Pre-planned incremental
delivery
Rapid Development Models – RAD, DSDM (Agile)
Evolutionary
Object Oriented (OO)
Extreme Programming (XP)
High Level Test Planning
In this session we will
Look at how a test plan is put together
Understand how it should be used and maintained
Understand why they are so important to a testing project
High Level Test Planning
What is a test plan?
“A document describing the scope, approach, resources and
schedule of intended testing activities. It identifies test items,
the features to be tested, the testing tasks, who will do each
task and any risks requiring contingency planning”
A project plan for testing
Covering all aspects of testing
A “living” document that should change as testing progresses
High Level Test Planning
IEEE 829 – Standard for Test Documentation
Institute of Electrical & Electronic Engineers
• Standard for Test Documentation
• Includes documentation templates for:
– Test Plan: Plan how the testing will proceed.
– Test Design Specification: Decide what needs to be tested.
– Test Case Specification: Create the tests to be run.
– Test Script: Describe how the tests are run.
– Test Item Transmittal Report: List of items released for testing.
– Test Log: Record the details of tests in time order.
– Test Incident Report: Details events that need to be investigated.
– Test Summary Report: Summarise and evaluate tests.
High Level Test Planning
1. Test plan identifier.
2. Introduction.
3. Test items.
4. Features to be tested.
5. Features not to be tested.
6. Approach.
7. Item pass/fail criteria.
8. Suspension/Resumption.
9. Test deliverables.
10. Testing tasks.
11. Environmental needs.
12. Responsibilities.
13. Staffing and training needs.
14. Schedule.
15. Risks and contingencies.
16. Approvals.
High Level Test Planning : Who
Test Staff
• Who will perform the testing
• How will we address skill gaps
• Contingency planning
Support Staff
• Who will install operating systems
• Who will set up and manage test databases
High Level Test Planning : What
Test items
What components need testing
Features to be tested
What features are subject to testing
Features not to be tested
What features are not ready for testing
What features have already been tested
What features will be tested at a later level
Software & Testing Risk
Contingency planning
High Level Test Planning : When
Test Schedule
Use of Gantt charts
Initially based on activities found in test process
Test Deliverables
When do they need to be delivered
What is their frequency of delivery
Work Breakdown
List of tasks required to be performed
All tasks have an associated period of time
High Level Test Planning : How
Test Environment
Physical location
Workspace requirements
Hardware Requirements
Software Requirements
Network Requirements
High Level Test Planning
Test Approach
Arguably the most important section of a test plan
document
Relates to the details of test case design
Often categorized by testing type
Specifies
Techniques to be used in design
Test completion criteria
Measurement techniques
Approach
High Level Test Planning
Summary
Test plans are project plans for testing
They identify why testing is needed, what will be tested,
the scope of this phase of testing, what deliverables testing
will provide and what is required to enable testing to
succeed
They are “living” documents that must evolve as the project
progresses
Day 2
Topics:
Types of Testing
White Box Testing
Black Box Testing
Reviews
Key Testing Phases
Component Testing
The Testing of individual software components. The main focus is on the internal program
structure
Component Integration Testing
Process of combining components into larger assemblies. Looks at the interaction between
components and their interfaces.
Functional System Testing
The process of testing an integrated system to verify that it meets specified requirements
Non-Functional System Testing
Testing of those requirements that do not relate to functionality, e.g. performance
System Integration Testing
Testing performed to expose faults in the interfaces and in the interaction between integrated
systems
Acceptance Testing
Formal testing conducted to enable a user, customer or other authorized entity to determine
whether to accept a system or component
Acceptance Testing
In this session we will
Understand what acceptance testing is
Why you would want to do it
How you would plan it and prepare for it
What you need to actually do it
Understand the different types of acceptance
testing
Acceptance Testing
What is Acceptance Testing?
“Formal testing conducted to enable a user, customer or
other authorized entity to determine whether to accept a
system or component”
Acceptance Testing
What is User Acceptance Testing (UAT)?
Exactly what it says it is
The set-up will represent a working environment
Users of the end product conducting the tests
Covers all aspects of the project, not just the system
Also known as Business Acceptance Testing or Business
Process Testing
Acceptance Testing
Planning UAT
Why Plan?
Things to Consider – what environment is going to be
replicated
Preparing Tests
Manual
Automated
Acceptance Testing
Why Plan?
If you don’t, then how do you know you have achieved
what you set out to do?
Avoids repetition
Test according to code releases
Makes efficient and effective use of time and resources
Acceptance Testing
Preparing your tests:
Take a logical approach
Identify the business processes
Build into everyday business scenarios
Acceptance Testing
Data Requirements
Copied Environments
Created Environments
Running the Tests:
Order of Tests
Confidence Checks
Automated and Manual test runs
Acceptance Testing
Contract Acceptance Testing
“A demonstration of the acceptance criteria”
Acceptance Criteria will have been defined in the
contract
Before the software is accepted it is necessary to
show that it matches its specification as defined in
the criteria
Acceptance Testing
Other types of Acceptance Testing:
Alpha Testing – customers test at the developer’s site
Beta Testing – customers test at their own site
Factory Acceptance Testing (FAT)
Acceptance Testing
Summary
Before software is released it should be
subjected to Acceptance Testing
User representation in testing is VITAL
If the product does not pass UAT then a decision
about implementation needs to be made
Functional System Test
In this session we will
Understand what functional system testing is
Understand the benefits of functional system
testing
Functional System Testing
What is functional system testing?
Testing of the complete system
Ideally done by an independent test team
Two types – functional and non-functional
Functional System Test
A Functional Requirement is
“ A requirement that specifies a function that a system
or system component must perform”
Functional System testing is geared to checking the
function of the system against specification
May be requirements-based or business-process-based
Functional System Testing
Testing based on requirements
Requirements specification used to derive test
case
The system is tested to ensure the requirements are
met
Functional System Test
Testing carried out against Business Processes
Based on expected use of the system
Builds use cases – test cases that reflect actual or
expected use of the system
Non-Functional System Testing
In this session we will
Understand what non-functional System Testing is
Understand the need for Non-Functional System
Testing
Look at the various types of Non-Functional System
Testing
Non-Functional System Testing
Non-Functional System Testing is defined as:
“ Testing of those requirements that do not relate to the
Functionality e.g. Performance, Load, Usability,
Security etc”
Non-Functional System Test
Security Testing
“Testing whether the system meets specified security requirements”
Usability Testing
“Testing the ease with which users can learn and use the product”
Load Testing
“Testing geared to assessing the application’s ability to deal with the
expected throughput of data and users”
Stress Testing
“Assesses individual components by exercising them to and beyond
the limits of expected use”
Performance Testing
“Tests the efficiency of individual components of an application”
Non-Functional System Test
Other Non-Functional System Test
Volume Testing
Recovery Testing
Documentation Testing
Storage Testing
Installability Testing
Non-Functional System Test
Summary
Just because a system’s functions have been tested
doesn’t mean that testing is complete
There are a range of non-functional tests that need to be
performed upon a system
Component Testing
In this session we will
Understand what Component Testing is
Look at the Component Testing Process
Look at the myriad types of Component Testing
Component Testing
What is Component Testing?
“ The testing of an individual software component.
This is also known as Unit Testing”
What is a component?
A minimal software item for which a separate specification
is available
Component Testing
Component Testing is the lowest form of testing
At the bottom of the V-Model
Each component is tested in isolation
Prior to being integrated into the system
Involves testing the code itself
Test the code in the greatest detail
Usually done by the component’s developer
Component Testing
BS7925-2 Standard for Component Testing
Component Test Process:
Component Test Planning
Component Test Specification
Component Test Execution
Component Test Recording
Checking for Component Test Completion
Maintenance Testing
In this session we will
Look at the challenges that testers face when an
application changes post implementation
See how to ensure that maintenance applied to
the system does not cause failures
Maintenance Testing
What is Maintenance Testing?
Testing of changes to existing, established systems. It is
regression testing in the “Live” environment
Checking that the fixes have been made and that the
system has not regressed
Maintenance Testing
Summary
All established systems need maintenance from time
to time
The changed code / function will need to be tested
A regression test will need to be executed to ensure
that the change(s) have been made correctly and that
they have not affected other parts of the system
Impact Analysis is key
Black and White Box Testing
In this session we will
Understand the differences between Black and
White Box testing and where they feature in the
testing lifecycle
Understand how a systematic approach provides
confidence
Understand how tools can be used to improve
and increase productivity
Black and White Box Testing
What is Black Box Testing?
“ Test Case selection that is based on an analysis of the specification of a
component without reference to its internal workings”
What is White Box Testing?
“ Test Case selection based on an analysis of the internal structure of a
component”
Black Box Testing
Concentrates on the Business Function
Can be used throughout testing cycle
Dominates the later stages of testing although is relevant
throughout the development lifecycle
Little / No Knowledge of the underlying code needed
White Box Testing
Also known as Structural Testing
Focuses on Lines of Code
Looks at specific conditions
Looks at the mechanics of the Application
Useful in the early stages of testing
Summary
Black box testing focuses on functionality
White box testing focuses on code
A systematic approach is needed for both:
Tests need to be planned, prepared, executed and verified
Expected results need to be defined and understood
Tools can help increase productivity and quality
Black Box Test Techniques
Why do we need black box test techniques?
Exhaustive testing is not possible
Due to the constraints of time, money and resources
Therefore we must create a sub-set of tests
These must be achievable, but should not reduce coverage
We should also focus on areas of likely risk
Those places where mistakes may occur
Black Box Test Techniques
Each black box test technique has
A method
how to do it
A test case design approach
How to create test cases using the approach
A measurement technique
Except Random & Syntax
See BS7925-2 for detailed information
Black Box Test Techniques
Equivalence Partitioning
Boundary Value Analysis
State Transition
Cause & Effect Graphing
Syntax Testing
Random Testing
Equivalence Partitioning
Uses a model of the component to partition input
and output values into sets
Such that each value within a set can be reasonably
expected to be treated in the same manner
Therefore only one example of that set needs to be
input to or resultant from the component
Equivalence Partitioning
Equivalence Partitioning – Test Case Design
Inputs to the component
Partitions Exercised:
Valid Partitions
Invalid Partitions
Expected Results
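A minimal sketch of the idea, using a hypothetical month field valid from 1 to 12 (the field and its ranges are assumptions for illustration):

```python
# Equivalence partitions for a month input: one valid set and two
# invalid sets; any single member represents its whole partition.
partitions = {
    "invalid_low":  range(-99, 1),   # month < 1
    "valid":        range(1, 13),    # 1 <= month <= 12
    "invalid_high": range(13, 100),  # month > 12
}

def representative(name):
    """Pick one value from a partition -- any member is equally good."""
    r = partitions[name]
    return r[len(r) // 2]

# Three test inputs cover all three partitions:
test_inputs = {name: representative(name) for name in partitions}
```

Because every member of a partition is expected to be treated the same way, three test cases here do the work of thousands of raw input values.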
Boundary Value Analysis
Uses a model of the component to identify the values close
to and on partition boundaries
For input and output, valid and invalid data
Chosen specifically to exercise the boundaries of the
area under test
Boundary Value Analysis – Test Case Design
Inputs to the component
Boundaries to be exercised
Just below
On
Just Above
Expected Results
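Continuing the hypothetical month field from the equivalence-partitioning sketch, the “just below / on / just above” values can be generated mechanically:

```python
# For each boundary, BVA tests just below, on, and just above it
# (integer inputs assumed).
def bva_values(boundary, step=1):
    return [boundary - step, boundary, boundary + step]

# Boundaries of a month field valid from 1 to 12 (hypothetical):
month_cases = sorted({v for b in (1, 12) for v in bva_values(b)})
print(month_cases)  # [0, 1, 2, 11, 12, 13]
```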
Example: Car Insurance
Input Partitions & Boundaries
Applicant’s Age – boundaries: 21, 25, 41, 60; partitions: 0-20, 21-24, 25-40, 41-60, >60
Parked – partitions: Street, Garage
Convictions – partitions: None, Parking, Speeding, Banned
Health – partitions: No Issues, Some Issues, Serious Issues
Output Partitions & Boundaries
Total Score – boundaries: 4, 8, 13, 23, 37; partitions: 4-7 (cheap), 8-12 (normal), 13-22 (pricey), 23-37 (refused)
Key to Partitions Table

Partition | Input/Output | Valid/Invalid | Description
P1  | Input  | Valid | Age = 0-20
P2  | Input  | Valid | Age = 21-24
P3  | Input  | Valid | Age = 25-40
P4  | Input  | Valid | Age = 41-60
P5  | Input  | Valid | Age = >60
P6  | Input  | Valid | Parked = Street
P7  | Input  | Valid | Parked = Garaged
P8  | Input  | Valid | Convictions = None
P9  | Input  | Valid | Convictions = Parking
P10 | Input  | Valid | Convictions = Speeding
P11 | Input  | Valid | Convictions = Banned
P12 | Input  | Valid | Health = No issues
P13 | Input  | Valid | Health = Some issues
P14 | Input  | Valid | Health = Serious issues
P15 | Output | Valid | Output = Cheap Insurance
P16 | Output | Valid | Output = Normal Insurance
P17 | Output | Valid | Output = Expensive Insurance
P18 | Output | Valid | Output = Refused
Key to Boundaries Table

Boundary | Input/Output | Valid/Invalid | Description
B1  | Input  | Valid | Input age ‘0’
B2  | Input  | Valid | Input age ‘21’
B3  | Input  | Valid | Input age ‘25’
B4  | Input  | Valid | Input age ‘41’
B5  | Input  | Valid | Input age ‘60’
B6  | Output | Valid | Output total ‘3’
B7  | Output | Valid | Output total ‘8’
B8  | Output | Valid | Output total ‘13’
B9  | Output | Valid | Output total ‘23’
B10 | Output | Valid | Output total ‘47’
Boundary Value Analysis TCs

Test Case         | 1         | 2          | 3         | 4         | 5           | 6
Input Age         | -1        | 0          | 1         | 20        | 21          | 22
Input Parked      | Garaged   | Garaged    | Garaged   | Garaged   | Garaged     | Garaged
Input Convictions | None      | None       | None      | None      | None        | None
Input Health      | No issues | No issues  | No issues | No issues | No issues   | No issues
Boundary tested   |           | B1 (Age=0) |           |           | B2 (Age=21) |
Valid/Invalid     | I         | V          | V         | V         | V           | V
Total             | -         | 13         | 13        | 13        | 8           | 8
Exp. Output       | -         | Expensive  | Expensive | Expensive | Normal      | Normal
Progress: Testing Techniques

Progress = (Number of distinct partitions executed / Total number of partitions) × 100%

Progress = (Number of distinct boundary values executed / Total number of boundary values) × 100%
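Both formulas are the same ratio, so a single helper covers partitions and boundary values alike (a sketch, not from the slides):

```python
def coverage_progress(executed, total):
    """Progress = distinct items executed / total items * 100%."""
    if total <= 0:
        raise ValueError("total must be positive")
    return 100.0 * executed / total

# 9 of the 18 insurance partitions exercised so far:
print(coverage_progress(9, 18))  # 50.0
```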
Why Do Both EP And BVA?
Invalid partitions may be easily missed.
If a test fails, is the whole partition wrong, or is a boundary
in the wrong place? You have to test mid-partition anyway.
With BVA we are testing extremes. This does not give
confidence for typical scenarios.
State Transition Testing
May be useful to model the software
State Transition Diagram shows:
States the software can occupy
Transitions between the states
Events that cause the transitions
Actions that result from transitions
The Specification
Example
Windows ‘Folder’ Program
Level 0-Switch Test Cases

Trans# | Start State | Event            | Action            | Finish State
1      | 0F          | Create           | 1 File Selected   | SF
2      | 0F          | Invert Selection | No files selected | 0F
3      | UF          | Invert Selection | 1 File selected   | SF
4      | SF          | Delete           | 1 File Deleted    | 0F
5      | SF          | Invert Selection | No files selected | UF
114. Proprietary and Confidential114
Logical Execution Order
Trans# | Start State | Event            | Action            | Finish State
1      | 0F          | Invert Selection | No files selected | 0F
2      | 0F          | Create           | 1 File Selected   | SF
3      | SF          | Invert Selection | No files selected | UF
4      | UF          | Invert Selection | 1 File selected   | SF
5      | SF          | Delete           | 1 File Deleted    | 0F
115.
Level 1-Switch Test Cases
Test Case    | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9
Start State  | 0F | SF | UF | UF | SF | SF | 0F | 0F | 0F
Input        | IS | IS | IS | IS | D  | D  | C  | C  | IS
Next State   | 0F | UF | SF | SF | 0F | 0F | SF | SF | 0F
Input        | C  | IS | IS | D  | C  | IS | D  | IS | IS
Finish State | SF | SF | UF | 0F | SF | 0F | 0F | UF | 0F
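Level 1-switch test cases can be generated mechanically by chaining every pair of level 0-switch transitions whose states join up. A sketch, with events abbreviated as in the table (C = Create, IS = Invert Selection, D = Delete):

```python
from itertools import product

# Transitions from the level 0-switch table: (start, event, finish).
TRANSITIONS = [
    ("0F", "C",  "SF"),
    ("0F", "IS", "0F"),
    ("UF", "IS", "SF"),
    ("SF", "D",  "0F"),
    ("SF", "IS", "UF"),
]

def one_switch_pairs(transitions):
    """Pairs (a, b) where a's finish state is b's start state."""
    return [(a, b) for a, b in product(transitions, repeat=2)
            if a[2] == b[0]]

pairs = one_switch_pairs(TRANSITIONS)
print(len(pairs))  # 9 chained pairs -> the nine level 1-switch test cases
```

The same idea extends to level 2-switch (triplets) and beyond by chaining longer sequences.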
116.
State Tables
     | R     | S    | CM
S1   | AT/S3 | N/S1 | D/S2
S2   | AD/S4 | N/S2 | T/S1
S3   | N/S3  | T/S1 | N/S3
S4   | N/S4  | D/S2 | N/S4
N represents no output and therefore an invalid transition.
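A state table maps naturally onto a nested dictionary. This sketch (the 'N' cells are modelled as None) derives the number of test cases, states x events, and counts the negative ones:

```python
# Digital-watch state table: state -> event -> (action, next_state),
# with None marking the 'N' (invalid transition) cells.
STATE_TABLE = {
    "S1": {"R": ("AT", "S3"), "S": None,        "CM": ("D", "S2")},
    "S2": {"R": ("AD", "S4"), "S": None,        "CM": ("T", "S1")},
    "S3": {"R": None,         "S": ("T", "S1"), "CM": None},
    "S4": {"R": None,         "S": ("D", "S2"), "CM": None},
}

states = len(STATE_TABLE)
events = len(next(iter(STATE_TABLE.values())))
total = states * events                     # one test case per cell
negative = sum(cell is None
               for row in STATE_TABLE.values() for cell in row.values())
print(total, negative)  # 12 test cases, 6 of them negative
```

This matches the rule on the later slide: number of states (4) times number of events (3) gives 12 test cases.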
117.
Test Basis: Digital Watch
Buttons: Reset (R), Set (S), Change Mode (CM)
Welcome to the digital era!
118.
State Transition Diagram
States: Display Time (S1), Display Date (S2), Change Time (S3), Change Date (S4)
Transitions (event / action):
S1 -> S3: Reset (R) / alter time (AT)
S3 -> S1: Set (S) / display time (T)
S2 -> S4: Reset (R) / alter date (AD)
S4 -> S2: Set (S) / display date (D)
S1 -> S2: Change Mode (CM) / display date (D)
S2 -> S1: Change Mode (CM) / display time (T)
119.
Summary – Black Box Techniques
Black box testing concentrates on testing the features of
the system
Techniques enable us to maximise testing
Create an achievable set of tests that offer maximum
coverage
Ensure possible areas of risk are tested
Black box testing is relevant throughout the testing
process
120.
White Box Testing
In this session we will
Understand what White Box Testing is
Look at some of the different types of White Box
Testing
121.
White Box Testing
What is White Box Testing?
“Test case selection based on an analysis of the
internal structure of a component”
Also known as Glass Box testing or Clear Box
testing
122.
White Box Testing
Why do we need White Box Techniques?
Provide formal structure to testing code
Enable us to measure how much of a component has
been tested
Example
<100 lines of code
100,000,000,000,000 possible paths
At 1,000 tests per second would still take 3,170 years to test all paths
123.
White Box Testing
To plan and design effective cases requires a
knowledge of the
Programming language used
Database(s) used
Operating System(s) used
And ideally knowledge of the code itself
124.
White Box Testing Techniques
BS7925-2 lists all the white box test techniques
Statement Testing
Branch / Decision Testing
Branch Condition Testing
Branch Condition Combination Testing
Modified Condition Decision Testing
Linear Code Sequence & Jump
Data Flow Testing
125.
Statement Testing
“ A test case design technique for a
component in which test cases are designed
to execute statements”
“ Test cases are designed and run with the
intention of executing every statement in a
component”
126.
Branch / Decision Testing
A technique used to execute all branches the
code may take based on decisions made
Test cases designed to ensure all branches &
decision points are covered
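A small illustration of the difference between the two techniques. For the hypothetical function below, one test executes every statement, but branch/decision coverage additionally requires a test that takes the False outcome of the decision:

```python
def shipping_cost(weight_kg):
    cost = 5                 # statement A
    if weight_kg > 10:       # decision point
        cost = cost + 2      # statement B
    return cost              # statement C

# Statement coverage: weight 12 executes statements A, B and C in one test.
assert shipping_cost(12) == 7
# Branch coverage also needs the decision's False outcome (B is skipped).
assert shipping_cost(3) == 5
```

100% branch coverage implies 100% statement coverage, but not the other way round, as the first assertion alone shows.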
127.
Practical Examples
128.
Summary
White box testing can be done immediately after
code is written
Doesn’t need the complete system
Does need knowledge of the code
A combination of techniques is required for a
successful test
Don't rely on just one technique
Control Flow Graphing is a prerequisite
130.
Reviews and Test Process
What is Static Testing?
“Testing of an object without execution on a computer”
How is this done
By reviewing the system deliverables
131.
Reviews and the Test Process
Why Review?
To identify errors as soon as possible in the development
lifecycle
Reviews offer the chance to find errors in the system
specifications
This should lead to
Development productivity improvements
Reduced development time-scales
Lifetime cost reductions
Reduced failure levels
132.
Reviews and the Test Process
When to review?
As soon as an object is ready, before it is used as a
product or the basis for the next step in development
What to review?
Anything and everything can be reviewed
Requirements, System and Program Specifications
should be reviewed prior to publication
System Design deliverables should be reviewed both
in terms of functionality and technical robustness
133.
Reviews and the Test Process
What should be reviewed?
Program Specifications should be reviewed before
construction
Code should be reviewed before execution
Test plans should be reviewed before creating tests
Test Results should be reviewed before implementation
134.
Reviews and the Test Process
The cost of failures
On-going reviews cost approximately 15% of
development budget
This includes activities such as the review itself, metric
analysis & process improvement
Reviews are highly cost effective
Finding and fixing faults in reviews is significantly cheaper
than finding and fixing faults in later stages of testing
135.
Reviews & the Test Process
The benefits of reviews
Reviews save time and money
Development productivity improvements
People take greater care
They have more pride in doing good work
They have a greater understanding of what they are required to
deliver
Reduced development costs
Reduced fault levels
Reduced lifetime costs
136.
Summary
Reviews enable us to ensure that the systems
specification is correct and relates to the user
requirements
Anything generated by a project can be reviewed
In order for them to be effective, reviews must be well
managed
137.
Types of Reviews
A process or meeting during which a work
product, or set of work products, is presented
to project personnel, managers, users or other
interested parties for comment or approval’
BS7925-1
138.
Informal Reviews
Informal Reviews
As the least formal of all reviews, it can occur at any time and is largely unplanned. It includes
conversations at the photocopier, the coffee machine and during breaks. It largely consists of
questions like ‘What do you think of this?’ and normally occurs between peers. No formal
documentation is produced and no record is kept of the review.
139.
Walkthroughs
Walkthroughs
A review process in which a professional leads one or
more members of the development team through a
segment of a document that he or she has written while the
other members ask questions and make comments about
technique, style, possible error, violation of development
standards, and other problems.
140.
Technical Reviews
Technical/Peer Reviews
A formal meeting at which work products, are presented to
interested parties for comments and approval. Participants are
often peers with no management participation. A technical expert
may be present.
141.
Inspections
Inspections
A formal evaluation technique in which documents are
examined in detail by a person or group other than the
author to detect errors, violations of development
standards, and other problems.
142.
The Inspection Process
Fagan’s Inspection Process
Kick-off
Checking
Logging
Editing
Entry
Exit
Process Improvement
Change Request
Software Development Stage
Next Development Stage
143.
Review Purposes
Purposes:
Fault Finding
Education
Consensus
Walkthroughs
Technical/Peer
Reviews
Inspections
144.
Summary
There are various types of reviews
Companies must decide which one(s) are best for them
In order to gain maximum benefit from them they
must be organized and implemented
146.
Static Analysis
In this session we will
Understand what Static Analysis is
Look at some of the elements of Static Analysis
Compiler
Static Analysis Tools
Data Flow Analysis
Control Flow Analysis
Complexity Analysis
147.
Static Analysis
What is Static Analysis?
Analysis of the code without dynamic execution
Attempting to identify errors in the code
Provides metrics on flow through SUT and its
complexity
A form of automation – usually done with tools
148.
Static Analysis
What do we hope to find?
Unreachable code
Undeclared variables
Parameter type mismatches
Uncalled functions and procedures
Possible array boundary violations
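Some of these checks are straightforward to automate. As an illustrative sketch (not a production tool), Python's ast module can flag functions that are defined but never called in a piece of source code:

```python
import ast

# Hypothetical source under analysis: 'unused' is never called.
SOURCE = """
def used():
    return 1

def unused():
    return 2

print(used())
"""

tree = ast.parse(SOURCE)
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
called = {n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
print(sorted(defined - called))  # ['unused']
```

Real static analysers (lint tools, compilers with warnings enabled) perform far deeper checks, including data flow and control flow analysis, but the principle is the same: examine the code without executing it.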
149.
Incident Management
An “incident” is when testing cannot progress as
planned
“any significant, unplanned event that occurs during
the testing that requires subsequent investigation
and / or correction”
The application does not function as anticipated
When actual results differ from expected results
Components are missing
They must be raised against documentation, the
SUT, the test environment or the tests themselves
150.
Raising An Incident
Immediately when a test case fails
• More than one test case may fail due to the same reason,
resulting in duplicate incidents
• But … information is fresh in your mind
After the execution of the test set
• Allows the one-to-many relationship between incidents and test
cases to be facilitated better
• But … information is not so fresh in the mind
152.
Incident Record
Other Incident Information
Name of Tester
Test Case ID
Test Specification ID & version
Test Script ID & version
Software Build
Traced Requirements
Resolution Details (added by developer)
153.
Incident Lifecycle
Lifecycle ensures that an appropriate
process for incident closure is followed
May be used to monitor and improve testing
Status and change histories should be
maintained
Resolution progress should be tracked
154.
Incident Lifecycle
IEEE 1044 – Standard Classification for Software Anomalies
States: Submitted -> Assigned -> Opened -> Resolved -> Closed
Events: Submit, Assign, Open, Resolve
Closure outcomes: Valid, Reject, Duplicate
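A sketch of enforcing such a lifecycle in an incident management system. The transition map below is an assumption based on the states shown on the slide, not a full IEEE 1044 implementation:

```python
# Allowed next states for each lifecycle state (assumed, simplified).
LIFECYCLE = {
    "Submitted": {"Assigned", "Rejected", "Duplicate"},
    "Assigned":  {"Opened"},
    "Opened":    {"Resolved"},
    "Resolved":  {"Closed"},
    "Closed":    set(),
}

class Incident:
    def __init__(self):
        self.status = "Submitted"
        self.history = ["Submitted"]   # change history should be maintained

    def move_to(self, status):
        if status not in LIFECYCLE[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {status}")
        self.status = status
        self.history.append(status)

bug = Incident()
for step in ("Assigned", "Opened", "Resolved", "Closed"):
    bug.move_to(step)
print(bug.history)  # ['Submitted', 'Assigned', 'Opened', 'Resolved', 'Closed']
```

Rejecting an out-of-order move (for example Submitted straight to Closed) is exactly the "facility to ensure the incident lifecycle is followed" mentioned on the incident management systems slide.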
155.
Incident Management Systems
Use a database platform
Each incident is stored as a record in
the incident database
Facilities to ensure incident lifecycle is
followed
Some test management systems have built in
incident management systems
157.
Summary
An incident is “any significant, unplanned
event that occurs during the testing that
requires subsequent investigation and / or
correction”
Incidents should be recorded and tracked
Analysis of incidents will enable us to see
where problems arose and to aid in test
process improvement
159.
Risk-Based Testing
Currently a software testing buzz word
Risk determines the testing schedule
Higher risk test items are tested first
Higher risk test items are tested well
Dependent on thorough Risk Management
160.
Three main activities:
Identification of risk
– Does it exist?
Analysis of risk
– How serious could it be?
Mitigation of risk
– What should we do about it?
161.
Risk Analysis
Risk Severity = Likelihood x Impact
Risk Exposure = Severity x Frequency
If we use our quantitative values for risk then
we are said to be doing ‘Risk-based testing’
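A sketch of risk-based prioritisation using these formulas. The item names and scores below are invented for illustration:

```python
def risk_exposure(likelihood, impact, frequency=1):
    """Severity = likelihood x impact; exposure = severity x frequency."""
    return likelihood * impact * frequency

# Rank test items so higher-risk items are tested first (and tested well).
items = {
    "payment": risk_exposure(3, 5, frequency=4),   # 60
    "login":   risk_exposure(4, 4, frequency=2),   # 32
    "reports": risk_exposure(2, 2, frequency=1),   # 4
}
schedule = sorted(items, key=items.get, reverse=True)
print(schedule)  # ['payment', 'login', 'reports']
```

The sorted order becomes the testing schedule: the quantitative risk values, not gut feeling, decide what gets tested first.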
162.
Project and Product Risks
Example Project Risks
Development team in different location from independent testing
team.
New, unfamiliar technology introduced to the project.
Project team inexperienced or not mature.
Unrealistic time constraints.
Example Product Risks
Failure will cause loss of revenue
Failure will cause loss of life
Failure will affect future business
163.
Testing and Risk
Risk Management affects testing by:
Defining an ordering of tests
Suggesting that some tests are more important
than others
Dictating the depth of testing to be performed
Directing focus to most important tests (MITs) –
Prioritising Tests
164.
Possible Risk Criteria
Impact of failure to stakeholders
Likelihood of failure
Visibility to the wider world
Business priority
Quantity of change undergone
How error prone
Criticality to business
Technical complexity
Difficulty of testing
Cost of testing
166.
Test Estimation, Monitoring & Control
In this session we will
Understand how we estimate how long we need for
testing
Understand how we monitor the progress of testing
once it has started
Understand what steps we take to ensure that the
effort progresses as smoothly as possible
167.
Test Estimation
The same as estimating for any project
We need to identify the number of tasks to be
performed, the length of each test, the skills and
resources required and the various dependencies
Testing has a high degree of dependency
You cannot test something until it has been delivered
Faults found need to be fixed and re-tested
The environment must be available whenever a test is to
be run
168.
Test Estimation
Factors to consider
Risk of failure
Complexity of Code
Criticality
Coverage
Stability of System Under Test
169.
Test Estimation
Testing is a tool to aid Risk Management
Consider the cost to the business of the failure of a feature
Test the most important features as soon as possible
Be prepared to repeat these tests
170.
Test Estimation Techniques
Expert Based Techniques
Previous Experience
Formula Based Techniques
% of development effort
Lines of code
Function Point Analysis
Test Point Analysis
Testing:
A dictionary definition of testing is … ‘A procedure for critical evaluation; a means of determining the presence, quality, or truth of something; a trial’. The way in which this is done in software testing is to begin by establishing what attributes or capabilities the software product should exhibit: these are known as Software Requirements. Software testing makes a comparison between the attributes and capabilities that a software product should exhibit and what it actually does exhibit.
Bill Hetzel:
In 1972, Dr. Bill Hetzel convened the first formal conference on Software Testing at the University of North Carolina. In 1981, “Bill” was teaching a public seminar entitled “Structured Software Testing”. Later his book, with the tongue-in-cheek title “The Complete Guide to Software Testing” was published. Dr. Bill Hetzel is thought to be one of the founders of formal software testing and was instrumental in moving testing from being a secondary function performed by developers to a career path based on scientific principles.
The following terms are taken from British Standard BS7925-1.
Error. A human action that produces an incorrect result.
Fault. A manifestation of an error in software. A fault, if encountered may cause a failure.
Failure. Deviation of the software from its expected delivery or service. [Fenton]
Testing is an unnecessary expense:
It is true that testing is an expensive activity, in some cases 40-50% of overall development costs, but ask any company whose software product has failed due to lack of quality and they will tell you that the expense of tidying up the mess is greater by far. See the next slide for examples of some high profile organisations who paid dearly for software failures.
Software testers are ten a penny:
It is true that almost anyone can give the appearance of performing software testing; however, there is a vast gulf between the information produced and failures uncovered by a qualified software testing professional using formal testing techniques and those produced by testing performed on an uncontrolled, ad-hoc basis.
Software testing is not difficult:
Oh yes it is! In most products with a degree of complexity there may be many possible tests and combinations of tests that could be selected and used to evaluate the product. It is the responsibility of the tester to make an informed decision about which of these tests would be most likely to uncover a failure and what would be the best way to simulate these tests. Add to this the need to understand what the product should actually achieve and you will find that software testing is every bit as demanding as any other role in the software development team.
The slide shows some of those organisations who may have benefited from better and more thorough testing practices.
Ariane Rocket
On June 4, 1996, the maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff. Media reports indicated that the amount lost was half a billion dollars … uninsured. The CNES (French National Centre for Space Studies) and the European Space Agency immediately appointed an international inquiry board, consisting of respected experts from major European countries, who produced their report in hardly more than a month. Its conclusion: the explosion was the result of a software failure.
Mercury Probes
Project Mercury's sub-orbital flights were in 1961, and its orbital flights began in 1962. During its sub-orbital flights it was discovered that its program results were not accurate enough. The reason was traced to one line of FORTRAN code: DO 10 I=1.10, where the programmer clearly intended DO 10 I=1,10. Glenford Myers, in his book "Software Reliability: Principles and Practices" (John Wiley & Sons, 1976), propagates the myth that this was a 'billion dollar error'; however, no known spacecraft was ever lost due to this problem.
There are various lifecycle models that have been developed in order to meet specific development needs. The models specify the various stages of the process and the order in which they are carried out
The waterfall model is a sequential software development process, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design (validation), Construction, Testing and maintenance.
The waterfall development model has its origins in the manufacturing and construction industries; highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.
Rapid Application Development (RAD) refers to a type of software development methodology which uses minimal planning in favor of rapid prototyping. The &quot;planning&quot; of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster, and makes it easier to change requirements.
IEEE829 is a standard for Test Documentation and covers a variety of test documents, one of which is the Test Plan document. By the way, IEEE (pronounced ‘I triple E’), is an acronym that stands for Institute of Electrical and Electronic Engineering.
The IEEE829 Test Plan template consists of 16 sections or ‘clauses’. These are basically headings found within the template under which you would document the relevant planning information.
Not only does the test plan seek to identify those people involved in the testing effort but it also defines their responsibilities and seeks to address any skills gap they may have in regards to the current project. Often a degree of contingency planning is appropriate to address the issue of staff absenteeism (remember most test estimation is based on 100% staff availability) or a reduction in the number of staff available later in the project.
It is also vital to identify support staff, external to the testing team, who may be required to set up our testing environment or address any technical problems that arise during testing. Obviously it is important that they are aware that they may be called upon, and also to plan in advance when their services may be required.
Here we identify the ‘scope’ of the testing effort. Test items are often considered as high-level components of an application. For example, Microsoft Word, Excel, PowerPoint etc could be considered test items of Microsoft Office. Features therefore are services or functionality of these test items. With this definition, the spell checker would be a feature of the Microsoft Word test item.
The question is often asked, ‘Why do we need to specify features not to be tested when we already have identified features to be tested. Surely it is obvious that if a feature is not in the features to be tested section then it is automatically not to be tested?’. Well, the answer to this question is that it makes clear what is beyond the bounds of the system to be tested and rules out the possibility that we forget to add it to the features to be tested section. It would surprise you how often testing effort is wasted by testers on testing items or features that weren’t ready to be tested.
Finally, we have another opportunity to perform more contingency planning. On this occasion we look at risks associated with the project that may impact testing. For example, there is often a risk that the software development team may provide us with the first release of the software after the date on which we had scheduled our test execution to begin. Now we may overrun because they have overrun. What should we do now to make up the time lost?
Here we specify when testing tasks are started, when they should be completed and when deliverables are required. A deliverable is any output of the software testing process and can include test plans, progress reports, test data and so on. This information is often displayed as a Gantt chart which is referenced from the test plan. A Gantt chart graphically depicts timescales for various activities: an example of a Gantt chart can be found in Microsoft Project, a tool in which Gantt charts are used extensively.
So what do we mean by an ‘approach’ at this level of test planning? Well, let me answer this question with a short example: We should know by now that Component testing is fraught with possible pitfalls, the ‘my baby’ syndrome etc. We could approach component testing in a number of ways. Firstly, we could simply let each developer test their own code after its production. Secondly, we could ask the developers to buddy with other developers. We could even have a ‘developer’ whose responsibility is to test all code produced by the development team. These are all examples of different approaches, or ways of performing the testing, in this case the testing of components.
Boundary Value Analysis (BVA) is often referred to as Range Testing and works on the premise that faults tend to occur near boundaries (because boundaries are often represented by decision-making statements within code). To use BVA, first you must identify the boundaries; if you have already performed Equivalence Partitioning then this task is made easier as the boundaries should already have been identified. Next, for each boundary, select the values on both sides of the boundary. In some cases three values can be used if the boundary is considered to be a minimum or maximum value.
It is often useful to create a test condition table to clearly identify and document test conditions that arise from the use of EP and BVA techniques. The condition table can be considered an intermediary step before the creation of test cases and can allow for the relationship between test cases and test conditions to be one-to-many, in other words one test case can be designed in such a way as to cover more than one test condition found in the condition table.
Car Insurance Specification
A car insurance quotation system operates with several input parameters. Each input value has an associated score – the less desirable the input value, the higher the score. The system currently operates with the following inputs and values:
Input 1 - Drivers age:
Under 21: 10 points
21-24: 5 points
25-40: 2 points
41-60: 1 point
Over 60: 6 points
Input 2 - Car Parked:
Garaged: 1 point
Street: 2 points
Input 3 - Drivers convictions:
None: 1 point
Parking: 2 points
Speeding: 10 points
Previously banned: 25 points
Input 4 – Drivers Health:
No Issues: 1 point
Some Issues: 5 points
Serious issues: 10 points
The applicant’s total score will be tallied and the following actions will be taken based on the result:
Total Score <= 7 – Offer cheap insurance
Total Score > 7, <= 12 – Offer normal insurance
Total Score > 12, <= 22 – Offer pricey insurance
Total Score > 22 – Refuse insurance
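The scoring and banding rules above translate directly into code. A sketch (the function name and category labels are illustrative; the points follow the specification):

```python
def quote(age, parked, convictions, health):
    """Score an application per the specification and pick the action."""
    if age < 21:
        score = 10
    elif age <= 24:
        score = 5
    elif age <= 40:
        score = 2
    elif age <= 60:
        score = 1
    else:
        score = 6
    score += {"Garaged": 1, "Street": 2}[parked]
    score += {"None": 1, "Parking": 2, "Speeding": 10,
              "Previously banned": 25}[convictions]
    score += {"No issues": 1, "Some issues": 5, "Serious issues": 10}[health]
    if score <= 7:
        action = "cheap"
    elif score <= 12:
        action = "normal"
    elif score <= 22:
        action = "pricey"
    else:
        action = "refuse"
    return score, action

# BVA test cases 4 and 5 from the earlier table: ages 20 and 21, garaged,
# no convictions, no health issues.
print(quote(20, "Garaged", "None", "No issues"))  # (13, 'pricey')
print(quote(21, "Garaged", "None", "No issues"))  # (8, 'normal')
```

The totals 13 and 8 match the Total row of the boundary value analysis test case table, which is exactly the cross-check such an executable model gives you.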
Draw the partitions and boundaries out for the inputs.
Draw the partitions and boundaries out for the outputs.
The next step is to produce a key to the partitions…
… and then produce a key to the boundaries.
The test case template shows only the first 5 test cases, those covering the boundaries for Age of 0 and 21. Of course we would complete the test set by including 3 test cases for each boundary, giving us a total of 30 test cases (10 boundaries x 3 significant values for each).
Actually, there would probably be only 28 test cases, since test values for total 3 [output boundary 4] and total 48 [output boundary 47] cannot be generated from the inputs and therefore cannot be tested. You should, however, investigate ways of trying to generate these outputs with combinations of the inputs.
As previously mentioned with Equivalence Partitioning, you may wish to use the 1-to-many method of producing test cases from test conditions (boundary values in this case). To do so here, you would attempt to find input combinations that give you a score of an identified boundary value – sounds like hard work to me!!
Progress can also be measured using the formal test case design techniques that we have previously discussed. In the example above, we are using BVA and EP to provide us with Boundary Value Progress and Equivalence Partitioning Progress respectively. Here, the items used in the metric are the boundaries and partitions described by the model (Note: a boundary value corresponds to a test case on a boundary or an incremental distance either side of it).
It is technically correct to say that employing only BVA will also cover some partitions identified during EP. However, invalid partitions such as decimal, special characters etc. are usually identified as part of the EP technique. Also if a test fails, for reporting purposes we may have to identify and execute the middle value anyway.
From the model, test cases can be designed to achieve the desired level of coverage. Level 0-Switch is the term used that refers to test cases being designed based on individual transitions as the test case template describes above. To increase the level of coverage we increase the switch level. Therefore, level 1-switch requires test cases to be designed based on transition pairs, level 2-switch is based on transition triplets, and so on.
In the previous slide the test cases have no predefined execution order. In this case, if they were executed in their current order, 1 to 6, then at various points in the execution we would be required to invoke additional events to bridge the gap between the finishing point of one test case and the starting point of the next test case. It is therefore beneficial to rearrange test cases into an efficient execution order where the completion state of one test case is the starting point of another.
Notice how we now have more test cases for level 1-switch (nine in total) than we had for level 0-switch. Also, notice how our test case template has increased in size to accommodate the second transition (from the second instance of Event to Finish State).
In the previous example, level 0-switch was achieved with six test cases, however only valid transitions have been tested. A more complete test set will also test for invalid transitions. Where our test strategy dictates extensive positive and negative testing, a state table is used to identify and document test cases. In this table, states are listed as rows on the left-hand side and events are listed as columns across the top of the table. The cross-section cells represent the action performed and the resulting state that we are transitioned to. Each cell (shaded in grey) represents a test case. Any cell where the action is ‘N’ is a negative test case indicating that a transition is not possible from the current state to another state using this event. Using a state transition table, how many test cases should we derive? No. of States * No. of Events.
Consider the classic digital watch of the 1980’s. These early digital watches had two primary functions: the first was to display the time and the second was to display the date. Both the time and the date could be modified, indicated by either a flashing time to allow you to change the time, or a flashing date to allow you to change the date. The button on the bottom-left of the watch (Reset) was used to modify either the time or the date. Once a modification was made, the top-left button (Set) was pressed to allow the new time or date to be stored. Finally, the top-right button was used to toggle between displaying the time and date.
The diagram on the slide above depicts a State Transition Diagram for the digital watch. The rectangular boxes represent states that the watch could be in at any given time. The lines represent valid transitions between states, with the arrow head representing the direction of the transition. The labels associated with each transition represent the event initiated and the action performed by the watch in response to this event (the event is above the action in all cases in this diagram).
Reviews are a form of static testing in which there is no need to execute code. They involve desk-checking, proof-reading and data-stepping and may be performed by individuals or within a group. The slide above depicts the four types of reviews, beginning with the least formal, the informal review, and ending with the most formal, the inspection.
The inspection process can begin at any time in any stage of the software development lifecycle. The inspection leader will analyse the items requested or due for inspection to determine if they have met the required standard to begin the inspection process, if not they are returned to the author for correction.
Kick-off Meeting. The inspection leader begins the inspection by identifying those individuals required to form the inspection team and notifies them of the date and time of the kick-off meeting. During the kick-off meeting the inspection leader will clarify the goals of the inspection, distribute the material to be inspected and inform each inspector of their role and responsibilities.
Individual Checking. Each inspector will then perform a period of individual checking on the inspection items, looking for and recording any faults or potential faults encountered.
Logging Meeting. The inspection continues with the logging meeting. Guided by the inspection leader, inspectors discuss their findings and come to agreement on the faults that need to be rectified. This information is recorded and passed to the author of the inspection items.
Editing. It is now the responsibility of the author of the inspection items to make the necessary corrections. Some of these corrections may have an impact on other development artefacts, in which case a change request is submitted and handled through the normal change request process; otherwise the author makes the necessary changes without submitting a change request.
Exit. When the changes have been made, the author informs the inspection leader, who determines whether or not the exit criteria have been met. If they have, this instance of the inspection process is at an end; otherwise another iteration of the editing activity is performed.
Most of us, at some time or another, have performed a walkthrough. In doing so, our primary objective would have been to share information with others. Walkthroughs are often used at the end of the requirements specification activities to describe the project requirements to all members of the software development team; during the requirements specification activity itself, inspections may have been used to detect faults within the documentation being produced.
Technical documentation is the target of technical or peer reviews. Their main purpose is to gain consensus between interested parties (who will be peers of the technical author) and to make decisions on the direction to take on some technical aspect of the software under construction.
An incident can be raised immediately when a test case has failed, or after the execution of the current test set. Both techniques have their advantages and disadvantages, as can be seen on the slide, but it is often better for your clients (developers in this instance) to receive concise incident reports, and the fewer the better, as they have to build incident resolution into their busy schedules.
An incident record contains all the information that a developer needs to reproduce the failure, identify the fault and record resolution information. Each incident record should consist of at least the following:
Unique ID – Used as a reference within other test artefacts
Headline – Acts as a summary of the system failure
State – Current position in the incident lifecycle
Project – The development project the incident relates to
Severity – The estimated impact of the failure on the stakeholders
Priority – The urgency of correction for testing purposes
Owner – The developer responsible for incident resolution
Tester – The tester responsible for finding the incident and re-testing its resolution
Test Info – Test Case ID, Test Script ID, etc., for the purposes of reproduction and re-testing
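The fields listed above could be captured in a simple record type; a minimal sketch in Python, where the types and the sample values are illustrative assumptions of my own:

```python
from dataclasses import dataclass

@dataclass
class IncidentRecord:
    """One incident report; field names mirror the list above."""
    incident_id: str  # Unique ID, referenced from other test artefacts
    headline: str     # One-line summary of the system failure
    state: str        # Current position in the incident lifecycle
    project: str      # Development project the incident relates to
    severity: str     # Estimated impact of the failure on stakeholders
    priority: str     # Urgency of correction for testing purposes
    owner: str        # Developer responsible for resolution
    tester: str       # Tester who found the incident and will re-test it
    test_info: str    # e.g. test case / test script IDs for reproduction

# Hypothetical example record.
rec = IncidentRecord("INC-001", "Login fails for empty password",
                     "Open", "Payroll", "High", "Urgent",
                     "dev.smith", "test.jones", "TC-42 / TS-07")
```

Keeping the record structured like this makes it straightforward for an incident management system to query and report on any field, such as all open incidents of a given severity.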
Each incident report generated should be subject to an incident lifecycle, to ensure that the appropriate process is followed for incident closure. What we don’t want to see is incidents being closed by someone who hasn’t been given the authority to do so, or being closed before proper re-testing has been performed.
Within most incident management systems, the incident lifecycle can be defined using a state transition table. In addition, access rights can be defined for each type of user, stating whether or not they can move incident records from one state to another, or modify incident records, at each stage in the lifecycle: two things that are difficult to enforce manually.
Above is an example of a simple incident lifecycle.
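Such a lifecycle, together with per-role access rights, can be enforced with a small transition table; a minimal sketch assuming illustrative states (Open, Fixed, Closed) and roles, not the ones from the slide:

```python
# (current state, target state) -> roles permitted to make that move.
# States, roles and transitions here are illustrative assumptions.
LIFECYCLE = {
    ("Open",  "Fixed"):  {"developer"},
    ("Fixed", "Closed"): {"tester"},   # only a tester may close, after re-testing
    ("Fixed", "Open"):   {"tester"},   # re-test failed: reopen
}

def move(state, target, role):
    """Move an incident to a new state, enforcing the access rights."""
    allowed = LIFECYCLE.get((state, target), set())
    if role not in allowed:
        raise PermissionError(f"{role} may not move {state} -> {target}")
    return target
```

With the table in place, an unauthorised closure (say, a developer closing their own fix without re-testing) is rejected automatically rather than relying on manual policing.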
These systems automate the process of incident management. They are based on one or more incident databases storing the incidents raised through the user interface, allowing you to manage each incident’s transition through the lifecycle and to collate incident statistics into various queries, graphs and reports.